# Official Model Zoo

PaddlePaddle has curated 600+ high-quality models geared toward industrial practice for developers; they are summarized by area below.
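
As a minimal, hedged illustration (not taken from the suites' own docs), the sketch below loads one of the classification backbones listed later (ResNet50) through the core `paddle.vision.models` API and runs a forward pass; it assumes only that the `paddlepaddle` 2.x package is installed. For full training and deployment pipelines, follow the Quick Start links in the tables below.

```python
# Minimal sketch: load a listed ImageNet classification model (ResNet50)
# via the core PaddlePaddle API and run one forward pass on dummy input.
# Assumes `pip install paddlepaddle`.
import paddle
from paddle.vision.models import resnet50

model = resnet50(pretrained=True)      # downloads ImageNet-pretrained weights
model.eval()

x = paddle.randn([1, 3, 224, 224])     # dummy 224x224 RGB image batch
with paddle.no_grad():
    logits = model(x)                  # shape: [1, 1000]
probs = paddle.nn.functional.softmax(logits, axis=-1)
print(probs.argmax(axis=-1))           # predicted ImageNet class id
```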

### Model Update List (2023.6.20)
| Development Kit | Model List                                                     |
| --------------- | ------------------------------------------------------------ |
| PaddleClas      | [v2.5 Model List](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/model_list.md) |
| PaddleDetection | [v2.6 Model List](https://github.com/paddlepaddle/paddledetection/tree/release/2.6#%E6%A8%A1%E5%9E%8B%E5%BA%93) |
| PaddleSeg       | [v2.8 Model List](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.8/README_CN.md#-%E4%BA%A7%E5%93%81%E7%9F%A9%E9%98%B5) |
| PaddleOCR       | [v2.6 Model List](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/algorithm_overview.md)  [v2.6 PP-Series Model List](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/models_list.md) |
| PaddleGAN       | [Model List](https://github.com/paddlepaddle/paddlegan)        |
| PaddleVideo     | [Model List](https://github.com/PaddlePaddle/PaddleVideo/tree/develop/docs/zh-CN/model_zoo) |
| PaddleNLP       | [Model List](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/paddlenlp/transformers) |
| PaddleSpeech    | [Model List](https://github.com/paddlepaddle/paddlespeech)     |
| PaddleRec       | [Model List](https://github.com/paddlepaddle/paddlerec)        |
| PARL            | [Model List](https://github.com/PaddlePaddle/PARL/tree/develop/examples) |
| PGL             | [Model List](https://github.com/PaddlePaddle/PGL/tree/main/examples) |


### PaddleClas
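
Each row below links to the model's Quick Start page. As a rough, hedged sketch of what those pages cover, the example below classifies an image with one of the listed models via the `paddleclas` wheel package; the argument names (`model_name`, `input_data`) follow the PaddleClas 2.5 whl-package documentation and should be checked against the linked Quick Start pages, and `demo.jpg` is a hypothetical local image path.

```python
# Hedged sketch of the paddleclas wheel-package usage (pip install paddleclas);
# argument names follow the PaddleClas 2.5 docs and may differ between releases.
from paddleclas import PaddleClas

clas = PaddleClas(model_name="PPLCNet_x1_0")   # any model name from the table below
results = clas.predict(input_data="demo.jpg")  # hypothetical local image path
print(next(results))                           # top-k labels and confidence scores
```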
<table>
    <tr>
        <th>No.</th>
        <th>Model</th>
        <th>Paper (Link)</th>
        <th>Abstract</th>
        <th>Dataset / Accuracy</th>
        <th width='10%'>Quick Start</th>
    </tr>
    <tr>
        <td>1</td>
        <td>PPLCNet_x0_25</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ImageNet/Acc 0.5179</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PP-LCNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>2</td>
        <td>PPLCNet_x0_35</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ImageNet/Acc 0.5809</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PP-LCNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>3</td>
        <td>PPLCNet_x0_5</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ImageNet/Acc 0.6314</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PP-LCNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>4</td>
        <td>PPLCNet_x0_75</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ImageNet/Acc 0.6818</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PP-LCNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>5</td>
        <td>PPLCNet_x1_0</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ImageNet/Acc 0.7132</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PP-LCNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>6</td>
        <td>PPLCNet_x1_5</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ImageNet/Acc 0.7371</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PP-LCNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>7</td>
        <td>PPLCNet_x2_0</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ImageNet/Acc 0.7518</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PP-LCNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>8</td>
        <td>PPLCNet_x2_5</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ImageNet/Acc 0.766</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PP-LCNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>9</td>
        <td>SE_ResNeXt50_32x4d</td>
        <td><a href="https://arxiv.org/abs/1709.01507">Squeeze-and-Excitation Networks</a></td>
        <td><details><summary>Abstract</summary><div>The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7844</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SENet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>10</td>
        <td>SE_ResNet18_vd</td>
        <td><a href="https://arxiv.org/abs/1709.01507">Squeeze-and-Excitation Networks</a></td>
        <td><details><summary>Abstract</summary><div>The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7333</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SENet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>11</td>
        <td>SE_ResNet34_vd</td>
        <td><a href="https://arxiv.org/abs/1709.01507">Squeeze-and-Excitation Networks</a></td>
        <td><details><summary>Abstract</summary><div>The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7651</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SENet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>12</td>
        <td>SE_ResNet50_vd</td>
        <td><a href="https://arxiv.org/abs/1709.01507">Squeeze-and-Excitation Networks</a></td>
        <td><details><summary>Abstract</summary><div>The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7952</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SENet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>13</td>
        <td>HRNet_W18_C</td>
        <td><a href="https://arxiv.org/abs/1908.07919">Deep High-Resolution Representation Learning for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{this https URL}}. </div></details></td>
        <td>ImageNet/Acc 0.7692</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HRNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>14</td>
        <td>HRNet_W30_C</td>
        <td><a href="https://arxiv.org/abs/1908.07919">Deep High-Resolution Representation Learning for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{this https URL}}. </div></details></td>
        <td>ImageNet/Acc 0.7804</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HRNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>15</td>
        <td>HRNet_W32_C</td>
        <td><a href="https://arxiv.org/abs/1908.07919">Deep High-Resolution Representation Learning for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{this https URL}}. </div></details></td>
        <td>ImageNet/Acc 0.7828</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HRNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>16</td>
        <td>HRNet_W40_C</td>
        <td><a href="https://arxiv.org/abs/1908.07919">Deep High-Resolution Representation Learning for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{this https URL}}. </div></details></td>
        <td>ImageNet/Acc 0.7877</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HRNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>17</td>
        <td>HRNet_W44_C</td>
        <td><a href="https://arxiv.org/abs/1908.07919">Deep High-Resolution Representation Learning for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{this https URL}}. </div></details></td>
        <td>ImageNet/Acc 0.79</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HRNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>18</td>
        <td>HRNet_W48_C</td>
        <td><a href="https://arxiv.org/abs/1908.07919">Deep High-Resolution Representation Learning for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{this https URL}}. </div></details></td>
        <td>ImageNet/Acc 0.7895</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HRNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>19</td>
        <td>HRNet_W64_C</td>
        <td><a href="https://arxiv.org/abs/1908.07919">Deep High-Resolution Representation Learning for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{this https URL}}. </div></details></td>
        <td>ImageNet/Acc 0.793</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HRNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>20</td>
        <td>SE_ResNeXt101_32x4d</td>
        <td><a href="https://arxiv.org/abs/1709.01507">Squeeze-and-Excitation Networks</a></td>
        <td><details><summary>Abstract</summary><div>The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL.</div></details></td>
        <td>ImageNet/Acc 0.7939</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SENet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>21</td>
        <td>SENet154_vd</td>
        <td><a href="https://arxiv.org/abs/1709.01507">Squeeze-and-Excitation Networks</a></td>
        <td><details><summary>Abstract</summary><div>The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL.</div></details></td>
        <td>ImageNet/Acc 0.814</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SENet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>22</td>
        <td>GoogLeNet</td>
        <td><a href="https://arxiv.org/abs/1409.4842">Going Deeper with Convolutions</a></td>
        <td><details><summary>Abstract</summary><div>We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.  </div></details></td>
        <td>ImageNet/Acc 0.707</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Inception.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>23</td>
        <td>InceptionV3</td>
        <td><a href="https://arxiv.org/abs/1512.00567">Rethinking the Inception Architecture for Computer Vision</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.</div></details></td>
        <td>ImageNet/Acc 0.7914</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Inception.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>24</td>
        <td>InceptionV4</td>
        <td><a href="https://paperswithcode.com/model/inception-v4?variant=inception-v4-1">Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning</a></td>
        <td><details><summary>Abstract</summary><div>Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge </div></details></td>
        <td>ImageNet/Acc 0.8077</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Inception.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>25</td>
        <td>ResNet18</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7098</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>26</td>
        <td>ResNet18_vd</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7226</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>27</td>
        <td>ResNet34</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7457</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>28</td>
        <td>ResNet34_vd</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7598</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>29</td>
        <td>ResNet50</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.765</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>30</td>
        <td>ResNet50_vd</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7912</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>31</td>
        <td>ResNet50_vd-FPGM</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>32</td>
        <td>ResNet50_vd_PACT</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">Quick Start</a></td>
    </tr>
    <tr>
        <td>33</td>
        <td>ResNet50_vd_KL</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>34</td>
        <td>ResNet101</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7756</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>35</td>
        <td>ResNet101_vd</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.8017</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>36</td>
        <td>ResNet152</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7826</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>37</td>
        <td>ResNet152_vd</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.8059</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>38</td>
        <td>ResNet200_vd</td>
        <td><a href="https://arxiv.org/abs/1512.03385">Deep Residual Learning for Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. </div></details></td>
        <td>ImageNet/Acc 0.8093</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>39</td>
        <td>Res2Net50_26w_4s</td>
        <td><a href="https://arxiv.org/abs/1904.01169">Res2Net: A New Multi-scale Backbone Architecture</a></td>
        <td><details><summary>Abstract</summary><div>Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. The source code and trained models are available on this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7933</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Res2Net.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>40</td>
        <td>Res2Net50_14w_8s</td>
        <td><a href="https://arxiv.org/abs/1904.01169">Res2Net: A New Multi-scale Backbone Architecture</a></td>
        <td><details><summary>Abstract</summary><div>Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. The source code and trained models are available on this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7946</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Res2Net.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>41</td>
        <td>Res2Net50_vd_26w_4s</td>
        <td><a href="https://arxiv.org/abs/1904.01169">Res2Net: A New Multi-scale Backbone Architecture</a></td>
        <td><details><summary>Abstract</summary><div>Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. The source code and trained models are available on this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7975</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Res2Net.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>42</td>
        <td>Res2Net101_vd_26w_4s</td>
        <td><a href="https://arxiv.org/abs/1904.01169">Res2Net: A New Multi-scale Backbone Architecture</a></td>
        <td><details><summary>Abstract</summary><div>Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. The source code and trained models are available on this https URL. </div></details></td>
        <td>ImageNet/Acc 0.8064</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Res2Net.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>43</td>
        <td>Res2Net200_vd_26w_4s</td>
        <td><a href="https://arxiv.org/abs/1904.01169">Res2Net: A New Multi-scale Backbone Architecture</a></td>
        <td><details><summary>Abstract</summary><div>Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. The source code and trained models are available on this https URL. </div></details></td>
        <td>ImageNet/Acc 0.8121</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Res2Net.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>44</td>
        <td>ResNeXt50_32x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.7775</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>45</td>
        <td>ResNeXt50_64x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.7843</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>46</td>
        <td>ResNeXt50_vd_32x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.7956</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>47</td>
        <td>ResNeXt50_vd_64x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.8012</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>48</td>
        <td>ResNeXt101_32x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.7865</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>49</td>
        <td>ResNeXt101_64x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.8033</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>50</td>
        <td>ResNeXt101_vd_32x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.7835</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>51</td>
        <td>ResNeXt101_vd_64x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.8078</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>52</td>
        <td>ResNeXt152_32x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.7898</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>53</td>
        <td>ResNeXt152_64x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.7951</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>54</td>
        <td>ResNeXt152_vd_32x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.8072</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>55</td>
        <td>ResNeXt152_vd_64x4d</td>
        <td><a href="https://arxiv.org/abs/1611.05431">Aggregated Residual Transformations for Deep Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online. </div></details></td>
        <td>ImageNet/Acc 0.8108</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>56</td>
        <td>DenseNet121</td>
        <td><a href="https://arxiv.org/abs/1608.06993">Densely Connected Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7566</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DenseNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>57</td>
        <td>DenseNet161</td>
        <td><a href="https://arxiv.org/abs/1608.06993">Densely Connected Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7857</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DenseNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>58</td>
        <td>DenseNet169</td>
        <td><a href="https://arxiv.org/abs/1608.06993">Densely Connected Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7681</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DenseNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>59</td>
        <td>DenseNet201</td>
        <td><a href="https://arxiv.org/abs/1608.06993">Densely Connected Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7763</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DenseNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>60</td>
        <td>DenseNet264</td>
        <td><a href="https://arxiv.org/abs/1608.06993">Densely Connected Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7796</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DenseNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>61</td>
        <td>DPN68</td>
        <td><a href="https://arxiv.org/abs/1707.01629">Dual Path Networks</a></td>
        <td><details><summary>Abstract</summary><div>In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImagNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImagNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications. </div></details></td>
        <td>ImageNet/Acc 0.7678</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DPN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>62</td>
        <td>DPN92</td>
        <td><a href="https://arxiv.org/abs/1707.01629">Dual Path Networks</a></td>
        <td><details><summary>Abstract</summary><div>In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImagNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImagNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications. </div></details></td>
        <td>ImageNet/Acc 0.7985</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DPN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>63</td>
        <td>DPN98</td>
        <td><a href="https://arxiv.org/abs/1707.01629">Dual Path Networks</a></td>
        <td><details><summary>Abstract</summary><div>In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImagNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImagNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications. </div></details></td>
        <td>ImageNet/Acc 0.8059</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DPN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>64</td>
        <td>DPN107</td>
        <td><a href="https://arxiv.org/abs/1707.01629">Dual Path Networks</a></td>
        <td><details><summary>Abstract</summary><div>In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImagNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImagNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications. </div></details></td>
        <td>ImageNet/Acc 0.8089</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DPN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>65</td>
        <td>DPN131</td>
        <td><a href="https://arxiv.org/abs/1707.01629">Dual Path Networks</a></td>
        <td><details><summary>Abstract</summary><div>In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImagNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImagNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications. </div></details></td>
        <td>ImageNet/Acc 0.807</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DPN.md">快速开始</a></td>
    </tr>
    <tr>
        <td>66</td>
        <td>VGG11</td>
        <td><a href="https://arxiv.org/abs/1409.1556">Very Deep Convolutional Networks for Large-Scale Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. </div></details></td>
        <td>ImageNet/Acc 0.693</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/VGG.md">快速开始</a></td>
    </tr>
    <tr>
        <td>67</td>
        <td>VGG13</td>
        <td><a href="https://arxiv.org/abs/1409.1556">Very Deep Convolutional Networks for Large-Scale Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. </div></details></td>
        <td>ImageNet/Acc 0.7</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/VGG.md">快速开始</a></td>
    </tr>
    <tr>
        <td>68</td>
        <td>VGG16</td>
        <td><a href="https://arxiv.org/abs/1409.1556">Very Deep Convolutional Networks for Large-Scale Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. </div></details></td>
        <td>ImageNet/Acc 0.72</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/VGG.md">快速开始</a></td>
    </tr>
    <tr>
        <td>69</td>
        <td>VGG19</td>
        <td><a href="https://arxiv.org/abs/1409.1556">Very Deep Convolutional Networks for Large-Scale Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. </div></details></td>
        <td>ImageNet/Acc 0.726</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/VGG.md">快速开始</a></td>
    </tr>
    <tr>
        <td>70</td>
        <td>AlexNet</td>
        <td><a href="https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf">ImageNet Classification with Deep Convolutional Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry</div></details></td>
        <td>ImageNet/Acc 0.567</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Others.md">快速开始</a></td>
    </tr>
    <tr>
        <td>71</td>
        <td>Xception41</td>
        <td><a href="https://arxiv.org/abs/1610.02357">Xception: Deep Learning with Depthwise Separable Convolutions</a></td>
        <td><details><summary>Abstract</summary><div>We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters. </div></details></td>
        <td>ImageNet/Acc 0.793</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Inception.md">快速开始</a></td>
    </tr>
    <tr>
        <td>72</td>
        <td>Xception65</td>
        <td><a href="https://arxiv.org/abs/1610.02357">Xception: Deep Learning with Depthwise Separable Convolutions</a></td>
        <td><details><summary>Abstract</summary><div>We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters. </div></details></td>
        <td>ImageNet/Acc 0.81</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Inception.md">快速开始</a></td>
    </tr>
    <tr>
        <td>73</td>
        <td>Xception71</td>
        <td><a href="https://arxiv.org/abs/1610.02357">Xception: Deep Learning with Depthwise Separable Convolutions</a></td>
        <td><details><summary>Abstract</summary><div>We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters. </div></details></td>
        <td>ImageNet/Acc 0.8111</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Inception.md">快速开始</a></td>
    </tr>
    <tr>
        <td>74</td>
        <td>Xception41_deeplab</td>
        <td><a href="https://arxiv.org/abs/1610.02357">Xception: Deep Learning with Depthwise Separable Convolutions</a></td>
        <td><details><summary>Abstract</summary><div>We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters. </div></details></td>
        <td>ImageNet/Acc 0.7955</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Inception.md">快速开始</a></td>
    </tr>
    <tr>
        <td>75</td>
        <td>Xception65_deeplab</td>
        <td><a href="https://arxiv.org/abs/1610.02357">Xception: Deep Learning with Depthwise Separable Convolutions</a></td>
        <td><details><summary>Abstract</summary><div>We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters. </div></details></td>
        <td>ImageNet/Acc 0.8032</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Inception.md">快速开始</a></td>
    </tr>
    <tr>
        <td>76</td>
        <td>DarkNet53</td>
        <td><a href="https://arxiv.org/pdf/1804.02767.pdf">YOLOv3: An Incremental Improvement</a></td>
        <td><details><summary>Abstract</summary><div>We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL</div></details></td>
        <td>ImageNet/Acc 0.78</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Others.md">快速开始</a></td>
    </tr>
    <tr>
        <td>77</td>
        <td>EfficientNetB0</td>
        <td><a href="https://arxiv.org/abs/1905.11946">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7738</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/EfficientNet.md">快速开始</a></td>
    </tr>
    <tr>
        <td>78</td>
        <td>EfficientNetB1</td>
        <td><a href="https://arxiv.org/abs/1905.11946">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7915</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/EfficientNet.md">快速开始</a></td>
    </tr>
    <tr>
        <td>79</td>
        <td>EfficientNetB2</td>
        <td><a href="https://arxiv.org/abs/1905.11946">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7985</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/EfficientNet.md">快速开始</a></td>
    </tr>
    <tr>
        <td>80</td>
        <td>EfficientNetB3</td>
        <td><a href="https://arxiv.org/abs/1905.11946">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.8115</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/EfficientNet.md">快速开始</a></td>
    </tr>
    <tr>
        <td>81</td>
        <td>EfficientNetB4</td>
        <td><a href="https://arxiv.org/abs/1905.11946">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.8285</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/EfficientNet.md">快速开始</a></td>
    </tr>
    <tr>
        <td>82</td>
        <td>EfficientNetB5</td>
        <td><a href="https://arxiv.org/abs/1905.11946">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.8362</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/EfficientNet.md">快速开始</a></td>
    </tr>
    <tr>
        <td>83</td>
        <td>EfficientNetB6</td>
        <td><a href="https://arxiv.org/abs/1905.11946">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.84</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/EfficientNet.md">快速开始</a></td>
    </tr>
    <tr>
        <td>84</td>
        <td>EfficientNetB7</td>
        <td><a href="https://arxiv.org/abs/1905.11946">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.843</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/EfficientNet.md">快速开始</a></td>
    </tr>
    <tr>
        <td>85</td>
        <td>SqueezeNet1_0</td>
        <td><a href="https://arxiv.org/abs/1602.07360">SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and &lt;0.5MB model size</a></td>
        <td><details><summary>Abstract</summary><div>Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).The SqueezeNet architecture is available for download here: this https URL</div></details></td>
        <td>ImageNet/Acc 0.596</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Others.md">快速开始</a></td>
    </tr>
    <tr>
        <td>86</td>
        <td>SqueezeNet1_1</td>
        <td><a href="https://arxiv.org/abs/1602.07360">SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and &lt;0.5MB model size</a></td>
        <td><details><summary>Abstract</summary><div>Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).The SqueezeNet architecture is available for download here: this https URL</div></details></td>
        <td>ImageNet/Acc 0.601</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Others.md">快速开始</a></td>
    </tr>
    <tr>
        <td>87</td>
        <td>MobileNetV1</td>
        <td><a href="https://arxiv.org/abs/1704.04861">MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications</a></td>
        <td><details><summary>Abstract</summary><div> We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization. </div></details></td>
        <td>ImageNet/Acc 0.7099</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV1.md">快速开始</a></td>
    </tr>
    <tr>
        <td>88</td>
        <td>MobileNetV1_x0_25</td>
        <td><a href="https://arxiv.org/abs/1704.04861">MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications</a></td>
        <td><details><summary>Abstract</summary><div> We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization. </div></details></td>
        <td>ImageNet/Acc 0.5143</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV1.md">快速开始</a></td>
    </tr>
    <tr>
        <td>89</td>
        <td>MobileNetV1_x0_5</td>
        <td><a href="https://arxiv.org/abs/1704.04861">MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications</a></td>
        <td><details><summary>Abstract</summary><div> We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization. </div></details></td>
        <td>ImageNet/Acc 0.6352</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV1.md">快速开始</a></td>
    </tr>
    <tr>
        <td>90</td>
        <td>MobileNetV1_x0_75</td>
        <td><a href="https://arxiv.org/abs/1704.04861">MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications</a></td>
        <td><details><summary>Abstract</summary><div> We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization. </div></details></td>
        <td>ImageNet/Acc 0.6881</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV1.md">快速开始</a></td>
    </tr>
    <tr>
        <td>91</td>
        <td>MobileNetV2</td>
        <td><a href="https://arxiv.org/abs/1801.04381">MobileNetV2: Inverted Residuals and Linear Bottlenecks</a></td>
        <td><details><summary>Abstract</summary><div>In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters </div></details></td>
        <td>ImageNet/Acc 0.7215</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV2.md">快速开始</a></td>
    </tr>
    <tr>
        <td>92</td>
        <td>MobileNetV2_x0_25</td>
        <td><a href="https://arxiv.org/abs/1801.04381">MobileNetV2: Inverted Residuals and Linear Bottlenecks</a></td>
        <td><details><summary>Abstract</summary><div>In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters </div></details></td>
        <td>ImageNet/Acc 0.5321</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV2.md">快速开始</a></td>
    </tr>
    <tr>
        <td>93</td>
        <td>MobileNetV2_x0_5</td>
        <td><a href="https://arxiv.org/abs/1801.04381">MobileNetV2: Inverted Residuals and Linear Bottlenecks</a></td>
        <td><details><summary>Abstract</summary><div>In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters </div></details></td>
        <td>ImageNet/Acc 0.6503</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV2.md">快速开始</a></td>
    </tr>
    <tr>
        <td>94</td>
        <td>MobileNetV2_x0_75</td>
        <td><a href="https://arxiv.org/abs/1801.04381">MobileNetV2: Inverted Residuals and Linear Bottlenecks</a></td>
        <td><details><summary>Abstract</summary><div>In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters </div></details></td>
        <td>ImageNet/Acc 0.6983</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV2.md">快速开始</a></td>
    </tr>
    <tr>
        <td>95</td>
        <td>MobileNetV2_x1_5</td>
        <td><a href="https://arxiv.org/abs/1801.04381">MobileNetV2: Inverted Residuals and Linear Bottlenecks</a></td>
        <td><details><summary>Abstract</summary><div>In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters </div></details></td>
        <td>ImageNet/Acc 0.7412</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV2.md">快速开始</a></td>
    </tr>
    <tr>
        <td>96</td>
        <td>MobileNetV2_x2_0</td>
        <td><a href="https://arxiv.org/abs/1801.04381">MobileNetV2: Inverted Residuals and Linear Bottlenecks</a></td>
        <td><details><summary>Abstract</summary><div>In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3.The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters </div></details></td>
        <td>ImageNet/Acc 0.7523</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV2.md">快速开始</a></td>
    </tr>
    <tr>
        <td>97</td>
        <td>MobileNetV3_large_x0_<br/>35</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.6432</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>98</td>
        <td>MobileNetV3_large_x0_</br>5</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.6924</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>99</td>
        <td>MobileNetV3_large_x0_</br>75</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7314</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>100</td>
        <td>MobileNetV3_large_x1_</br>0</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7532</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>101</td>
        <td>MobileNetV3_large_x1_</br>0-FPGM</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>102</td>
        <td>MobileNetV3_large_x1_</br>0_PACT</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>103</td>
        <td>MobileNetV3_large_x1_</br>0_KL</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>104</td>
        <td>MobileNetV3_large_x1_</br>25</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7067</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>105</td>
        <td>MobileNetV3_small_x0_</br>35</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.5303</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>106</td>
        <td>MobileNetV3_small_x0_</br>5</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.5921</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>107</td>
        <td>MobileNetV3_small_x0_</br>75</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.6602</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>108</td>
        <td>MobileNetV3_small_x1_</br>0</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.6824</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>109</td>
        <td>MobileNetV3_small_x1_</br>25</td>
        <td><a href="https://arxiv.org/abs/1905.02244">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. </div></details></td>
        <td>ImageNet/Acc 0.7067</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileNetV3.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>110</td>
        <td>ShuffleNetV2_swish</td>
        <td><a href="https://arxiv.org/abs/1807.11164">ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design</a></td>
        <td><details><summary>Abstract</summary><div>Currently, the neural network architecture design is mostly guided by the \emph{indirect} metric of computation complexity, i.e., FLOPs. However, the \emph{direct} metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical \emph{guidelines} for efficient network design. Accordingly, a new architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff. </div></details></td>
        <td>ImageNet/Acc 0.7003</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ShuffleNetV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>111</td>
        <td>ShuffleNetV2_x0_25</td>
        <td><a href="https://arxiv.org/abs/1807.11164">ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design</a></td>
        <td><details><summary>Abstract</summary><div>Currently, the neural network architecture design is mostly guided by the \emph{indirect} metric of computation complexity, i.e., FLOPs. However, the \emph{direct} metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical \emph{guidelines} for efficient network design. Accordingly, a new architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff. </div></details></td>
        <td>ImageNet/Acc 0.499</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ShuffleNetV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>112</td>
        <td>ShuffleNetV2_x0_33</td>
        <td><a href="https://arxiv.org/abs/1807.11164">ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design</a></td>
        <td><details><summary>Abstract</summary><div>Currently, the neural network architecture design is mostly guided by the \emph{indirect} metric of computation complexity, i.e., FLOPs. However, the \emph{direct} metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical \emph{guidelines} for efficient network design. Accordingly, a new architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff. </div></details></td>
        <td>ImageNet/Acc 0.5373</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ShuffleNetV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>113</td>
        <td>ShuffleNetV2_x0_5</td>
        <td><a href="https://arxiv.org/abs/1807.11164">ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design</a></td>
        <td><details><summary>Abstract</summary><div>Currently, the neural network architecture design is mostly guided by the \emph{indirect} metric of computation complexity, i.e., FLOPs. However, the \emph{direct} metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical \emph{guidelines} for efficient network design. Accordingly, a new architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff. </div></details></td>
        <td>ImageNet/Acc 0.6032</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ShuffleNetV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>114</td>
        <td>ShuffleNetV2_x1_0</td>
        <td><a href="https://arxiv.org/abs/1807.11164">ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design</a></td>
        <td><details><summary>Abstract</summary><div>Currently, the neural network architecture design is mostly guided by the \emph{indirect} metric of computation complexity, i.e., FLOPs. However, the \emph{direct} metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical \emph{guidelines} for efficient network design. Accordingly, a new architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff. </div></details></td>
        <td>ImageNet/Acc 0.688</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ShuffleNetV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>115</td>
        <td>ShuffleNetV2_x1_5</td>
        <td><a href="https://arxiv.org/abs/1807.11164">ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design</a></td>
        <td><details><summary>Abstract</summary><div>Currently, the neural network architecture design is mostly guided by the \emph{indirect} metric of computation complexity, i.e., FLOPs. However, the \emph{direct} metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical \emph{guidelines} for efficient network design. Accordingly, a new architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff. </div></details></td>
        <td>ImageNet/Acc 0.7163</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ShuffleNetV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>116</td>
        <td>ShuffleNetV2_x2_0</td>
        <td><a href="https://arxiv.org/abs/1807.11164">ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design</a></td>
        <td><details><summary>Abstract</summary><div>Currently, the neural network architecture design is mostly guided by the \emph{indirect} metric of computation complexity, i.e., FLOPs. However, the \emph{direct} metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical \emph{guidelines} for efficient network design. Accordingly, a new architecture is presented, called \emph{ShuffleNet V2}. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff. </div></details></td>
        <td>ImageNet/Acc 0.7315</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ShuffleNetV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>117</td>
        <td>CSPDarkNet53</td>
        <td><a href="https://arxiv.org/abs/1911.11929">CSPNet: A New Backbone that can Enhance Learning Capability of CNN</a></td>
        <td><details><summary>Abstract</summary><div>Neural networks have enabled state-of-the-art approaches to achieve incredible results on computer vision tasks such as object detection. However, such success greatly relies on costly computation resources, which hinders people with cheap devices from appreciating the advanced technology. In this paper, we propose Cross Stage Partial Network (CSPNet) to mitigate the problem that previous works require heavy inference computations from the network architecture perspective. We attribute the problem to the duplicate gradient information within network optimization. The proposed networks respect the variability of the gradients by integrating feature maps from the beginning and the end of a network stage, which, in our experiments, reduces computations by 20% with equivalent or even superior accuracy on the ImageNet dataset, and significantly outperforms state-of-the-art approaches in terms of AP50 on the MS COCO object detection dataset. The CSPNet is easy to implement and general enough to cope with architectures based on ResNet, ResNeXt, and DenseNet. Source code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7725</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/CSPNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>118</td>
        <td>GhostNet_x0_5</td>
        <td><a href="https://arxiv.org/abs/1911.11907">GhostNet: More Features from Cheap Operations</a></td>
        <td><details><summary>Abstract</summary><div>Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative of convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at this https URL</div></details></td>
        <td>ImageNet/Acc 0.6688</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/GhostNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>119</td>
        <td>GhostNet_x1_0</td>
        <td><a href="https://arxiv.org/abs/1911.11907">GhostNet: More Features from Cheap Operations</a></td>
        <td><details><summary>Abstract</summary><div>Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative of convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at this https URL</div></details></td>
        <td>ImageNet/Acc 0.7402</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/GhostNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>120</td>
        <td>GhostNet_x1_3</td>
        <td><a href="https://arxiv.org/abs/1911.11907">GhostNet: More Features from Cheap Operations</a></td>
        <td><details><summary>Abstract</summary><div>Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative of convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at this https URL</div></details></td>
        <td>ImageNet/Acc 0.7579</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/GhostNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>121</td>
        <td>RegNet50</td>
        <td><a href="https://arxiv.org/abs/2101.00590">RegNet: Self-Regulated Network for Image Classification</a></td>
        <td><details><summary>Abstract</summary><div>The ResNet and its variants have achieved remarkable successes in various computer vision tasks. Despite its success in making gradient flow through building blocks, the simple shortcut connection mechanism limits the ability of re-exploring new potentially complementary features due to the additive function. To address this issue, in this paper, we propose to introduce a regulator module as a memory mechanism to extract complementary features, which are further fed to the ResNet. In particular, the regulator module is composed of convolutional RNNs (e.g., Convolutional LSTMs or Convolutional GRUs), which are shown to be good at extracting Spatio-temporal information. We named the new regulated networks as RegNet. The regulator module can be easily implemented and appended to any ResNet architecture. We also apply the regulator module for improving the Squeeze-and-Excitation ResNet to show the generalization ability of our method. Experimental results on three image classification datasets have demonstrated the promising performance of the proposed architecture compared with the standard ResNet, SE-ResNet, and other state-of-the-art architectures.</div></details></td>
        <td>ImageNet/Acc 0.7833</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/RegNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>122</td>
        <td>DLA169</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7809</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>123</td>
        <td>DLA60x_c</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.6645</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>124</td>
        <td>DLA102x2</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7885</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>125</td>
        <td>DLA102</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7893</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>126</td>
        <td>DLA60x</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7753</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>127</td>
        <td>DLA60</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.761</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>128</td>
        <td>DLA46_c</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.6321</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>129</td>
        <td>DLA34</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7603</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>130</td>
        <td>DLA102x</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.781</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>131</td>
        <td>DLA46x_c</td>
        <td><a href="https://arxiv.org/abs/1707.06484">Deep Layer Aggregation</a></td>
        <td><details><summary>Abstract</summary><div>Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.6321</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DLA.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>132</td>
        <td>ReXNet_1_5</td>
        <td><a href="https://arxiv.org/abs/2007.00992">Rethinking Channel Dimensions for Efficient Model Design</a></td>
        <td><details><summary>Abstract</summary><div>Designing an efficient model within the limited computational cost is challenging. We argue the accuracy of a lightweight model has been further limited by the design convention: a stage-wise configuration of the channel dimensions, which looks like a piecewise linear function of the network stage. In this paper, we study an effective channel dimension configuration towards better performance than the convention. To this end, we empirically study how to design a single layer properly by analyzing the rank of the output feature. We then investigate the channel configuration of a model by searching network architectures concerning the channel configuration under the computational cost restriction. Based on the investigation, we propose a simple yet effective channel configuration that can be parameterized by the layer index. As a result, our proposed model following the channel parameterization achieves remarkable performance on ImageNet classification and transfer learning tasks including COCO object detection, COCO instance segmentation, and fine-grained classifications. Code and ImageNet pretrained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.8006</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ReXNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>133</td>
        <td>ReXNet_1_0</td>
        <td><a href="https://arxiv.org/abs/2007.00992">Rethinking Channel Dimensions for Efficient Model Design</a></td>
        <td><details><summary>Abstract</summary><div>Designing an efficient model within the limited computational cost is challenging. We argue the accuracy of a lightweight model has been further limited by the design convention: a stage-wise configuration of the channel dimensions, which looks like a piecewise linear function of the network stage. In this paper, we study an effective channel dimension configuration towards better performance than the convention. To this end, we empirically study how to design a single layer properly by analyzing the rank of the output feature. We then investigate the channel configuration of a model by searching network architectures concerning the channel configuration under the computational cost restriction. Based on the investigation, we propose a simple yet effective channel configuration that can be parameterized by the layer index. As a result, our proposed model following the channel parameterization achieves remarkable performance on ImageNet classification and transfer learning tasks including COCO object detection, COCO instance segmentation, and fine-grained classifications. Code and ImageNet pretrained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7746</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ReXNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>134</td>
        <td>ReXNet_3_0</td>
        <td><a href="https://arxiv.org/abs/2007.00992">Rethinking Channel Dimensions for Efficient Model Design</a></td>
        <td><details><summary>Abstract</summary><div>Designing an efficient model within the limited computational cost is challenging. We argue the accuracy of a lightweight model has been further limited by the design convention: a stage-wise configuration of the channel dimensions, which looks like a piecewise linear function of the network stage. In this paper, we study an effective channel dimension configuration towards better performance than the convention. To this end, we empirically study how to design a single layer properly by analyzing the rank of the output feature. We then investigate the channel configuration of a model by searching network architectures concerning the channel configuration under the computational cost restriction. Based on the investigation, we propose a simple yet effective channel configuration that can be parameterized by the layer index. As a result, our proposed model following the channel parameterization achieves remarkable performance on ImageNet classification and transfer learning tasks including COCO object detection, COCO instance segmentation, and fine-grained classifications. Code and ImageNet pretrained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.8209</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ReXNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>135</td>
        <td>ReXNet_2_0</td>
        <td><a href="https://arxiv.org/abs/2007.00992">Rethinking Channel Dimensions for Efficient Model Design</a></td>
        <td><details><summary>Abstract</summary><div>Designing an efficient model within the limited computational cost is challenging. We argue the accuracy of a lightweight model has been further limited by the design convention: a stage-wise configuration of the channel dimensions, which looks like a piecewise linear function of the network stage. In this paper, we study an effective channel dimension configuration towards better performance than the convention. To this end, we empirically study how to design a single layer properly by analyzing the rank of the output feature. We then investigate the channel configuration of a model by searching network architectures concerning the channel configuration under the computational cost restriction. Based on the investigation, we propose a simple yet effective channel configuration that can be parameterized by the layer index. As a result, our proposed model following the channel parameterization achieves remarkable performance on ImageNet classification and transfer learning tasks including COCO object detection, COCO instance segmentation, and fine-grained classifications. Code and ImageNet pretrained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.8122</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ReXNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>136</td>
        <td>ReXNet_1_3</td>
        <td><a href="https://arxiv.org/abs/2007.00992">Rethinking Channel Dimensions for Efficient Model Design</a></td>
        <td><details><summary>Abstract</summary><div>Designing an efficient model within the limited computational cost is challenging. We argue the accuracy of a lightweight model has been further limited by the design convention: a stage-wise configuration of the channel dimensions, which looks like a piecewise linear function of the network stage. In this paper, we study an effective channel dimension configuration towards better performance than the convention. To this end, we empirically study how to design a single layer properly by analyzing the rank of the output feature. We then investigate the channel configuration of a model by searching network architectures concerning the channel configuration under the computational cost restriction. Based on the investigation, we propose a simple yet effective channel configuration that can be parameterized by the layer index. As a result, our proposed model following the channel parameterization achieves remarkable performance on ImageNet classification and transfer learning tasks including COCO object detection, COCO instance segmentation, and fine-grained classifications. Code and ImageNet pretrained models are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7913</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ReXNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>137</td>
        <td>TNT_small</td>
        <td><a href="https://arxiv.org/abs/2103.00112">Transformer in Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Transformer is a new kind of neural architecture which encodes the input data as powerful features via the attention mechanism. Basically, the visual transformers first divide the input images into several local patches and then calculate both representations and their relationship. Since natural images are of high complexity with abundant detail and color information, the granularity of the patch dividing is not fine enough for excavating features of objects in different scales and locations. In this paper, we point out that the attention inside these local patches are also essential for building visual transformers with high performance and we explore a new architecture, namely, Transformer iN Transformer (TNT). Specifically, we regard the local patches (e.g., 16×16) as "visual sentences" and present to further divide them into smaller patches (e.g., 4×4) as "visual words". The attention of each word will be calculated with other words in the given visual sentence with negligible computational costs. Features of both words and sentences will be aggregated to enhance the representation ability. Experiments on several benchmarks demonstrate the effectiveness of the proposed TNT architecture, e.g., we achieve an 81.5% top-1 accuracy on the ImageNet, which is about 1.7% higher than that of the state-of-the-art visual transformer with similar computational cost. The PyTorch code is available at this https URL, and the MindSpore code is available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.8121</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/TNT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>138</td>
        <td>MixNet_L</td>
        <td><a href="https://arxiv.org/abs/1907.09595">MixConv: Mixed Depthwise Convolutional Kernels</a></td>
        <td><details><summary>Abstract</summary><div>     Depthwise convolution is becoming increasingly popular in modern efficient ConvNets, but its kernel size is often overlooked. In this paper, we systematically study the impact of different kernel sizes, and observe that combining the benefits of multiple kernel sizes can lead to better accuracy and efficiency. Based on this observation, we propose a new mixed depthwise convolution (MixConv), which naturally mixes up multiple kernel sizes in a single convolution. As a simple drop-in replacement of vanilla depthwise convolution, our MixConv improves the accuracy and efficiency for existing MobileNets on both ImageNet classification and COCO object detection. To demonstrate the effectiveness of MixConv, we integrate it into AutoML search space and develop a new family of models, named as MixNets, which outperform previous mobile models including MobileNetV2 [20] (ImageNet top-1 accuracy +4.2%), ShuffleNetV2 [16] (+3.5%), MnasNet [26] (+1.3%), ProxylessNAS [2] (+2.2%), and FBNet [27] (+2.0%). In particular, our MixNet-L achieves a new state-of-the-art 78.9% ImageNet top-1 accuracy under typical mobile settings (600M FLOPS). Code is at this https URL tensorflow/tpu/tree/master/models/official/mnasnet/mixnet </div></details></td>
        <td>ImageNet/Acc 0.786</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MixNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>139</td>
        <td>MixNet_S</td>
        <td><a href="https://arxiv.org/abs/1907.09595">MixConv: Mixed Depthwise Convolutional Kernels</a></td>
        <td><details><summary>Abstract</summary><div>     Depthwise convolution is becoming increasingly popular in modern efficient ConvNets, but its kernel size is often overlooked. In this paper, we systematically study the impact of different kernel sizes, and observe that combining the benefits of multiple kernel sizes can lead to better accuracy and efficiency. Based on this observation, we propose a new mixed depthwise convolution (MixConv), which naturally mixes up multiple kernel sizes in a single convolution. As a simple drop-in replacement of vanilla depthwise convolution, our MixConv improves the accuracy and efficiency for existing MobileNets on both ImageNet classification and COCO object detection. To demonstrate the effectiveness of MixConv, we integrate it into AutoML search space and develop a new family of models, named as MixNets, which outperform previous mobile models including MobileNetV2 [20] (ImageNet top-1 accuracy +4.2%), ShuffleNetV2 [16] (+3.5%), MnasNet [26] (+1.3%), ProxylessNAS [2] (+2.2%), and FBNet [27] (+2.0%). In particular, our MixNet-L achieves a new state-of-the-art 78.9% ImageNet top-1 accuracy under typical mobile settings (600M FLOPS). Code is at this https URL tensorflow/tpu/tree/master/models/official/mnasnet/mixnet </div></details></td>
        <td>ImageNet/Acc 0.7628</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MixNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>140</td>
        <td>MixNet_M</td>
        <td><a href="https://arxiv.org/abs/1907.09595">MixConv: Mixed Depthwise Convolutional Kernels</a></td>
        <td><details><summary>Abstract</summary><div>     Depthwise convolution is becoming increasingly popular in modern efficient ConvNets, but its kernel size is often overlooked. In this paper, we systematically study the impact of different kernel sizes, and observe that combining the benefits of multiple kernel sizes can lead to better accuracy and efficiency. Based on this observation, we propose a new mixed depthwise convolution (MixConv), which naturally mixes up multiple kernel sizes in a single convolution. As a simple drop-in replacement of vanilla depthwise convolution, our MixConv improves the accuracy and efficiency for existing MobileNets on both ImageNet classification and COCO object detection. To demonstrate the effectiveness of MixConv, we integrate it into AutoML search space and develop a new family of models, named as MixNets, which outperform previous mobile models including MobileNetV2 [20] (ImageNet top-1 accuracy +4.2%), ShuffleNetV2 [16] (+3.5%), MnasNet [26] (+1.3%), ProxylessNAS [2] (+2.2%), and FBNet [27] (+2.0%). In particular, our MixNet-L achieves a new state-of-the-art 78.9% ImageNet top-1 accuracy under typical mobile settings (600M FLOPS). Code is at this https URL tensorflow/tpu/tree/master/models/official/mnasnet/mixnet </div></details></td>
        <td>ImageNet/Acc 0.7767</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MixNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>141</td>
        <td>ResNeSt50</td>
        <td><a href="https://arxiv.org/abs/2004.08955">ResNeSt: Split-Attention Networks</a></td>
        <td><details><summary>Abstract</summary><div>While image classification models have recently continued to advance, most downstream applications such as object detection and semantic segmentation still employ ResNet variants as the backbone network due to their simple and modular structure. We present a modular Split-Attention block that enables attention across feature-map groups. By stacking these Split-Attention blocks ResNet-style, we obtain a new ResNet variant which we call ResNeSt. Our network preserves the overall ResNet structure to be used in downstream tasks straightforwardly without introducing additional computational costs. ResNeSt models outperform other networks with similar model complexities. For example, ResNeSt-50 achieves 81.13% top-1 accuracy on ImageNet using a single crop-size of 224 × 224, outperforming the previous best ResNet variant by more than 1% accuracy. This improvement also helps downstream tasks including object detection, instance segmentation and semantic segmentation. For example, by simply replacing the ResNet-50 backbone with ResNeSt-50, we improve the mAP of Faster-RCNN on MS-COCO from 39.3% to 42.3% and the mIoU for DeeplabV3 on ADE20K from 42.1% to 45.1%.</div></details></td>
        <td>ImageNet/Acc 0.8083</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeSt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>142</td>
        <td>ResNeSt50_fast_1s1x64</br>d</td>
        <td><a href="https://arxiv.org/abs/2004.08955">ResNeSt: Split-Attention Networks</a></td>
        <td><details><summary>Abstract</summary><div>While image classification models have recently continued to advance, most downstream applications such as object detection and semantic segmentation still employ ResNet variants as the backbone network due to their simple and modular structure. We present a modular Split-Attention block that enables attention across feature-map groups. By stacking these Split-Attention blocks ResNet-style, we obtain a new ResNet variant which we call ResNeSt. Our network preserves the overall ResNet structure to be used in downstream tasks straightforwardly without introducing additional computational costs. ResNeSt models outperform other networks with similar model complexities. For example, ResNeSt-50 achieves 81.13% top-1 accuracy on ImageNet using a single crop-size of 224 × 224, outperforming the previous best ResNet variant by more than 1% accuracy. This improvement also helps downstream tasks including object detection, instance segmentation and semantic segmentation. For example, by simply replacing the ResNet-50 backbone with ResNeSt-50, we improve the mAP of Faster-RCNN on MS-COCO from 39.3% to 42.3% and the mIoU for DeeplabV3 on ADE20K from 42.1% to 45.1%.</div></details></td>
        <td>ImageNet/Acc 0.8035</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ResNeSt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>143</td>
        <td>RedNet152</td>
        <td><a href="https://arxiv.org/abs/2103.06255">Involution: Inverting the Inherence of Convolution for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. We additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition, powering different deep learning models on several prevalent benchmarks, including ImageNet classification, COCO detection and segmentation, together with Cityscapes segmentation. Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely while compressing the computational cost to 66%, 65%, 72%, and 57% on the above benchmarks, respectively. Code and pre-trained models for all the tasks are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7917</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/RedNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>144</td>
        <td>RedNet38</td>
        <td><a href="https://arxiv.org/abs/2103.06255">Involution: Inverting the Inherence of Convolution for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. We additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition, powering different deep learning models on several prevalent benchmarks, including ImageNet classification, COCO detection and segmentation, together with Cityscapes segmentation. Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely while compressing the computational cost to 66%, 65%, 72%, and 57% on the above benchmarks, respectively. Code and pre-trained models for all the tasks are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7747</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/RedNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>145</td>
        <td>RedNet101</td>
        <td><a href="https://arxiv.org/abs/2103.06255">Involution: Inverting the Inherence of Convolution for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. We additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition, powering different deep learning models on several prevalent benchmarks, including ImageNet classification, COCO detection and segmentation, together with Cityscapes segmentation. Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely while compressing the computational cost to 66%, 65%, 72%, and 57% on the above benchmarks, respectively. Code and pre-trained models for all the tasks are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7894</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/RedNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>146</td>
        <td>RedNet26</td>
        <td><a href="https://arxiv.org/abs/2103.06255">Involution: Inverting the Inherence of Convolution for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. We additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition, powering different deep learning models on several prevalent benchmarks, including ImageNet classification, COCO detection and segmentation, together with Cityscapes segmentation. Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely while compressing the computational cost to 66%, 65%, 72%, and 57% on the above benchmarks, respectively. Code and pre-trained models for all the tasks are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7595</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/RedNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>147</td>
        <td>RedNet50</td>
        <td><a href="https://arxiv.org/abs/2103.06255">Involution: Inverting the Inherence of Convolution for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. We additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition, powering different deep learning models on several prevalent benchmarks, including ImageNet classification, COCO detection and segmentation, together with Cityscapes segmentation. Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely while compressing the computational cost to 66%, 65%, 72%, and 57% on the above benchmarks, respectively. Code and pre-trained models for all the tasks are available at this https URL. </div></details></td>
        <td>ImageNet/Acc 0.7833</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/RedNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>148</td>
        <td>LeViT_128S</td>
        <td><a href="https://arxiv.org/abs/2104.01136">LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference</a></td>
        <td><details><summary>Abstract</summary><div>We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeVIT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at this https URL</div></details></td>
        <td>ImageNet/Acc 0.7598</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/LeViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>149</td>
        <td>LeViT_256</td>
        <td><a href="https://arxiv.org/abs/2104.01136">LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference</a></td>
        <td><details><summary>Abstract</summary><div>We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeVIT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at this https URL</div></details></td>
        <td>ImageNet/Acc 0.8085</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/LeViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>150</td>
        <td>LeViT_192</td>
        <td><a href="https://arxiv.org/abs/2104.01136">LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference</a></td>
        <td><details><summary>Abstract</summary><div>We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeVIT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at this https URL</div></details></td>
        <td>ImageNet/Acc 0.7598</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/LeViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>151</td>
        <td>LeViT_128</td>
        <td><a href="https://arxiv.org/abs/2104.01136">LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference</a></td>
        <td><details><summary>Abstract</summary><div>We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeVIT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at this https URL</div></details></td>
        <td>ImageNet/Acc 0.7598</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/LeViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>152</td>
        <td>LeViT_384</td>
        <td><a href="https://arxiv.org/abs/2104.01136">LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference</a></td>
        <td><details><summary>Abstract</summary><div>We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeVIT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at this https URL</div></details></td>
        <td>ImageNet/Acc 0.8191</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/LeViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>153</td>
        <td>alt_gvt_large</td>
        <td><a href="https://arxiv.org/abs/2104.13840">Twins: Revisiting the Design of Spatial Attention in Vision Transformers</a></td>
        <td><details><summary>Abstract</summary><div>Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly-efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks, including image level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is released at this https URL . </div></details></td>
        <td>ImageNet/Acc 0.8331</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Twins.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>154</td>
        <td>pcpvt_large</td>
        <td><a href="https://arxiv.org/abs/2104.13840">Twins: Revisiting the Design of Spatial Attention in Vision Transformers</a></td>
        <td><details><summary>Abstract</summary><div>Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly-efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks, including image level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is released at this https URL . </div></details></td>
        <td>ImageNet/Acc 0.8273</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Twins.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>155</td>
        <td>alt_gvt_small</td>
        <td><a href="https://arxiv.org/abs/2104.13840">Twins: Revisiting the Design of Spatial Attention in Vision Transformers</a></td>
        <td><details><summary>Abstract</summary><div>Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly-efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks, including image level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is released at this https URL . </div></details></td>
        <td>ImageNet/Acc 0.814</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Twins.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>156</td>
        <td>pcpvt_base</td>
        <td><a href="https://arxiv.org/abs/2104.13840">Twins: Revisiting the Design of Spatial Attention in Vision Transformers</a></td>
        <td><details><summary>Abstract</summary><div>Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly-efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks, including image level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is released at this https URL . </div></details></td>
        <td>ImageNet/Acc 0.8242</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Twins.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>157</td>
        <td>pcpvt_small</td>
        <td><a href="https://arxiv.org/abs/2104.13840">Twins: Revisiting the Design of Spatial Attention in Vision Transformers</a></td>
        <td><details><summary>Abstract</summary><div>Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly-efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks, including image level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is released at this https URL . </div></details></td>
        <td>ImageNet/Acc 0.8082</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Twins.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>158</td>
        <td>alt_gvt_base</td>
        <td><a href="https://arxiv.org/abs/2104.13840">Twins: Revisiting the Design of Spatial Attention in Vision Transformers</a></td>
        <td><details><summary>Abstract</summary><div>Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly-efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks, including image level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is released at this https URL . </div></details></td>
        <td>ImageNet/Acc 0.8294</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/Twins.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>159</td>
        <td>ESNet_x0_5</td>
        <td><a href="https://arxiv.org/abs/2111.00902">PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices</a></td>
        <td><details><summary>Abstract</summary><div>The better accuracy and efficiency trade-off has been a challenging problem in object detection. In this work, we are dedicated to studying key optimizations and neural network architecture choices for object detection to improve accuracy and efficiency. We investigate the applicability of the anchor-free strategy on lightweight object detection models. We enhance the backbone structure and design the lightweight structure of the neck, which improves the feature extraction ability of the network. We improve label assignment strategy and loss function to make training more stable and efficient. Through these optimizations, we create a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on object detection for mobile devices. Our models achieve better trade-offs between accuracy and latency compared to other popular models. PicoDet-S with only 0.99M parameters achieves 30.6% mAP, which is an absolute 4.8% improvement in mAP while reducing mobile CPU inference latency by 55% compared to YOLOX-Nano, and is an absolute 7.1% improvement in mAP compared to NanoDet. It reaches 123 FPS (150 FPS using Paddle Lite) on mobile ARM CPU when the input size is 320. PicoDet-L with only 3.3M parameters achieves 40.9% mAP, which is an absolute 3.7% improvement in mAP and 44% faster than YOLOv5s. As shown in Figure 1, our models far outperform the state-of-the-art results for lightweight object detection. Code and pre-trained models are available at PaddleDetection.</div></details></td>
        <td>ImageNet/Acc 0.6882</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ESNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>160</td>
        <td>ESNet_x0_75</td>
        <td><a href="https://arxiv.org/abs/2111.00902">PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices</a></td>
        <td><details><summary>Abstract</summary><div>The better accuracy and efficiency trade-off has been a challenging problem in object detection. In this work, we are dedicated to studying key optimizations and neural network architecture choices for object detection to improve accuracy and efficiency. We investigate the applicability of the anchor-free strategy on lightweight object detection models. We enhance the backbone structure and design the lightweight structure of the neck, which improves the feature extraction ability of the network. We improve label assignment strategy and loss function to make training more stable and efficient. Through these optimizations, we create a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on object detection for mobile devices. Our models achieve better trade-offs between accuracy and latency compared to other popular models. PicoDet-S with only 0.99M parameters achieves 30.6% mAP, which is an absolute 4.8% improvement in mAP while reducing mobile CPU inference latency by 55% compared to YOLOX-Nano, and is an absolute 7.1% improvement in mAP compared to NanoDet. It reaches 123 FPS (150 FPS using Paddle Lite) on mobile ARM CPU when the input size is 320. PicoDet-L with only 3.3M parameters achieves 40.9% mAP, which is an absolute 3.7% improvement in mAP and 44% faster than YOLOv5s. As shown in Figure 1, our models far outperform the state-of-the-art results for lightweight object detection. Code and pre-trained models are available at PaddleDetection.</div></details></td>
        <td>ImageNet/Acc 0.7224</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ESNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>161</td>
        <td>ESNet_x1_0</td>
        <td><a href="https://arxiv.org/abs/2111.00902">PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices</a></td>
        <td><details><summary>Abstract</summary><div>The better accuracy and efficiency trade-off has been a challenging problem in object detection. In this work, we are dedicated to studying key optimizations and neural network architecture choices for object detection to improve accuracy and efficiency. We investigate the applicability of the anchor-free strategy on lightweight object detection models. We enhance the backbone structure and design the lightweight structure of the neck, which improves the feature extraction ability of the network. We improve label assignment strategy and loss function to make training more stable and efficient. Through these optimizations, we create a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on object detection for mobile devices. Our models achieve better trade-offs between accuracy and latency compared to other popular models. PicoDet-S with only 0.99M parameters achieves 30.6% mAP, which is an absolute 4.8% improvement in mAP while reducing mobile CPU inference latency by 55% compared to YOLOX-Nano, and is an absolute 7.1% improvement in mAP compared to NanoDet. It reaches 123 FPS (150 FPS using Paddle Lite) on mobile ARM CPU when the input size is 320. PicoDet-L with only 3.3M parameters achieves 40.9% mAP, which is an absolute 3.7% improvement in mAP and 44% faster than YOLOv5s. As shown in Figure 1, our models far outperform the state-of-the-art results for lightweight object detection. Code and pre-trained models are available at PaddleDetection.</div></details></td>
        <td>ImageNet/Acc 0.7392</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ESNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>162</td>
        <td>ESNet_x0_25</td>
        <td><a href="https://arxiv.org/abs/2111.00902">PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices</a></td>
        <td><details><summary>Abstract</summary><div>The better accuracy and efficiency trade-off has been a challenging problem in object detection. In this work, we are dedicated to studying key optimizations and neural network architecture choices for object detection to improve accuracy and efficiency. We investigate the applicability of the anchor-free strategy on lightweight object detection models. We enhance the backbone structure and design the lightweight structure of the neck, which improves the feature extraction ability of the network. We improve label assignment strategy and loss function to make training more stable and efficient. Through these optimizations, we create a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on object detection for mobile devices. Our models achieve better trade-offs between accuracy and latency compared to other popular models. PicoDet-S with only 0.99M parameters achieves 30.6% mAP, which is an absolute 4.8% improvement in mAP while reducing mobile CPU inference latency by 55% compared to YOLOX-Nano, and is an absolute 7.1% improvement in mAP compared to NanoDet. It reaches 123 FPS (150 FPS using Paddle Lite) on mobile ARM CPU when the input size is 320. PicoDet-L with only 3.3M parameters achieves 40.9% mAP, which is an absolute 3.7% improvement in mAP and 44% faster than YOLOv5s. As shown in Figure 1, our models far outperform the state-of-the-art results for lightweight object detection. Code and pre-trained models are available at PaddleDetection.</div></details></td>
        <td>ImageNet/Acc 0.6248</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ESNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>163</td>
        <td>HarDNet68_ds</td>
        <td><a href="https://arxiv.org/abs/1909.00948">HarDNet: A Low Memory Traffic Network</a></td>
        <td><details><summary>Abstract</summary><div>State-of-the-art neural network architectures such as ResNet, MobileNet, and DenseNet have achieved outstanding accuracy over low MACs and small model size counterparts. However, these metrics might not be accurate for predicting the inference time. We suggest that memory traffic for accessing intermediate feature maps can be a factor dominating the inference latency, especially in such tasks as real-time object detection and semantic segmentation of high-resolution video. We propose a Harmonic Densely Connected Network to achieve high efficiency in terms of both low MACs and memory traffic. The new network achieves 35%, 36%, 30%, 32%, and 45% inference time reduction compared with FC-DenseNet-103, DenseNet-264, ResNet-50, ResNet-152, and SSD-VGG, respectively. We use tools including Nvidia profiler and ARM Scale-Sim to measure the memory traffic and verify that the inference latency is indeed proportional to the memory traffic consumption and the proposed network consumes low memory traffic. We conclude that one should take memory traffic into consideration when designing neural network architectures for high-resolution applications at the edge. </div></details></td>
        <td>ImageNet/Acc 0.7362</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HarDNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>164</td>
        <td>HarDNet85</td>
        <td><a href="https://arxiv.org/abs/1909.00948">HarDNet: A Low Memory Traffic Network</a></td>
        <td><details><summary>Abstract</summary><div>State-of-the-art neural network architectures such as ResNet, MobileNet, and DenseNet have achieved outstanding accuracy over low MACs and small model size counterparts. However, these metrics might not be accurate for predicting the inference time. We suggest that memory traffic for accessing intermediate feature maps can be a factor dominating the inference latency, especially in such tasks as real-time object detection and semantic segmentation of high-resolution video. We propose a Harmonic Densely Connected Network to achieve high efficiency in terms of both low MACs and memory traffic. The new network achieves 35%, 36%, 30%, 32%, and 45% inference time reduction compared with FC-DenseNet-103, DenseNet-264, ResNet-50, ResNet-152, and SSD-VGG, respectively. We use tools including Nvidia profiler and ARM Scale-Sim to measure the memory traffic and verify that the inference latency is indeed proportional to the memory traffic consumption and the proposed network consumes low memory traffic. We conclude that one should take memory traffic into consideration when designing neural network architectures for high-resolution applications at the edge. </div></details></td>
        <td>ImageNet/Acc 0.7744</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HarDNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>165</td>
        <td>HarDNet68</td>
        <td><a href="https://arxiv.org/abs/1909.00948">HarDNet: A Low Memory Traffic Network</a></td>
        <td><details><summary>Abstract</summary><div>State-of-the-art neural network architectures such as ResNet, MobileNet, and DenseNet have achieved outstanding accuracy over low MACs and small model size counterparts. However, these metrics might not be accurate for predicting the inference time. We suggest that memory traffic for accessing intermediate feature maps can be a factor dominating the inference latency, especially in such tasks as real-time object detection and semantic segmentation of high-resolution video. We propose a Harmonic Densely Connected Network to achieve high efficiency in terms of both low MACs and memory traffic. The new network achieves 35%, 36%, 30%, 32%, and 45% inference time reduction compared with FC-DenseNet-103, DenseNet-264, ResNet-50, ResNet-152, and SSD-VGG, respectively. We use tools including Nvidia profiler and ARM Scale-Sim to measure the memory traffic and verify that the inference latency is indeed proportional to the memory traffic consumption and the proposed network consumes low memory traffic. We conclude that one should take memory traffic into consideration when designing neural network architectures for high-resolution applications at the edge. </div></details></td>
        <td>ImageNet/Acc 0.7546</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HarDNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>166</td>
        <td>HarDNet39_ds</td>
        <td><a href="https://arxiv.org/abs/1909.00948">HarDNet: A Low Memory Traffic Network</a></td>
        <td><details><summary>Abstract</summary><div>State-of-the-art neural network architectures such as ResNet, MobileNet, and DenseNet have achieved outstanding accuracy over low MACs and small model size counterparts. However, these metrics might not be accurate for predicting the inference time. We suggest that memory traffic for accessing intermediate feature maps can be a factor dominating the inference latency, especially in such tasks as real-time object detection and semantic segmentation of high-resolution video. We propose a Harmonic Densely Connected Network to achieve high efficiency in terms of both low MACs and memory traffic. The new network achieves 35%, 36%, 30%, 32%, and 45% inference time reduction compared with FC-DenseNet-103, DenseNet-264, ResNet-50, ResNet-152, and SSD-VGG, respectively. We use tools including Nvidia profiler and ARM Scale-Sim to measure the memory traffic and verify that the inference latency is indeed proportional to the memory traffic consumption and the proposed network consumes low memory traffic. We conclude that one should take memory traffic into consideration when designing neural network architectures for high-resolution applications at the edge. </div></details></td>
        <td>ImageNet/Acc 0.7133</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/HarDNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>167</td>
        <td>ViT_base_patch16_224</td>
        <td><a href="https://arxiv.org/abs/2010.11929">An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</a></td>
        <td><details><summary>Abstract</summary><div>While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.</div></details></td>
        <td>ImageNet/Acc 0.8195</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>168</td>
        <td>ViT_base_patch16_384</td>
        <td><a href="https://arxiv.org/abs/2010.11929">An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</a></td>
        <td><details><summary>Abstract</summary><div>While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.</div></details></td>
        <td>ImageNet/Acc 0.8414</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>169</td>
        <td>ViT_base_patch32_384</td>
        <td><a href="https://arxiv.org/abs/2010.11929">An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</a></td>
        <td><details><summary>Abstract</summary><div>While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.</div></details></td>
        <td>ImageNet/Acc 0.8176</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>170</td>
        <td>ViT_large_patch16_224</td>
        <td><a href="https://arxiv.org/abs/2010.11929">An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</a></td>
        <td><details><summary>Abstract</summary><div>While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.</div></details></td>
        <td>ImageNet/Acc 0.8323</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>171</td>
        <td>ViT_large_patch16_384</td>
        <td><a href="https://arxiv.org/abs/2010.11929">An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</a></td>
        <td><details><summary>Abstract</summary><div>While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.</div></details></td>
        <td>ImageNet/Acc 0.8513</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>172</td>
        <td>ViT_large_patch32_384</td>
        <td><a href="https://arxiv.org/abs/2010.11929">An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</a></td>
        <td><details><summary>Abstract</summary><div>While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.</div></details></td>
        <td>ImageNet/Acc 0.8153</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>173</td>
        <td>ViT_small_patch16_224</td>
        <td><a href="https://arxiv.org/abs/2010.11929">An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</a></td>
        <td><details><summary>Abstract</summary><div>While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.</div></details></td>
        <td>ImageNet/Acc 0.7769</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>174</td>
        <td>DeiT_base_patch16_224</td>
        <td><a href="https://arxiv.org/abs/2012.12877">Training data-efficient image transformers & distillation through attention</a></td>
        <td><details><summary>Abstract</summary><div>Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.</div></details></td>
        <td>ImageNet/Acc 0.817</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DeiT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>175</td>
        <td>DeiT_base_patch16_384</td>
        <td><a href="https://arxiv.org/abs/2012.12877">Training data-efficient image transformers & distillation through attention</a></td>
        <td><details><summary>Abstract</summary><div>Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.</div></details></td>
        <td>ImageNet/Acc 0.83</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DeiT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>176</td>
        <td>DeiT_small_patch16_22</br>4</td>
        <td><a href="https://arxiv.org/abs/2012.12877">Training data-efficient image transformers & distillation through attention</a></td>
        <td><details><summary>Abstract</summary><div>Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.</div></details></td>
        <td>ImageNet/Acc 0.796</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DeiT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>177</td>
        <td>DeiT_tiny_patch16_224</td>
        <td><a href="https://arxiv.org/abs/2012.12877">Training data-efficient image transformers & distillation through attention</a></td>
        <td><details><summary>Abstract</summary><div>Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.</div></details></td>
        <td>ImageNet/Acc 0.718</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/DeiT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>178</td>
        <td>SwinTransformer_base_</br>patch4_window12_384</td>
        <td><a href="https://arxiv.org/abs/2103.14030">Swin Transformer: Hierarchical Vision Transformer using Shifted Windows</a></td>
        <td><details><summary>Abstract</summary><div>This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at~\url{this https URL}.</div></details></td>
        <td>ImageNet/Acc 0.8439</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SwinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>179</td>
        <td>SwinTransformer_base_</br>patch4_window7_224</td>
        <td><a href="https://arxiv.org/abs/2103.14030">Swin Transformer: Hierarchical Vision Transformer using Shifted Windows</a></td>
        <td><details><summary>Abstract</summary><div>This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at~\url{this https URL}.</div></details></td>
        <td>ImageNet/Acc 0.83</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SwinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>180</td>
        <td>SwinTransformer_large</br>_patch4_window12_384</td>
        <td><a href="https://arxiv.org/abs/2103.14030">Swin Transformer: Hierarchical Vision Transformer using Shifted Windows</a></td>
        <td><details><summary>Abstract</summary><div>This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at~\url{this https URL}.</div></details></td>
        <td>ImageNet/Acc 0.8642</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SwinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>181</td>
        <td>SwinTransformer_large</br>_patch4_window7_224</td>
        <td><a href="https://arxiv.org/abs/2103.14030">Swin Transformer: Hierarchical Vision Transformer using Shifted Windows</a></td>
        <td><details><summary>Abstract</summary><div>This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at~\url{this https URL}.</div></details></td>
        <td>ImageNet/Acc 0.8596</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SwinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>182</td>
        <td>SwinTransformer_small</br>_patch4_window7_224</td>
        <td><a href="https://arxiv.org/abs/2103.14030">Swin Transformer: Hierarchical Vision Transformer using Shifted Windows</a></td>
        <td><details><summary>Abstract</summary><div>This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at~\url{this https URL}.</div></details></td>
        <td>ImageNet/Acc 0.8275</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SwinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>183</td>
        <td>SwinTransformer_tiny_</br>patch4_window7_224</td>
        <td><a href="https://arxiv.org/abs/2103.14030">Swin Transformer: Hierarchical Vision Transformer using Shifted Windows</a></td>
        <td><details><summary>Abstract</summary><div>This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with \textbf{S}hifted \textbf{win}dows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at~\url{this https URL}.</div></details></td>
        <td>ImageNet/Acc 0.8069</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/SwinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>184</td>
        <td>CSWinTransformer_base</br>_224</td>
        <td><a href="https://arxiv.org/abs/2107.00652">CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows</a></td>
        <td><details><summary>Abstract</summary><div>We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often limits the field of interactions of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism for computing self-attention in the horizontal and vertical stripes in parallel that form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width for different layers of the Transformer network which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles the local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions, and is thus especially effective and friendly for downstream tasks. Incorporated with these designs and a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra training data or label, 53.9 box AP and 46.4 mask AP on the COCO detection task, and 52.2 mIOU on the ADE20K semantic segmentation task, surpassing previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and +2.0 respectively under the similar FLOPs setting. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU.</div></details></td>
        <td>ImageNet/Acc 0.8281</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/CSWinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>185</td>
        <td>CSWinTransformer_base</br>_384</td>
        <td><a href="https://arxiv.org/abs/2107.00652">CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows</a></td>
        <td><details><summary>Abstract</summary><div>We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often limits the field of interactions of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism for computing self-attention in the horizontal and vertical stripes in parallel that form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width for different layers of the Transformer network which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles the local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions, and is thus especially effective and friendly for downstream tasks. Incorporated with these designs and a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra training data or label, 53.9 box AP and 46.4 mask AP on the COCO detection task, and 52.2 mIOU on the ADE20K semantic segmentation task, surpassing previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and +2.0 respectively under the similar FLOPs setting. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU.</div></details></td>
        <td>ImageNet/Acc 0.8358</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/CSWinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>186</td>
        <td>CSWinTransformer_larg</br>e_224</td>
        <td><a href="https://arxiv.org/abs/2107.00652">CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows</a></td>
        <td><details><summary>Abstract</summary><div>We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often limits the field of interactions of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism for computing self-attention in the horizontal and vertical stripes in parallel that form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width for different layers of the Transformer network which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles the local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions, and is thus especially effective and friendly for downstream tasks. Incorporated with these designs and a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra training data or label, 53.9 box AP and 46.4 mask AP on the COCO detection task, and 52.2 mIOU on the ADE20K semantic segmentation task, surpassing previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and +2.0 respectively under the similar FLOPs setting. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU.</div></details></td>
        <td>ImageNet/Acc 0.842</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/CSWinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>187</td>
        <td>CSWinTransformer_larg</br>e_384</td>
        <td><a href="https://arxiv.org/abs/2107.00652">CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows</a></td>
        <td><details><summary>Abstract</summary><div>We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often limits the field of interactions of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism for computing self-attention in the horizontal and vertical stripes in parallel that form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width for different layers of the Transformer network which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles the local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions, and is thus especially effective and friendly for downstream tasks. Incorporated with these designs and a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra training data or label, 53.9 box AP and 46.4 mask AP on the COCO detection task, and 52.2 mIOU on the ADE20K semantic segmentation task, surpassing previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and +2.0 respectively under the similar FLOPs setting. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU.</div></details></td>
        <td>ImageNet/Acc 0.8643</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/CSWinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>188</td>
        <td>CSWinTransformer_smal</br>l_224</td>
        <td><a href="https://arxiv.org/abs/2107.00652">CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows</a></td>
        <td><details><summary>Abstract</summary><div>We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often limits the field of interactions of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism for computing self-attention in the horizontal and vertical stripes in parallel that form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width for different layers of the Transformer network which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles the local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions, and is thus especially effective and friendly for downstream tasks. Incorporated with these designs and a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra training data or label, 53.9 box AP and 46.4 mask AP on the COCO detection task, and 52.2 mIOU on the ADE20K semantic segmentation task, surpassing previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and +2.0 respectively under the similar FLOPs setting. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU.</div></details></td>
        <td>ImageNet/Acc 0.855</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/CSWinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>189</td>
        <td>CSWinTransformer_tiny</br>_224</td>
        <td><a href="https://arxiv.org/abs/2107.00652">CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows</a></td>
        <td><details><summary>Abstract</summary><div>We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often limits the field of interactions of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism for computing self-attention in the horizontal and vertical stripes in parallel that form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width for different layers of the Transformer network which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles the local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions, and is thus especially effective and friendly for downstream tasks. Incorporated with these designs and a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra training data or label, 53.9 box AP and 46.4 mask AP on the COCO detection task, and 52.2 mIOU on the ADE20K semantic segmentation task, surpassing previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and +2.0 respectively under the similar FLOPs setting. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU.</div></details></td>
        <td>ImageNet/Acc 0.855</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/CSWinTransformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>190</td>
        <td>PVT_V2_B0</td>
        <td><a href="https://arxiv.org/abs/2106.13797">PVTv2: Improved Baselines with Pyramid Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at this https URL.</div></details></td>
        <td>ImageNet/Acc 0.705</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PVTV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>191</td>
        <td>PVT_V2_B1</td>
        <td><a href="https://arxiv.org/abs/2106.13797">PVTv2: Improved Baselines with Pyramid Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at this https URL.</div></details></td>
        <td>ImageNet/Acc 0.787</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PVTV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>192</td>
        <td>PVT_V2_B2</td>
        <td><a href="https://arxiv.org/abs/2106.13797">PVTv2: Improved Baselines with Pyramid Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at this https URL.</div></details></td>
        <td>ImageNet/Acc 0.821</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PVTV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>193</td>
        <td>PVT_V2_B2_Linear</td>
        <td><a href="https://arxiv.org/abs/2106.13797">PVTv2: Improved Baselines with Pyramid Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at this https URL.</div></details></td>
        <td>ImageNet/Acc 0.821</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PVTV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>194</td>
        <td>PVT_V2_B3</td>
        <td><a href="https://arxiv.org/abs/2106.13797">PVTv2: Improved Baselines with Pyramid Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at this https URL.</div></details></td>
        <td>ImageNet/Acc 0.831</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PVTV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>195</td>
        <td>PVT_V2_B4</td>
        <td><a href="https://arxiv.org/abs/2106.13797">PVTv2: Improved Baselines with Pyramid Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at this https URL.</div></details></td>
        <td>ImageNet/Acc 0.836</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PVTV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>196</td>
        <td>PVT_V2_B5</td>
        <td><a href="https://arxiv.org/abs/2106.13797">PVTv2: Improved Baselines with Pyramid Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Transformer recently has presented encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs, including (1) linear complexity attention layer, (2) overlapping patch embedding, and (3) convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linear and achieves significant improvements on fundamental vision tasks such as classification, detection, and segmentation. Notably, the proposed PVT v2 achieves comparable or better performances than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer researches in computer vision. Code is available at this https URL.</div></details></td>
        <td>ImageNet/Acc 0.837</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PVTV2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>197</td>
        <td>MobileViT_XXS</td>
        <td><a href="https://arxiv.org/abs/2110.02178">MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision transformers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeiT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters. Our source code is open-source and available at: https://github.com/apple/ml-cvnets</div></details></td>
        <td>ImageNet/Acc 0.6867</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>198</td>
        <td>MobileViT_XS</td>
        <td><a href="https://arxiv.org/abs/2110.02178">MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision transformers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeiT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters. Our source code is open-source and available at: https://github.com/apple/ml-cvnets</div></details></td>
        <td>ImageNet/Acc 0.7454</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>199</td>
        <td>MobileViT_S</td>
        <td><a href="https://arxiv.org/abs/2110.02178">MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision transformers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeiT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters. Our source code is open-source and available at: https://github.com/apple/ml-cvnets</div></details></td>
        <td>ImageNet/Acc 0.7814</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/MobileViT.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>200</td>
        <td>ReID_strong_baseline</td>
        <td><a href="https://arxiv.org/abs/1906.08332">A Strong Baseline and Batch Normalization Neck for Deep Person Re-identification.</a></td>
        <td><details><summary>Abstract</summary><div>This paper explores a simple and efficient baseline for person re-identification (ReID). Person re-identification (ReID) with deep neural networks has made progress and achieved high performance in recent years. However, many state-of-the-arts methods design complex network structure and concatenate multi-branch features. In the literature, some effective training tricks are briefly appeared in several papers or source codes. This paper will collect and evaluate these effective training tricks in person ReID. By combining these tricks together, the model achieves 94.5% rank-1 and 85.9% mAP on Market1501 with only using global features. Our codes and models are available at https://github.com/michuanhaohao/reid-strong-baseline</div></details></td>
        <td>Market1501/recall@1 0.8845</br></td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/algorithm_introduction/ReID.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>201</td>
        <td>ReID_softmax_triplet</td>
        <td><a href="https://arxiv.org/abs/1906.08332">A Strong Baseline and Batch Normalization Neck for Deep Person Re-identification.</a></td>
        <td><details><summary>Abstract</summary><div>This paper explores a simple and efficient baseline for person re-identification (ReID). Person re-identification (ReID) with deep neural networks has made progress and achieved high performance in recent years. However, many state-of-the-arts methods design complex network structure and concatenate multi-branch features. In the literature, some effective training tricks are briefly appeared in several papers or source codes. This paper will collect and evaluate these effective training tricks in person ReID. By combining these tricks together, the model achieves 94.5% rank-1 and 85.9% mAP on Market1501 with only using global features. Our codes and models are available at https://github.com/michuanhaohao/reid-strong-baseline</div></details></td>
        <td>Market1501/Recall@1 0.9429</br></td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/algorithm_introduction/ReID.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>202</td>
        <td>ReID_softmax_triplet_with_</br>center</td>
        <td><a href="https://arxiv.org/abs/1906.08332">A Strong Baseline and Batch Normalization Neck for Deep Person Re-identification.</a></td>
        <td><details><summary>Abstract</summary><div>This paper explores a simple and efficient baseline for person re-identification (ReID). Person re-identification (ReID) with deep neural networks has made progress and achieved high performance in recent years. However, many state-of-the-arts methods design complex network structure and concatenate multi-branch features. In the literature, some effective training tricks are briefly appeared in several papers or source codes. This paper will collect and evaluate these effective training tricks in person ReID. By combining these tricks together, the model achieves 94.5% rank-1 and 85.9% mAP on Market1501 with only using global features. Our codes and models are available at https://github.com/michuanhaohao/reid-strong-baseline</div></details></td>
        <td>market1501/Recall@1 0.945</br></td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/algorithm_introduction/ReID.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>203</td>
        <td>VAN_B0</td>
        <td><a href="https://arxiv.org/abs/2202.09741">Visual Attention Network</a></td>
        <td><details><summary>Abstract</summary><div>While originally designed for natural language processing tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel linear attention named large kernel attention (LKA) to enable self-adaptive and long-range correlations in self-attention while avoiding its shortcomings. Furthermore, we present a neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple, VAN surpasses similar size vision transformers(ViTs) and convolutional neural networks(CNNs) in various tasks, including image classification, object detection, semantic segmentation, panoptic segmentation, pose estimation, etc. For example, VAN-B6 achieves 87.8% accuracy on ImageNet benchmark and set new state-of-the-art performance (58.2 PQ) for panoptic segmentation. Besides, VAN-B2 surpasses Swin-T 4% mIoU (50.1 vs. 46.1) for semantic segmentation on ADE20K benchmark, 2.6% AP (48.8 vs. 46.2) for object detection on COCO dataset. It provides a novel method and a simple yet strong baseline for the community. Code is available at https://github.com/Visual-Attention-Network.</div></details></td>
        <td>ImageNet/Acc 0.7535</br></td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/VAN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>204</td>
        <td>PeleeNet</td>
        <td><a href="https://arxiv.org/abs/1804.06882">Pelee: A Real-Time Object Detection System on MobileDevices</a></td>
        <td><details><summary>Abstract</summary><div>An increasing need of running Convolutional Neural Network (CNN) models on mobile devices with limited computing power and memory resource encourages studies on efficient model design. A number of efficient architectures have been proposed in recent years, for example, MobileNet, ShuffleNet, and MobileNetV2. However, all these models are heavily dependent on depthwise separable convolution which lacks efficient implementation in most deep learning frameworks. In this study, we propose an efficient architecture named PeleeNet, which is built with conventional convolution instead. On ImageNet ILSVRC 2012 dataset, our proposed PeleeNet achieves a higher accuracy and over 1.8 times faster speed than MobileNet and MobileNetV2 on NVIDIA TX2. Meanwhile, PeleeNet is only 66% of the model size of MobileNet. We then propose a real-time object detection system by combining PeleeNet with Single Shot MultiBox Detector (SSD) method and optimizing the architecture for fast speed. Our proposed detection system, named Pelee, achieves 76.4% mAP (mean average precision) on PASCAL VOC2007 and 22.4 mAP on MS COCO dataset at the speed of 23.6 FPS on iPhone 8 and 125 FPS on NVIDIA TX2. The result on COCO outperforms YOLOv2 in consideration of a higher precision, 13.6 times lower computational cost and 11.3 times smaller model size.</div></details></td>
        <td>ImageNet/Acc 0.7153</br></td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/PeleeNet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>205</td>
        <td>ConvNeXt_tiny </td>
        <td><a href="https://arxiv.org/abs/2201.03545">A ConvNet for the 2020s</a></td>
        <td><details><summary>Abstract</summary><div>The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.</div></details></td>
        <td>ImageNet/Acc 0.8203</br></td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/ImageNet1k/ConvNeXt.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>206</td>
        <td>PULC_car_exists</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>自建数据集/Tpr@Fpr0.01 0.9592</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_car_exists.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>207</td>
        <td>PULC_language_classification</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>Multi-lingual scene text detection and recognition/Acc 0.9926</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_language_classification.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>208</td>
        <td>PULC_person_attribute</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>pa100k/mA 0.7859</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_person_attribute.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>209</td>
        <td>PULC_person_exists</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>自建数据集/Tpr@Fpr 0.9623</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_person_exists.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>210</td>
        <td>PULC_safety_helmet</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>Safety-Helmet-Wearing-Dataset、hard-hat-detection、Large-scale CelebFaces Attributes (CelebA) Dataset/Tpr@Fpr 0.9938</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_safety_helmet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>211</td>
        <td>PULC_text_image_orientation</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ICDAR2019-ArT、XFUND、ICDAR2015/Acc 0.9906</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_text_image_orientation.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>212</td>
        <td>PULC_textline_orientation</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>ICDAR2019-LSVT/Acc 0.9601</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_textline_orientation.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>213</td>
        <td>PULC_traffic_sign</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>Tsinghua-Tencent 100K/Acc 0.9835</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_traffic_sign.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>214</td>
        <td>PULC_vehicle_attribute</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas.</div></details></td>
        <td>VeRi/mA 0.9081</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_vehicle_attribute.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>215</td>
        <td>PP-ShiTuV1_mainbody_detection</td>
        <td><a href="https://arxiv.org/abs/2111.00775">PP-ShiTu: A Practical Lightweight Image Recognition System</a></td>
        <td><details><summary>Abstract</summary><div>In recent years, image recognition applications have developed rapidly. A large number of studies and techniques have emerged in different fields, such as face recognition, pedestrian and vehicle re-identification, landmark retrieval, and product recognition. In this paper, we propose a practical lightweight image recognition system, named PP-ShiTu, consisting of the following 3 modules, mainbody detection, feature extraction and vector search. We introduce popular strategies including metric learning, deep hash, knowledge distillation and model quantization to improve accuracy and inference speed. With strategies above, PP-ShiTu works well in different scenarios with a set of models trained on a mixed dataset. Experiments on different datasets and benchmarks show that the system is widely effective in different domains of image recognition. All the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleClas on PaddlePaddle. </div></details></td>
        <td>自建数据集/mAP 0.401</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/image_recognition_pipeline/mainbody_detection.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>216</td>
        <td>PP-ShiTuV1_feature_extraction</td>
        <td><a href="https://arxiv.org/abs/2111.00775">PP-ShiTu: A Practical Lightweight Image Recognition System</a></td>
        <td><details><summary>Abstract</summary><div>In recent years, image recognition applications have developed rapidly. A large number of studies and techniques have emerged in different fields, such as face recognition, pedestrian and vehicle re-identification, landmark retrieval, and product recognition. In this paper, we propose a practical lightweight image recognition system, named PP-ShiTu, consisting of the following 3 modules, mainbody detection, feature extraction and vector search. We introduce popular strategies including metric learning, deep hash, knowledge distillation and model quantization to improve accuracy and inference speed. With strategies above, PP-ShiTu works well in different scenarios with a set of models trained on a mixed dataset. Experiments on different datasets and benchmarks show that the system is widely effective in different domains of image recognition. All the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleClas on PaddlePaddle. </div></details></td>
        <td>Aliproduct/Recall@1 0.839</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/image_recognition_pipeline/feature_extraction.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>217</td>
        <td>PP-ShiTuV2_mainbody_detection</td>
        <td><a href="https://arxiv.org/abs/2111.00902">PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices</a></td>
        <td><details><summary>Abstract</summary><div>The better accuracy and efficiency trade-off has been a challenging problem in object detection. In this work, we are dedicated to studying key optimizations and neural network architecture choices for object detection to improve accuracy and efficiency. We investigate the applicability of the anchor-free strategy on lightweight object detection models. We enhance the backbone structure and design the lightweight structure of the neck, which improves the feature extraction ability of the network. We improve label assignment strategy and loss function to make training more stable and efficient. Through these optimizations, we create a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on object detection for mobile devices. Our models achieve better trade-offs between accuracy and latency compared to other popular models. PicoDet-S with only 0.99M parameters achieves 30.6% mAP, which is an absolute 4.8% improvement in mAP while reducing mobile CPU inference latency by 55% compared to YOLOX-Nano, and is an absolute 7.1% improvement in mAP compared to NanoDet. It reaches 123 FPS (150 FPS using Paddle Lite) on mobile ARM CPU when the input size is 320. PicoDet-L with only 3.3M parameters achieves 40.9% mAP, which is an absolute 3.7% improvement in mAP and 44% faster than YOLOv5s. As shown in Figure 1, our models far outperform the state-of-the-art results for lightweight object detection. Code and pre-trained models are available at this https URL. </div></details></td>
        <td>自建数据集/mAP 0.415</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/training/PP-ShiTu/mainbody_detection.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>218</td>
        <td>PP-ShiTuV2_feature_extraction</td>
        <td><a href="https://arxiv.org/abs/2109.15099">PP-LCNet: A Lightweight CPU Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We propose a lightweight CPU network based on the MKLDNN acceleration strategy, named PP-LCNet, which improves the performance of lightweight models on multiple tasks. This paper lists technologies which can improve network accuracy while the latency is almost constant. With these improvements, the accuracy of PP-LCNet can greatly surpass the previous network structure with the same inference time for classification. As shown in Figure 1, it outperforms the most state-of-the-art models. And for downstream tasks of computer vision, it also performs very well, such as object detection, semantic segmentation, etc. All our experiments are implemented based on PaddlePaddle. Code and pretrained models are available at PaddleClas. </div></details></td>
        <td>Aliproduct/Recall@1 0.842</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/training/PP-ShiTu/feature_extraction.md">快速开始</a></td>
        </td>
    </tr>
</table>
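
上表中各模型的训练、评估与部署流程见对应行的“快速开始”链接。下面给出一个基于 paddleclas whl 包的最小推理示意(仅为假设性示例,非上述文档原文),其中 `model_name="PPLCNet_x0_25"` 与图片路径 `demo.jpg` 均为示意取值,实际接口与参数请以所用 PaddleClas 版本的文档为准:

```python
# 安装(假设环境中已装好 paddlepaddle):pip install paddleclas
from paddleclas import PaddleClas

# 加载上表序号 1 对应的 PPLCNet_x0_25 ImageNet 预训练分类模型
clas = PaddleClas(model_name="PPLCNet_x0_25")

# 对本地图片进行预测;"demo.jpg" 为假设的示例图片路径
results = clas.predict(input_data="demo.jpg")

# predict 返回生成器,逐批产出 top-k 类别 id、得分与标签名
print(next(results))
```

将 `model_name` 替换为表中其他分类模型名(以 whl 包实际支持的模型列表为准)即可切换网络结构;完整的训练与部署说明请参考各行“快速开始”文档。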

### PaddleDetection
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>ppyolo_tiny_650e_coco</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-YOLO: An Effective and Efficient Implementation of Object Detector</a></td>
        <td><details><summary>Abstract</summary><div>Object detection is one of the most important areas in computer vision, which plays a key role in various practical scenarios. Due to limitation of hardware, it is often necessary to sacrifice accuracy to ensure the infer speed of the detector in practice. Therefore, the balance between effectiveness and efficiency of object detector must be considered. The goal of this paper is to implement an object detector with relatively balanced effectiveness and efficiency that can be directly applied in actual application scenarios, rather than propose a novel detection model. Considering that YOLOv3 has been widely used in practice, we develop a new object detector based on YOLOv3. We mainly try to combine various existing tricks that almost not increase the number of model parameters and FLOPs, to achieve the goal of improving the accuracy of detector as much as possible while ensuring that the speed is almost unchanged. Since all experiments in this paper are conducted based on PaddlePaddle, we call it PP-YOLO. By combining multiple tricks, PP-YOLO can achieve a better balance between effectiveness (45.2% mAP) and efficiency (72.9 FPS), surpassing the existing state-of-the-art detectors such as EfficientDet and YOLOv4. Source code is at this https URL.</div></details></td>
        <td>COCO/mAP 20.6</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>picodet_s_320_coco</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices</a></td>
        <td><details><summary>Abstract</summary><div>The better accuracy and efficiency trade-off has been a challenging problem in object detection. In this work, we are dedicated to studying key optimizations and neural network architecture choices for object detection to improve accuracy and efficiency. We investigate the applicability of the anchor-free strategy on lightweight object detection models. We enhance the backbone structure and design the lightweight structure of the neck, which improves the feature extraction ability of the network. We improve label assignment strategy and loss function to make training more stable and efficient. Through these optimizations, we create a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on object detection for mobile devices. Our models achieve better trade-offs between accuracy and latency compared to other popular models. PicoDet-S with only 0.99M parameters achieves 30.6% mAP, which is an absolute 4.8% improvement in mAP while reducing mobile CPU inference latency by 55% compared to YOLOX-Nano, and is an absolute 7.1% improvement in mAP compared to NanoDet. It reaches 123 FPS (150 FPS using Paddle Lite) on mobile ARM CPU when the input size is 320. PicoDet-L with only 3.3M parameters achieves 40.9% mAP, which is an absolute 3.7% improvement in mAP and 44% faster than YOLOv5s. As shown in Figure 1, our models far outperform the state-of-the-art results for lightweight object detection. Code and pre-trained models are available at this https URL.</div></details></td>
        <td>COCO/mAP 27.1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>picodet_s_320_coco_lc</br>net</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices</a></td>
        <td><details><summary>Abstract</summary><div>The better accuracy and efficiency trade-off has been a challenging problem in object detection. In this work, we are dedicated to studying key optimizations and neural network architecture choices for object detection to improve accuracy and efficiency. We investigate the applicability of the anchor-free strategy on lightweight object detection models. We enhance the backbone structure and design the lightweight structure of the neck, which improves the feature extraction ability of the network. We improve label assignment strategy and loss function to make training more stable and efficient. Through these optimizations, we create a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on object detection for mobile devices. Our models achieve better trade-offs between accuracy and latency compared to other popular models. PicoDet-S with only 0.99M parameters achieves 30.6% mAP, which is an absolute 4.8% improvement in mAP while reducing mobile CPU inference latency by 55% compared to YOLOX-Nano, and is an absolute 7.1% improvement in mAP compared to NanoDet. It reaches 123 FPS (150 FPS using Paddle Lite) on mobile ARM CPU when the input size is 320. PicoDet-L with only 3.3M parameters achieves 40.9% mAP, which is an absolute 3.7% improvement in mAP and 44% faster than YOLOv5s. As shown in Figure 1, our models far outperform the state-of-the-art results for lightweight object detection. Code and pre-trained models are available at this https URL.</div></details></td>
        <td>COCO/mAP 40.9</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>picodet_lcnet_1_5x_41</br>6_coco</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices</a></td>
        <td><details><summary>Abstract</summary><div>The better accuracy and efficiency trade-off has been a challenging problem in object detection. In this work, we are dedicated to studying key optimizations and neural network architecture choices for object detection to improve accuracy and efficiency. We investigate the applicability of the anchor-free strategy on lightweight object detection models. We enhance the backbone structure and design the lightweight structure of the neck, which improves the feature extraction ability of the network. We improve label assignment strategy and loss function to make training more stable and efficient. Through these optimizations, we create a new family of real-time object detectors, named PP-PicoDet, which achieves superior performance on object detection for mobile devices. Our models achieve better trade-offs between accuracy and latency compared to other popular models. PicoDet-S with only 0.99M parameters achieves 30.6% mAP, which is an absolute 4.8% improvement in mAP while reducing mobile CPU inference latency by 55% compared to YOLOX-Nano, and is an absolute 7.1% improvement in mAP compared to NanoDet. It reaches 123 FPS (150 FPS using Paddle Lite) on mobile ARM CPU when the input size is 320. PicoDet-L with only 3.3M parameters achieves 40.9% mAP, which is an absolute 3.7% improvement in mAP and 44% faster than YOLOv5s. As shown in Figure 1, our models far outperform the state-of-the-art results for lightweight object detection. Code and pre-trained models are available at this https URL.</div></details></td>
        <td>COCO/mAP 36.3</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>ppyoloe_crn_s_300e_co</br>co</td>
        <td><a href="https://arxiv.org/abs/2203.16250">PP-YOLOE: An evolved version of YOLO</a></td>
        <td><details><summary>Abstract</summary><div>In this report, we present PP-YOLOE, an industrial state-of-the-art object detector with high performance and friendly deployment. We optimize on the basis of the previous PP-YOLOv2, using anchor-free paradigm, more powerful backbone and neck equipped with CSPRepResStage, ET-head and dynamic label assignment algorithm TAL. We provide s/m/l/x models for different practice scenarios. As a result, PP-YOLOE-l achieves 51.4 mAP on COCO test-dev and 78.1 FPS on Tesla V100, yielding a remarkable improvement of (+1.9 AP, +13.35% speed up) and (+1.3 AP, +24.96% speed up), compared to the previous state-of-the-art industrial models PP-YOLOv2 and YOLOX respectively. Further, PP-YOLOE inference speed achieves 149.2 FPS with TensorRT and FP16-precision. We also conduct extensive experiments to verify the effectiveness of our designs. Source code and pre-trained models are available at this https URL.</div></details></td>
        <td>COCO/mAP 48.9</td>
        <td><a href="无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>ssdlite_mobilenet_v1_</br>300_coco</td>
        <td><a href="https://github.com/weiliu89/caffe/tree/ssd">SSD: Single Shot MultiBox Detector</a></td>
        <td><details><summary>Abstract</summary><div>We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Our SSD model is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stage and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets confirm that SSD has comparable accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. Compared to other single stage methods, SSD has much better accuracy, even with a smaller input image size. For 300×300 input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on a Nvidia Titan X and for 500×500 input, SSD achieves 75.1% mAP, outperforming a comparable state of the art Faster R-CNN model. Code is available at this https URL.</div></details></td>
        <td>暂无</td>
        <td><a href="无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>faster_rcnn_r50_fpn_1</br>x_coco</td>
        <td><a href="无">Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</a></td>
        <td><details><summary>Abstract</summary><div>State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.</div></details></td>
        <td>COCO/mAP 38.4</td>
        <td><a href="无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>faster_rcnn_swin_tiny</br>_fpn_1x_coco</td>
        <td><a href="无">Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</a></td>
        <td><details><summary>Abstract</summary><div>State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.</div></details></td>
        <td>COCO/mAP 42.6</td>
        <td><a href="无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>faster_rcnn_r50_1x_co</br>co</td>
        <td><a href="无">Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</a></td>
        <td><details><summary>Abstract</summary><div>State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.</div></details></td>
        <td>COCO/mAP 36.7</td>
        <td><a href="无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>fcos_r50_fpn_1x_coco</td>
        <td><a href="https://github.com/tianzhi0549/FCOS/">FCOS: Fully Convolutional One-Stage Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogue to semantic segmentation. Almost all state-of-the-art object detectors such as RetinaNet, SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor box free, as well as proposal free. By eliminating the predefined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes such as calculating overlapping during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With the only post-processing non-maximum suppression (NMS), FCOS with ResNeXt-64x4d-101 achieves 44.7% in AP with single-model and single-scale testing, surpassing previous one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: this https URL</div></details></td>
        <td>COCO/mAP 39.6</td>
        <td><a href="无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>yolov3_mobilenet_v1_2</br>70e_coco</td>
        <td><a href="https://pjreddie.com/darknet/yolo/">YOLOv3: An Incremental Improvement</a></td>
        <td><details><summary>Abstract</summary><div>We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL</div></details></td>
        <td>COCO/mAP 29.4</td>
        <td><a href="无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>ttfnet_darknet53_1x_c</br>oco</td>
        <td><a href="https://github.com/ZJULearning/ttfnet">Training-Time-Friendly Network for Real-Time Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>Modern object detectors can rarely achieve short training time, fast inference speed, and high accuracy at the same time. To strike a balance among them, we propose the Training-Time-Friendly Network (TTFNet). In this work, we start with light-head, single-stage, and anchor-free designs, which enable fast inference speed. Then, we focus on shortening training time. We notice that encoding more training samples from annotated boxes plays a similar role as increasing batch size, which helps enlarge the learning rate and accelerate the training process. To this end, we introduce a novel approach using Gaussian kernels to encode training samples. Besides, we design the initiative sample weights for better information utilization. Experiments on MS COCO show that our TTFNet has great advantages in balancing training time, inference speed, and accuracy. It has reduced training time by more than seven times compared to previous real-time detectors while maintaining state-of-the-art performances. In addition, our super-fast version of TTFNet-18 and TTFNet-53 can outperform SSD300 and YOLOv3 by less than one-tenth of their training time, respectively. The code has been made available at \url{this https URL}.</div></details></td>
        <td>COCO/mAP 33.5</td>
        <td><a href="无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>cascade_rcnn_r50_fpn_</br>1x_coco</td>
        <td><a href="https://github.com/zhaoweicai/cascade-rcnn">Cascade R-CNN: Delving into High Quality Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code will be made available at this https URL.</div></details></td>
        <td>COCO/mAP 41.1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>cascade_mask_rcnn_r50</br>_fpn_1x_coco</td>
        <td><a href="https://github.com/zhaoweicai/Detectron-Cascade-RCNN">Cascade R-CNN: High Quality Object Detection and Instance Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In object detection, the intersection over union (IoU) threshold is frequently used to define positives/negatives. The threshold used to train a detector defines its \textit{quality}. While the commonly used threshold of 0.5 leads to noisy (low-quality) detections, detection performance frequently degrades for larger thresholds. This paradox of high-quality detection has two causes: 1) overfitting, due to vanishing positive samples for large thresholds, and 2) inference-time quality mismatch between detector and test hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, composed of a sequence of detectors trained with increasing IoU thresholds, is proposed to address these problems. The detectors are trained sequentially, using the output of a detector as training set for the next. This resampling progressively improves hypotheses quality, guaranteeing a positive training set of equivalent size for all detectors and minimizing overfitting. The same cascade is applied at inference, to eliminate quality mismatches between hypotheses and detectors. An implementation of the Cascade R-CNN without bells or whistles achieves state-of-the-art performance on the COCO dataset, and significantly improves high-quality detection on generic and specific object detection datasets, including VOC, KITTI, CityPerson, and WiderFace. Finally, the Cascade R-CNN is generalized to instance segmentation, with nontrivial improvements over the Mask R-CNN. To facilitate future research, two implementations are made available at \url{this https URL} (Caffe) and \url{this https URL} (Detectron).</div></details></td>
        <td>COCO/mAP 44.9</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>blazeface_1000e</td>
        <td><a href="无">BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs</a></td>
        <td><details><summary>Abstract</summary><div>We present BlazeFace, a lightweight and well-performing face detector tailored for mobile GPU inference. It runs at a speed of 200-1000+ FPS on flagship devices. This super-realtime performance enables it to be applied to any augmented reality pipeline that requires an accurate facial region of interest as an input for task-specific models, such as 2D/3D facial keypoint or geometry estimation, facial features or expression classification, and face region segmentation. Our contributions include a lightweight feature extraction network inspired by, but distinct from MobileNetV1/V2, a GPU-friendly anchor scheme modified from Single Shot MultiBox Detector (SSD), and an improved tie resolution strategy alternative to non-maximum suppression.</div></details></td>
        <td>wider face/0.885 / 0.</br>855 / 0.731</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>s2anet_conv_2x_spine</td>
        <td><a href="https://github.com/csuhan/s2anet">Align Deep Features for Oriented Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>The past decade has witnessed significant progress on detecting objects in aerial images that are often distributed with large scale variations and arbitrary orientations. However most of existing methods rely on heuristically defined anchors with different scales, angles and aspect ratios and usually suffer from severe misalignment between anchor boxes and axis-aligned convolutional features, which leads to the common inconsistency between the classification score and localization accuracy. To address this issue, we propose a Single-shot Alignment Network (S2A-Net) consisting of two modules: a Feature Alignment Module (FAM) and an Oriented Detection Module (ODM). The FAM can generate high-quality anchors with an Anchor Refinement Network and adaptively align the convolutional features according to the anchor boxes with a novel Alignment Convolution. The ODM first adopts active rotating filters to encode the orientation information and then produces orientation-sensitive and orientation-invariant features to alleviate the inconsistency between classification score and localization accuracy. Besides, we further explore the approach to detect objects in large-size images, which leads to a better trade-off between speed and accuracy. Extensive experiments demonstrate that our method can achieve state-of-the-art performance on two commonly used aerial objects datasets (i.e., DOTA and HRSC2016) while keeping high efficiency. The code is available at this https URL.</div></details></td>
        <td>DOTA/mAP 71.42</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>s2anet_alignconv_2x_s</br>pine</td>
        <td><a href="https://github.com/csuhan/s2anet">Align Deep Features for Oriented Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>The past decade has witnessed significant progress on detecting objects in aerial images that are often distributed with large scale variations and arbitrary orientations. However most of existing methods rely on heuristically defined anchors with different scales, angles and aspect ratios and usually suffer from severe misalignment between anchor boxes and axis-aligned convolutional features, which leads to the common inconsistency between the classification score and localization accuracy. To address this issue, we propose a Single-shot Alignment Network (S2A-Net) consisting of two modules: a Feature Alignment Module (FAM) and an Oriented Detection Module (ODM). The FAM can generate high-quality anchors with an Anchor Refinement Network and adaptively align the convolutional features according to the anchor boxes with a novel Alignment Convolution. The ODM first adopts active rotating filters to encode the orientation information and then produces orientation-sensitive and orientation-invariant features to alleviate the inconsistency between classification score and localization accuracy. Besides, we further explore the approach to detect objects in large-size images, which leads to a better trade-off between speed and accuracy. Extensive experiments demonstrate that our method can achieve state-of-the-art performance on two commonly used aerial objects datasets (i.e., DOTA and HRSC2016) while keeping high efficiency. The code is available at this https URL.</div></details></td>
        <td>COCO/mAP 74</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>s2anet_1x_spine</td>
        <td><a href="https://github.com/csuhan/s2anet">Align Deep Features for Oriented Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>The past decade has witnessed significant progress on detecting objects in aerial images that are often distributed with large scale variations and arbitrary orientations. However most of existing methods rely on heuristically defined anchors with different scales, angles and aspect ratios and usually suffer from severe misalignment between anchor boxes and axis-aligned convolutional features, which leads to the common inconsistency between the classification score and localization accuracy. To address this issue, we propose a Single-shot Alignment Network (S2A-Net) consisting of two modules: a Feature Alignment Module (FAM) and an Oriented Detection Module (ODM). The FAM can generate high-quality anchors with an Anchor Refinement Network and adaptively align the convolutional features according to the anchor boxes with a novel Alignment Convolution. The ODM first adopts active rotating filters to encode the orientation information and then produces orientation-sensitive and orientation-invariant features to alleviate the inconsistency between classification score and localization accuracy. Besides, we further explore the approach to detect objects in large-size images, which leads to a better trade-off between speed and accuracy. Extensive experiments demonstrate that our method can achieve state-of-the-art performance on two commonly used aerial objects datasets (i.e., DOTA and HRSC2016) while keeping high efficiency. The code is available at this https URL.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>solov2_r50_fpn_1x_coc</br>o</td>
        <td><a href="https://git.io/AdelaiDet">SOLOv2: Dynamic, Faster and Stronger</a></td>
        <td><details><summary>Abstract</summary><div>In this work, we aim at building a simple, direct, and fast instance segmentation framework with strong performance. We follow the principle of the SOLO method of Wang et al. "SOLO: segmenting objects by locations". Importantly, we take one step further by dynamically learning the mask head of the object segmenter such that the mask head is conditioned on the location. Specifically, the mask branch is decoupled into a mask kernel branch and mask feature branch, which are responsible for learning the convolution kernel and the convolved features respectively. Moreover, we propose Matrix NMS (non maximum suppression) to significantly reduce the inference time overhead due to NMS of masks. Our Matrix NMS performs NMS with parallel matrix operations in one shot, and yields better results. We demonstrate a simple direct instance segmentation system, outperforming a few state-of-the-art methods in both speed and accuracy. A light-weight version of SOLOv2 executes at 31.3 FPS and yields 37.1% AP. Moreover, our state-of-the-art results in object detection (from our mask byproduct) and panoptic segmentation show the potential to serve as a new strong baseline for many instance-level recognition tasks besides instance segmentation. Code is available at: this https URL</div></details></td>
        <td>COCO/mAP 34.8</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>solov2_r50_enhance_co</br>co</td>
        <td><a href="https://git.io/AdelaiDet">SOLOv2: Dynamic, Faster and Stronger</a></td>
        <td><details><summary>Abstract</summary><div>In this work, we aim at building a simple, direct, and fast instance segmentation framework with strong performance. We follow the principle of the SOLO method of Wang et al. "SOLO: segmenting objects by locations". Importantly, we take one step further by dynamically learning the mask head of the object segmenter such that the mask head is conditioned on the location. Specifically, the mask branch is decoupled into a mask kernel branch and mask feature branch, which are responsible for learning the convolution kernel and the convolved features respectively. Moreover, we propose Matrix NMS (non maximum suppression) to significantly reduce the inference time overhead due to NMS of masks. Our Matrix NMS performs NMS with parallel matrix operations in one shot, and yields better results. We demonstrate a simple direct instance segmentation system, outperforming a few state-of-the-art methods in both speed and accuracy. A light-weight version of SOLOv2 executes at 31.3 FPS and yields 37.1% AP. Moreover, our state-of-the-art results in object detection (from our mask byproduct) and panoptic segmentation show the potential to serve as a new strong baseline for many instance-level recognition tasks besides instance segmentation. Code is available at: this https URL</div></details></td>
        <td>COCO/mAP 39</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>21</td>
        <td>mask_rcnn_r50_fpn_1x_</br>coco</td>
        <td><a href="https://github.com/facebookresearch/Detectron">Mask R-CNN</a></td>
        <td><details><summary>Abstract</summary><div>We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: this https URL</div></details></td>
        <td>COCO/mAP 39.2</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>22</td>
        <td>mask_rcnn_r50_1x_coco</td>
        <td><a href="https://github.com/facebookresearch/Detectron">Mask R-CNN</a></td>
        <td><details><summary>Abstract</summary><div>We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: this https URL</div></details></td>
        <td>COCO/mAP 37.4</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>23</td>
        <td>hrnet_w32_256x192</td>
        <td><a href="https://github.com/leoxiaobin/deep-high-resolution-net.pytorch">Deep High-Resolution Representation Learning for Human Pose Estimation</a></td>
        <td><details><summary>Abstract</summary><div>This is an official pytorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. In this work, we are interested in the human pose estimation problem with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from low-resolution representations produced by a high-to-low resolution network. Instead, our proposed network maintains high-resolution representations through the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the mutli-resolution subnetworks in parallel. We conduct repeated multi-scale fusions such that each of the high-to-low resolution representations receives information from other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through the superior pose estimation results over two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset. The code and models have been publicly available at \url{this https URL}.</div></details></td>
        <td>COCO/mAP 76.9</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>24</td>
        <td>dark_hrnet_w32_256x19</br>2</td>
        <td><a href="https://github.com/leoxiaobin/deep-high-resolution-net.pytorch">Deep High-Resolution Representation Learning for Human Pose Estimation</a></td>
        <td><details><summary>Abstract</summary><div>This is an official pytorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. In this work, we are interested in the human pose estimation problem with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from low-resolution representations produced by a high-to-low resolution network. Instead, our proposed network maintains high-resolution representations through the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the mutli-resolution subnetworks in parallel. We conduct repeated multi-scale fusions such that each of the high-to-low resolution representations receives information from other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through the superior pose estimation results over two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset. The code and models have been publicly available at \url{this https URL}.</div></details></td>
        <td>COCO/mAP 78</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>25</td>
        <td>higherhrnet_hrnet_w32</br>_512</td>
        <td><a href="https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation">HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation</a></td>
        <td><details><summary>Abstract</summary><div>Bottom-up human pose estimation methods have difficulties in predicting the correct pose for small persons due to challenges in scale variation. In this paper, we present HigherHRNet: a novel bottom-up human pose estimation method for learning scale-aware representations using high-resolution feature pyramids. Equipped with multi-resolution supervision for training and multi-resolution aggregation for inference, the proposed approach is able to solve the scale variation challenge in bottom-up multi-person pose estimation and localize keypoints more precisely, especially for small person. The feature pyramid in HigherHRNet consists of feature map outputs from HRNet and upsampled higher-resolution outputs through a transposed convolution. HigherHRNet outperforms the previous best bottom-up method by 2.5% AP for medium person on COCO test-dev, showing its effectiveness in handling scale variation. Furthermore, HigherHRNet achieves new state-of-the-art result on COCO test-dev (70.5% AP) without using refinement or other post-processing techniques, surpassing all existing bottom-up methods. HigherHRNet even surpasses all top-down methods on CrowdPose test (67.6% AP), suggesting its robustness in crowded scene. The code and models are available at this https URL.</div></details></td>
        <td>COCO/mAP 67.1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>26</td>
        <td>fairmot_dla34_30e_108</br>8x608</td>
        <td><a href="https://github.com/ifzhang/FairMOT">FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking</a></td>
        <td><details><summary>Abstract</summary><div>Multi-object tracking (MOT) is an important problem in computer vision which has a wide range of applications. Formulating MOT as multi-task learning of object detection and re-ID in a single network is appealing since it allows joint optimization of the two tasks and enjoys high computation efficiency. However, we find that the two tasks tend to compete with each other which need to be carefully addressed. In particular, previous works usually treat re-ID as a secondary task whose accuracy is heavily affected by the primary detection task. As a result, the network is biased to the primary detection task which is not fair to the re-ID task. To solve the problem, we present a simple yet effective approach termed as FairMOT based on the anchor-free object detection architecture CenterNet. Note that it is not a naive combination of CenterNet and re-ID. Instead, we present a bunch of detailed designs which are critical to achieve good tracking results by thorough empirical studies. The resulting approach achieves high accuracy for both detection and tracking. The approach outperforms the state-of-the-art methods by a large margin on several public datasets. The source code and pre-trained models are released at this https URL.</div></details></td>
        <td>MOT/MOTA 83.3</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>27</td>
        <td>fairmot_hrnetv2_w18_d</br>lafpn_30e_576x320</td>
        <td><a href="https://github.com/ifzhang/FairMOT">FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking</a></td>
        <td><details><summary>Abstract</summary><div>Multi-object tracking (MOT) is an important problem in computer vision which has a wide range of applications. Formulating MOT as multi-task learning of object detection and re-ID in a single network is appealing since it allows joint optimization of the two tasks and enjoys high computation efficiency. However, we find that the two tasks tend to compete with each other which need to be carefully addressed. In particular, previous works usually treat re-ID as a secondary task whose accuracy is heavily affected by the primary detection task. As a result, the network is biased to the primary detection task which is not fair to the re-ID task. To solve the problem, we present a simple yet effective approach termed as FairMOT based on the anchor-free object detection architecture CenterNet. Note that it is not a naive combination of CenterNet and re-ID. Instead, we present a bunch of detailed designs which are critical to achieve good tracking results by thorough empirical studies. The resulting approach achieves high accuracy for both detection and tracking. The approach outperforms the state-of-the-art methods by a large margin on several public datasets. The source code and pre-trained models are released at this https URL.</div></details></td>
        <td>COCO/mAP 75</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>28</td>
        <td>jde_darknet53_30e_108</br>8x608</td>
        <td><a href="https://github.com/Zhongdao/Towards-Realtime-MOT">Towards Real-Time Multi-Object Tracking</a></td>
        <td><details><summary>Abstract</summary><div>Modern multiple object tracking (MOT) systems usually follow the \emph{tracking-by-detection} paradigm. It has 1) a detection model for target localization and 2) an appearance embedding model for data association. Having the two models separately executed might lead to efficiency problems, as the running time is simply a sum of the two steps without investigating potential structures that can be shared between them. Existing research efforts on real-time MOT usually focus on the association step, so they are essentially real-time association methods but not real-time MOT system. In this paper, we propose an MOT system that allows target detection and appearance embedding to be learned in a shared model. Specifically, we incorporate the appearance embedding model into a single-shot detector, such that the model can simultaneously output detections and the corresponding embeddings. We further propose a simple and fast association method that works in conjunction with the joint model. In both components the computation cost is significantly reduced compared with former MOT systems, resulting in a neat and fast baseline for future follow-ups on real-time MOT algorithm design. To our knowledge, this work reports the first (near) real-time MOT system, with a running speed of 22 to 40 FPS depending on the input resolution. Meanwhile, its tracking accuracy is comparable to the state-of-the-art trackers embodying separate detection and embedding (SDE) learning (64.4% MOTA \vs 66.1% MOTA on MOT-16 challenge). Code and models are available at \url{this https URL}.</div></details></td>
        <td>COCO/mAP 72</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>29</td>
        <td>yolov3_darknet53_270e</br>_coco</td>
        <td><a href="https://pjreddie.com/darknet/yolo/">YOLOv3: An Incremental Improvement</a></td>
        <td><details><summary>Abstract</summary><div>We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL</div></details></td>
        <td>COCO/mAP 33</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>30</td>
        <td>yolov3_darknet53_270e</br>_coco_FPGM</td>
        <td><a href="https://pjreddie.com/darknet/yolo/">YOLOv3: An Incremental Improvement</a></td>
        <td><details><summary>Abstract</summary><div>We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>31</td>
        <td>yolov3_darknet53_270e</br>_coco_PACT</td>
        <td><a href="https://pjreddie.com/darknet/yolo/">YOLOv3: An Incremental Improvement</a></td>
        <td><details><summary>Abstract</summary><div>We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>32</td>
        <td>yolov3_darknet53_270e</br>_coco_KL</td>
        <td><a href="https://pjreddie.com/darknet/yolo/">YOLOv3: An Incremental Improvement</a></td>
        <td><details><summary>Abstract</summary><div>We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>33</td>
        <td>ppyolo_mbv3_large_coc</br>o</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-YOLO: An Effective and Efficient Implementation of Object Detector</a></td>
        <td><details><summary>Abstract</summary><div>Object detection is one of the most important areas in computer vision, which plays a key role in various practical scenarios. Due to limitation of hardware, it is often necessary to sacrifice accuracy to ensure the infer speed of the detector in practice. Therefore, the balance between effectiveness and efficiency of object detector must be considered. The goal of this paper is to implement an object detector with relatively balanced effectiveness and efficiency that can be directly applied in actual application scenarios, rather than propose a novel detection model. Considering that YOLOv3 has been widely used in practice, we develop a new object detector based on YOLOv3. We mainly try to combine various existing tricks that almost not increase the number of model parameters and FLOPs, to achieve the goal of improving the accuracy of detector as much as possible while ensuring that the speed is almost unchanged. Since all experiments in this paper are conducted based on PaddlePaddle, we call it PP-YOLO. By combining multiple tricks, PP-YOLO can achieve a better balance between effectiveness (45.2% mAP) and efficiency (72.9 FPS), surpassing the existing state-of-the-art detectors such as EfficientDet and YOLOv4.Source code is at this https URL.</div></details></td>
        <td>COCO/mAP 23.2</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>34</td>
        <td>ppyolo_mbv3_large_coc</br>o_PACT</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-YOLO: An Effective and Efficient Implementation of Object Detector</a></td>
        <td><details><summary>Abstract</summary><div>Object detection is one of the most important areas in computer vision, which plays a key role in various practical scenarios. Due to limitation of hardware, it is often necessary to sacrifice accuracy to ensure the infer speed of the detector in practice. Therefore, the balance between effectiveness and efficiency of object detector must be considered. The goal of this paper is to implement an object detector with relatively balanced effectiveness and efficiency that can be directly applied in actual application scenarios, rather than propose a novel detection model. Considering that YOLOv3 has been widely used in practice, we develop a new object detector based on YOLOv3. We mainly try to combine various existing tricks that almost not increase the number of model parameters and FLOPs, to achieve the goal of improving the accuracy of detector as much as possible while ensuring that the speed is almost unchanged. Since all experiments in this paper are conducted based on PaddlePaddle, we call it PP-YOLO. By combining multiple tricks, PP-YOLO can achieve a better balance between effectiveness (45.2% mAP) and efficiency (72.9 FPS), surpassing the existing state-of-the-art detectors such as EfficientDet and YOLOv4.Source code is at this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>35</td>
        <td>ppyolo_mbv3_large_coc</br>o_KL</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-YOLO: An Effective and Efficient Implementation of Object Detector</a></td>
        <td><details><summary>Abstract</summary><div>Object detection is one of the most important areas in computer vision, which plays a key role in various practical scenarios. Due to limitation of hardware, it is often necessary to sacrifice accuracy to ensure the infer speed of the detector in practice. Therefore, the balance between effectiveness and efficiency of object detector must be considered. The goal of this paper is to implement an object detector with relatively balanced effectiveness and efficiency that can be directly applied in actual application scenarios, rather than propose a novel detection model. Considering that YOLOv3 has been widely used in practice, we develop a new object detector based on YOLOv3. We mainly try to combine various existing tricks that almost not increase the number of model parameters and FLOPs, to achieve the goal of improving the accuracy of detector as much as possible while ensuring that the speed is almost unchanged. Since all experiments in this paper are conducted based on PaddlePaddle, we call it PP-YOLO. By combining multiple tricks, PP-YOLO can achieve a better balance between effectiveness (45.2% mAP) and efficiency (72.9 FPS), surpassing the existing state-of-the-art detectors such as EfficientDet and YOLOv4.Source code is at this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>36</td>
        <td>ppyolo_r50vd_dcn_1x_c</br>oco</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-YOLO: An Effective and Efficient Implementation of Object Detector</a></td>
        <td><details><summary>Abstract</summary><div>Object detection is one of the most important areas in computer vision, which plays a key role in various practical scenarios. Due to limitation of hardware, it is often necessary to sacrifice accuracy to ensure the infer speed of the detector in practice. Therefore, the balance between effectiveness and efficiency of object detector must be considered. The goal of this paper is to implement an object detector with relatively balanced effectiveness and efficiency that can be directly applied in actual application scenarios, rather than propose a novel detection model. Considering that YOLOv3 has been widely used in practice, we develop a new object detector based on YOLOv3. We mainly try to combine various existing tricks that almost not increase the number of model parameters and FLOPs, to achieve the goal of improving the accuracy of detector as much as possible while ensuring that the speed is almost unchanged. Since all experiments in this paper are conducted based on PaddlePaddle, we call it PP-YOLO. By combining multiple tricks, PP-YOLO can achieve a better balance between effectiveness (45.2% mAP) and efficiency (72.9 FPS), surpassing the existing state-of-the-art detectors such as EfficientDet and YOLOv4.Source code is at this https URL.</div></details></td>
        <td>COCO/mAP 44.8</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>37</td>
        <td>ppyolo_r50vd_dcn_1x_c</br>oco_FPGM</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-YOLO: An Effective and Efficient Implementation of Object Detector</a></td>
        <td><details><summary>Abstract</summary><div>Object detection is one of the most important areas in computer vision, which plays a key role in various practical scenarios. Due to limitation of hardware, it is often necessary to sacrifice accuracy to ensure the infer speed of the detector in practice. Therefore, the balance between effectiveness and efficiency of object detector must be considered. The goal of this paper is to implement an object detector with relatively balanced effectiveness and efficiency that can be directly applied in actual application scenarios, rather than propose a novel detection model. Considering that YOLOv3 has been widely used in practice, we develop a new object detector based on YOLOv3. We mainly try to combine various existing tricks that almost not increase the number of model parameters and FLOPs, to achieve the goal of improving the accuracy of detector as much as possible while ensuring that the speed is almost unchanged. Since all experiments in this paper are conducted based on PaddlePaddle, we call it PP-YOLO. By combining multiple tricks, PP-YOLO can achieve a better balance between effectiveness (45.2% mAP) and efficiency (72.9 FPS), surpassing the existing state-of-the-art detectors such as EfficientDet and YOLOv4.Source code is at this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>38</td>
        <td>ppyolov2_r50vd_dcn_36</br>5e_coco</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection">PP-YOLOv2: A Practical Object Detector</a></td>
        <td><details><summary>Abstract</summary><div>Being effective and efficient is essential to an object detector for practical use. To meet these two concerns, we comprehensively evaluate a collection of existing refinements to improve the performance of PP-YOLO while almost keep the infer time unchanged. This paper will analyze a collection of refinements and empirically evaluate their impact on the final model performance through incremental ablation study. Things we tried that didn't work will also be discussed. By combining multiple effective refinements, we boost PP-YOLO's performance from 45.9% mAP to 49.5% mAP on COCO2017 test-dev. Since a significant margin of performance has been made, we present PP-YOLOv2. In terms of speed, PP-YOLOv2 runs in 68.9FPS at 640x640 input size. Paddle inference engine with TensorRT, FP16-precision, and batch size = 1 further improves PP-YOLOv2's infer speed, which achieves 106.5 FPS. Such a performance surpasses existing object detectors with roughly the same amount of parameters (i.e., YOLOv4-CSP, YOLOv5l). Besides, PP-YOLOv2 with ResNet101 achieves 50.3% mAP on COCO2017 test-dev. Source code is at this https URL.</div></details></td>
        <td>COCO/mAP 49.1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>39</td>
        <td>deformable_detr_r50_1</br>x_coco</td>
        <td><a href="https://github.com/fundamentalvision/Deformable-DETR">Deformable DETR: Deformable Transformers for End-to-End Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>-</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>40</td>
        <td>detr_r50_1x_coco</td>
        <td><a href="https://github.com/facebookresearch/detr">DETR: End-to-End Object Detection with Transformers</a></td>
        <td><details><summary>Abstract</summary><div>-</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>41</td>
        <td>sparse_rcnn_r50_fpn_3</br>x_pro100_coco</td>
        <td><a href="https://github.com/PeizeSun/SparseR-CNN">Sparse R-CNN: End-to-End Object Detection with Learnable Proposals</a></td>
        <td><details><summary>Abstract</summary><div>-</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>42</td>
        <td>retinanet_r50_fpn_1x_</br>coco</td>
        <td><a href="https://github.com/facebookresearch/Detectron">Focal Loss for Dense Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>-</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/docs/tutorials/GETTING_STARTED.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>43</td>
        <td>yolox_s_300e_coco</td>
        <td><a href="https://github.com/Megvii-BaseDetection/YOLOX">YOLOX: Exceeding YOLO Series in 2021</a></td>
        <td><details><summary>Abstract</summary><div>In this report, we present some experienced improvements to YOLO series, forming a new high-performance detector -- YOLOX. We switch the YOLO detector to an anchor-free manner and conduct other advanced detection techniques, i.e., a decoupled head and the leading label assignment strategy SimOTA to achieve state-of-the-art results across a large scale range of models: For YOLO-Nano with only 0.91M parameters and 1.08G FLOPs, we get 25.3% AP on COCO, surpassing NanoDet by 1.8% AP; for YOLOv3, one of the most widely used detectors in industry, we boost it to 47.3% AP on COCO, outperforming the current best practice by 3.0% AP; for YOLOX-L with roughly the same amount of parameters as YOLOv4-CSP, YOLOv5-L, we achieve 50.0% AP on COCO at a speed of 68.9 FPS on Tesla V100, exceeding YOLOv5-L by 1.8% AP. Further, we won the 1st Place on Streaming Perception Challenge (Workshop on Autonomous Driving at CVPR 2021) using a single YOLOX-L model. We hope this report can provide useful experience for developers and researchers in practical scenes, and we also provide deploy versions with ONNX, TensorRT, NCNN, and Openvino supported</div></details></td>
        <td>COCO/mAP 50.1</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>44</td>
        <td>tood_r50_fpn_1x_coco</td>
        <td><a href="https://github.com/fcjian/TOOD">TOOD: Task-aligned One-stage Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>One-stage object detection is commonly implemented by optimizing two sub-tasks: object classification and localization, using heads with two parallel branches, which might lead to a certain level of spatial misalignment in predictions between the two tasks. In this work, we propose a Task-aligned One-stage Object Detection (TOOD) that explicitly aligns the two tasks in a learning-based manner. First, we design a novel Task-aligned Head (T-Head) which offers a better balance between learning task-interactive and task-specific features, as well as a greater flexibility to learn the alignment via a task-aligned predictor. Second, we propose Task Alignment Learning (TAL) to explicitly pull closer (or even unify) the optimal anchors for the two tasks during training via a designed sample assignment scheme and a task-aligned loss. Extensive experiments are conducted on MS-COCO, where TOOD achieves a 51.1 AP at single-model single-scale testing. This surpasses the recent one-stage detectors by a large margin, such as ATSS (47.7 AP), GFL (48.2 AP), and PAA (49.0 AP), with fewer parameters and FLOPs. Qualitative results also demonstrate the effectiveness of TOOD for better aligning the tasks of object classification and localization</div></details></td>
        <td>COCO/mAP 42.5</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>45</td>
        <td>gfl_r50_fpn_1x_coco</td>
        <td><a href="https://github.com/fcjian/TOOD">Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection</a></td>
        <td><details><summary>Abstract</summary><div>One-stage detector basically formulates object detection as dense classification and localization. The classification is usually optimized by Focal Loss and the box location is commonly learned under Dirac delta distribution. A recent trend for one-stage detectors is to introduce an individual prediction branch to estimate the quality of localization, where the predicted quality facilitates the classification to improve detection performance. This paper delves into the representations of the above three fundamental elements: quality estimation, classification and localization. Two problems are discovered in existing practices, including (1) the inconsistent usage of the quality estimation and classification between training and inference and (2) the inflexible Dirac delta distribution for localization when there is ambiguity and uncertainty in complex scenes. To address the problems, we design new representations for these elements. Specifically, we merge the quality estimation into the class prediction vector to form a joint representation of localization quality and classification, and use a vector to represent arbitrary distribution of box locations. The improved representations eliminate the inconsistency risk and accurately depict the flexible distribution in real data, but contain continuous labels, which is beyond the scope of Focal Loss. We then propose Generalized Focal Loss (GFL) that generalizes Focal Loss from its discrete form to the continuous version for successful optimization. On COCO test-dev, GFL achieves 45.0\% AP using ResNet-101 backbone, surpassing state-of-the-art SAPD (43.5\%) and ATSS (43.6\%) with higher or comparable inference speed, under the same backbone and training settings. Notably, our best model can achieve a single-model single-scale AP of 48.2\%, at 10 FPS on a single 2080Ti GPU</div></details></td>
        <td>COCO/mAP 41.2</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>46</td>
        <td>PP-YOLOE-R</td>
        <td><a href="https://paperswithcode.com/sota/object-detection-in-aerial-images-on-dota-1">https://arxiv.org/abs/2211.02386v1</a></td>
        <td><details><summary>Abstract</summary><div>Arbitrary-oriented object detection is a fundamental task in visual scenes involving aerial images and scene text. In this report, we present PP-YOLOE-R, an efficient anchor-free rotated object detector based on PP-YOLOE. We introduce a bag of useful tricks in PP-YOLOE-R to improve detection precision with marginal extra parameters and computational cost. As a result, PP-YOLOE-R-l and PP-YOLOE-R-x achieve 78.14 and 78.28 mAP respectively on DOTA 1.0 dataset with single-scale training and testing, which outperform almost all other rotated object detectors. With multi-scale training and testing, PP-YOLOE-R-l and PP-YOLOE-R-x further improve the detection precision to 80.02 and 80.73 mAP. In this case, PP-YOLOE-R-x surpasses all anchor-free methods and demonstrates competitive performance to state-of-the-art anchor-based two-stage models. Further, PP-YOLOE-R is deployment friendly and PP-YOLOE-R-s/m/l/x can reach 69.8/55.1/48.3/37.1 FPS respectively on RTX 2080 Ti with TensorRT and FP16-precision. </div></details></td>
        <td>PP-YOLOE-R-s DOTA 1.0</br> mAP=73.82%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/rotate/ppyoloe_r/README_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>47</td>
        <td>OC-SORT</td>
        <td><a href="https://paperswithcode.com/sota/multi-object-tracking-on-mot17">https://arxiv.org/abs/2203.14360</a></td>
        <td><details><summary>Abstract</summary><div>Multi-Object Tracking (MOT) has rapidly progressed with the development of object detection and re-identification. However, motion modeling, which facilitates object association by forecasting short-term trajectories with past observations, has been relatively under-explored in recent years. Current motion models in MOT typically assume that the object motion is linear in a small time window and needs continuous observations, so these methods are sensitive to occlusions and non-linear motion and require high frame-rate videos. In this work, we show that a simple motion model can obtain state-of-the-art tracking performance without other cues like appearance. We emphasize the role of "observation" when recovering tracks from being lost and reducing the error accumulated by linear motion models during the lost period. We thus name the proposed method as Observation-Centric SORT, OC-SORT for short. It remains simple, online, and real-time but improves robustness over occlusion and non-linear motion. It achieves 63.2 and 62.1 HOTA on MOT17 and MOT20, respectively, surpassing all published methods. It also sets new states of the art on KITTI Pedestrian Tracking and DanceTrack where the object motion is highly non-linear</div></details></td>
        <td>MOT-17 half train MOTA=50.1%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/mot/ocsort/README.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>48</td>
        <td>ViTDET</td>
        <td><a href="https://paperswithcode.com/sota/object-detection-on-coco">https://arxiv.org/abs/2111.11429</a></td>
        <td><details><summary>Abstract</summary><div>Object detection is a central downstream task used to test if pre-trained network parameters confer benefits, such as improved accuracy or training speed. The complexity of object detection methods can make this benchmarking non-trivial when new architectures, such as Vision Transformer (ViT) models, arrive. These difficulties (e.g., architectural incompatibility, slow training, high memory consumption, unknown training formulae, etc.) have prevented recent studies from benchmarking detection transfer learning with standard ViT models. In this paper, we present training techniques that overcome these challenges, enabling the use of standard ViT models as the backbone of Mask R-CNN. These tools facilitate the primary goal of our study: we compare five ViT initializations, including recent state-of-the-art self-supervised learning methods, supervised initialization, and a strong random initialization baseline. Our results show that recent masking-based unsupervised learning methods may, for the first time, provide convincing transfer learning improvements on COCO, increasing box AP up to 4% (absolute) over supervised and prior self-supervised pre-training methods. Moreover, these masking-based initializations scale better, with the improvement growing as model size increases</div></details></td>
        <td>ViT-Large AP=55.7%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/vitdet">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>49</td>
        <td>FCOS-R</td>
        <td><a href="暂无">https://arxiv.org/abs/2111.10780</a></td>
        <td><details><summary>Abstract</summary><div>Existing anchor-base oriented object detection methods have achieved amazing results, but these methods require some manual preset boxes, which introduces additional hyperparameters and calculations. The existing anchor-free methods usually have complex architectures and are not easy to deploy. Our goal is to propose an algorithm which is simple and easy-to-deploy for aerial image detection. In this paper, we present a one-stage anchor-free rotated object detector (FCOSR) based on FCOS, which can be deployed on most platforms. The FCOSR has a simple architecture consisting of only convolution layers. Our work focuses on the label assignment strategy for the training phase. We use ellipse center sampling method to define a suitable sampling region for oriented bounding box (OBB). The fuzzy sample assignment strategy provides reasonable labels for overlapping objects. To solve the insufficient sampling problem, a multi-level sampling module is designed. These strategies allocate more appropriate labels to training samples. Our algorithm achieves 79.25, 75.41, and 90.15 mAP on DOTA1.0, DOTA1.5, and HRSC2016 datasets, respectively. FCOSR demonstrates superior performance to other methods in single-scale evaluation. We convert a lightweight FCOSR model to TensorRT format, which achieves 73.93 mAP on DOTA1.0 at a speed of 10.68 FPS on Jetson Xavier NX with single scale. </div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleDetection/blob/develop/configs/rotate/fcosr/README_en.md">快速开始</a></td>
        </td>
    </tr>
</table>
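
上表中各检测模型的训练、评估流程请以对应行的"快速开始"链接为准。部署侧的通用做法大致相同:先用套件提供的导出脚本将训练好的模型导出为静态图,再用 Paddle Inference 的 Python API 加载推理。下面是一个极简示意(假设模型已导出到 `./inference_model/`,文件名、输入尺寸均为示例,具体预处理、后处理因模型而异,此处省略):

```python
import numpy as np
from paddle.inference import Config, create_predictor

# 示意:假设已用套件的导出脚本将所选模型导出到 ./inference_model/
# (model.pdmodel / model.pdiparams 为常见的导出文件名,实际以各模型"快速开始"文档为准)
config = Config("./inference_model/model.pdmodel",
                "./inference_model/model.pdiparams")
config.disable_gpu()  # 如有 GPU,可改为 config.enable_use_gpu(200, 0)

predictor = create_predictor(config)

# 这里用随机数据演示前向流程,真实使用需按模型要求做缩放、归一化等预处理;
# 部分检测模型导出后有多个输入(如 im_shape、scale_factor),需逐一填充
input_name = predictor.get_input_names()[0]
input_handle = predictor.get_input_handle(input_name)
fake_input = np.random.rand(1, 3, 640, 640).astype("float32")
input_handle.reshape(fake_input.shape)
input_handle.copy_from_cpu(fake_input)

predictor.run()

output_name = predictor.get_output_names()[0]
output = predictor.get_output_handle(output_name).copy_to_cpu()
print("output shape:", output.shape)
```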

### PaddleSeg
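
除按下表"快速开始"链接中的配置文件训练外,表中多数分割模型也可以通过 `paddleseg` Python 包直接构建。下面以 BiSeNetV2 为例给出一个极简示意(仅供参考:假设已 `pip install paddleseg`,且该模型在 `paddleseg.models` 下有同名实现,具体类名、参数与输出结构以 PaddleSeg 官方 API 文档为准):

```python
# 示意代码:构建表中的 BiSeNetV2 并跑一次前向
import paddle
from paddleseg.models import BiSeNetV2

model = BiSeNetV2(num_classes=19)  # Cityscapes 共 19 个类别
model.eval()

# 随机输入仅用于演示,实际使用需按模型要求做 resize、normalize 等预处理
x = paddle.randn([1, 3, 512, 1024])
with paddle.no_grad():
    logit_list = model(x)  # PaddleSeg 模型通常返回 logits 列表,第一个元素为主输出
print(logit_list[0].shape)  # 预期约为 [1, 19, 512, 1024]
```
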
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>PP-HumanSeg-Server (DeepLabv3p_resnet50)</td>
        <td><a href="https://paperswithcode.com/paper/encoder-decoder-with-atrous-separable">Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0% and 82.1% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https://github.com/tensorflow/models/tree/master/research/deeplab.</div></details></td>
        <td>内部人像数据集/mIoU=97.16%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>PP-Matting</td>
        <td><a href="https://paperswithcode.com/paper/is-a-green-screen-really-necessary-for-real">Is a Green Screen Really Necessary for Real-Time Portrait Matting?</a></td>
        <td><details><summary>Abstract</summary><div>For portrait matting without the green screen, existing works either require auxiliary inputs that are costly to obtain or use multiple models that are computationally expensive. Consequently, they are unavailable in real-time applications. In contrast, we present a light-weight matting objective decomposition network (MODNet), which can process portrait matting from a single input image in real time. The design of MODNet benefits from optimizing a series of correlated sub-objectives simultaneously via explicit constraints. Moreover, since trimap-free methods usually suffer from the domain shift problem in practice, we introduce (1) a self-supervised strategy based on sub-objectives consistency to adapt MODNet to real-world data and (2) a one-frame delay trick to smooth the results when applying MODNet to portrait video sequence. MODNet is easy to be trained in an end-to-end style. It is much faster than contemporaneous matting methods and runs at 63 frames per second. On a carefully designed portrait matting benchmark newly proposed in this work, MODNet greatly outperforms prior trimap-free methods. More importantly, our method achieves remarkable results in daily photos and videos. Now, do you really need a green screen for real-time portrait matting?</div></details></td>
        <td>PPM-100/mIoU=112.73</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>FCN_HRNet_W18_small</td>
        <td><a href="https://paperswithcode.com/paper/190807919">Deep High-Resolution Representation Learning for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{this https URL}}.</div></details></td>
        <td>内部人像数据集/mIoU=94.51%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>FCN_HRNet_W18</td>
        <td><a href="https://paperswithcode.com/paper/190807919">Deep High-Resolution Representation Learning for Visual Recognition</a></td>
        <td><details><summary>Abstract</summary><div>High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\url{this https URL}}.</div></details></td>
        <td>内部人像数据集/mIoU=94.51%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>Fast-SCNN</td>
        <td><a href="https://paperswithcode.com/paper/fast-scnn-fast-semantic-segmentation-network">Fast-SCNN: Fast Semantic Segmentation Network</a></td>
        <td><details><summary>Abstract</summary><div>The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications.</div></details></td>
        <td>Cityscapes/mIoU=69.31%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>OCRNet_HRNetW48</td>
        <td><a href="https://paperswithcode.com/paper/object-contextual-representations-for">Object-Contextual Representations for Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the semantic segmentation problem with a focus on the context aggregation strategy. Our motivation is that the label of a pixel is the category of the object that the pixel belongs to. We present a simple yet effective approach, object-contextual representations, characterizing a pixel by exploiting the representation of the corresponding object class. First, we learn object regions under the supervision of ground-truth segmentation. Second, we compute the object region representation by aggregating the representations of the pixels lying in the object region. Last, % the representation similarity we compute the relation between each pixel and each object region and augment the representation of each pixel with the object-contextual representation which is a weighted aggregation of all the object region representations according to their relations with the pixel. We empirically demonstrate that the proposed approach achieves competitive performance on various challenging semantic segmentation benchmarks: Cityscapes, ADE20K, LIP, PASCAL-Context, and COCO-Stuff. Cityscapes, ADE20K, LIP, PASCAL-Context, and COCO-Stuff. Our submission "HRNet + OCR + SegFix" achieves 1-st place on the Cityscapes leaderboard by the time of submission. Code is available at: https://git.io/openseg and https://git.io/HRNet.OCR. We rephrase the object-contextual representation scheme using the Transformer encoder-decoder framework. The details are presented in~Section3.3.</div></details></td>
        <td>Cityscapes/mIoU=80.67%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>OCRNet_HRNetW18</td>
        <td><a href="https://paperswithcode.com/paper/object-contextual-representations-for">Object-Contextual Representations for Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the semantic segmentation problem with a focus on the context aggregation strategy. Our motivation is that the label of a pixel is the category of the object that the pixel belongs to. We present a simple yet effective approach, object-contextual representations, characterizing a pixel by exploiting the representation of the corresponding object class. First, we learn object regions under the supervision of ground-truth segmentation. Second, we compute the object region representation by aggregating the representations of the pixels lying in the object region. Last, % the representation similarity we compute the relation between each pixel and each object region and augment the representation of each pixel with the object-contextual representation which is a weighted aggregation of all the object region representations according to their relations with the pixel. We empirically demonstrate that the proposed approach achieves competitive performance on various challenging semantic segmentation benchmarks: Cityscapes, ADE20K, LIP, PASCAL-Context, and COCO-Stuff. Cityscapes, ADE20K, LIP, PASCAL-Context, and COCO-Stuff. Our submission "HRNet + OCR + SegFix" achieves 1-st place on the Cityscapes leaderboard by the time of submission. Code is available at: https://git.io/openseg and https://git.io/HRNet.OCR. We rephrase the object-contextual representation scheme using the Transformer encoder-decoder framework. The details are presented in~Section3.3.</div></details></td>
        <td>Cityscapes/mIoU=80.67%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>BiSeNetv2</td>
        <td><a href="https://paperswithcode.com/paper/bisenet-v2-bilateral-network-with-guided">BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>The low-level details and high-level semantics are both essential to the semantic segmentation task. However, to speed up the model inference, current approaches almost always sacrifice the low-level details, which leads to a considerable accuracy decrease. We propose to treat these spatial details and categorical semantics separately to achieve high accuracy and high efficiency for realtime semantic segmentation. To this end, we propose an efficient and effective architecture with a good trade-off between speed and accuracy, termed Bilateral Segmentation Network (BiSeNet V2). This architecture involves: (i) a Detail Branch, with wide channels and shallow layers to capture low-level details and generate high-resolution feature representation; (ii) a Semantic Branch, with narrow channels and deep layers to obtain high-level semantic context. The Semantic Branch is lightweight due to reducing the channel capacity and a fast-downsampling strategy. Furthermore, we design a Guided Aggregation Layer to enhance mutual connections and fuse both types of feature representation. Besides, a booster training strategy is designed to improve the segmentation performance without any extra inference cost. Extensive quantitative and qualitative evaluations demonstrate that the proposed architecture performs favourably against a few state-of-the-art real-time semantic segmentation approaches. Specifically, for a 2,048x1,024 input, we achieve 72.6% Mean IoU on the Cityscapes test set with a speed of 156 FPS on one NVIDIA GeForce GTX 1080 Ti card, which is significantly faster than existing methods, yet we achieve better segmentation accuracy</div></details></td>
        <td>Cityscapes/mIoU=73.19%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>ENet</td>
        <td><a href="https://paperswithcode.com/paper/dual-attention-network-for-scene-segmentation">Dual Attention Network for Scene Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. We make the code and trained model publicly available at https://github.com/junfu1115/DANet</div></details></td>
        <td>Cityscapes/mIoU=80.27%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>SegFormer_B0</td>
        <td><a href="https://paperswithcode.com/paper/segformer-simple-and-efficient-design-for">SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers</a></td>
        <td><details><summary>Abstract</summary><div>We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perception (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combining both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code will be released at: github.com/NVlabs/SegFormer.</div></details></td>
        <td>Cityscapes/mIoU=76.73%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>STDC_STDC1</td>
        <td><a href="https://paperswithcode.com/paper/rethinking-bisenet-for-real-time-semantic">Rethinking BiSeNet For Real-time Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>BiSeNet has been proved to be a popular two-stream network for real-time segmentation. However, its principle of adding an extra path to encode spatial information is time-consuming, and the backbones borrowed from pretrained tasks, e.g., image classification, may be inefficient for image segmentation due to the deficiency of task-specific design. To handle these problems, we propose a novel and efficient structure named Short-Term Dense Concatenate network (STDC network) by removing structure redundancy. Specifically, we gradually reduce the dimension of feature maps and use the aggregation of them for image representation, which forms the basic module of STDC network. In the decoder, we propose a Detail Aggregation module by integrating the learning of spatial information into low-level layers in single-stream manner. Finally, the low-level features and deep features are fused to predict the final segmentation results. Extensive experiments on Cityscapes and CamVid dataset demonstrate the effectiveness of our method by achieving promising trade-off between segmentation accuracy and inference speed. On Cityscapes, we achieve 71.9% mIoU on the test set with a speed of 250.4 FPS on NVIDIA GTX 1080Ti, which is 45.2% faster than the latest methods, and achieve 76.8% mIoU with 97.0 FPS while inferring on higher resolution images.</div></details></td>
        <td>Cityscapes/mIoU=74.74%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>PFPNNet</td>
        <td><a href="https://paperswithcode.com/paper/dual-attention-network-for-scene-segmentation">Dual Attention Network for Scene Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. We make the code and trained model publicly available at https://github.com/junfu1115/DANet</div></details></td>
        <td>Cityscapes/mIoU=80.27%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>DDRNet_23 (DDRNet)</td>
        <td><a href="https://paperswithcode.com/paper/dual-attention-network-for-scene-segmentation">Dual Attention Network for Scene Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. We make the code and trained model publicly available at https://github.com/junfu1115/DANet</div></details></td>
        <td>Cityscapes/mIoU=80.27%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>CCNet</td>
        <td><a href="https://paperswithcode.com/paper/dual-attention-network-for-scene-segmentation">Dual Attention Network for Scene Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. We make the code and trained model publicly available at https://github.com/junfu1115/DANet</div></details></td>
        <td>Cityscapes/mIoU=80.27%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>DeepLabv3p_resnet50_cityscapes</td>
        <td><a href="https://paperswithcode.com/paper/dual-attention-network-for-scene-segmentation">Dual Attention Network for Scene Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. We make the code and trained model publicly available at https://github.com/junfu1115/DANet</div></details></td>
        <td>Cityscapes/mIoU=80.27%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>PP-LiteSeg(STDC-1)</td>
        <td><a href="https://paperswithcode.com/paper/dual-attention-network-for-scene-segmentation">Dual Attention Network for Scene Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. We make the code and trained model publicly available at https://github.com/junfu1115/DANet</div></details></td>
        <td>Cityscapes/mIoU=80.27%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>PP-LiteSeg(STDC-2)</td>
        <td><a href="https://paperswithcode.com/paper/dual-attention-network-for-scene-segmentation">Dual Attention Network for Scene Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. We make the code and trained model publicly available at https://github.com/junfu1115/DANet</div></details></td>
        <td>Cityscapes/mIoU=80.27%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>GloRe</td>
        <td><a href="https://paperswithcode.com/paper/graph-based-global-reasoning-networks">Graph-based global reasoning networks</a></td>
        <td><details><summary>Abstract</summary><div>Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.</div></details></td>
        <td>Cityscapes/Resnet50/mIoU=78.26%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>BiSeNetV1</td>
        <td><a href="https://paperswithcode.com/paper/bisenet-bilateral-segmentation-network-for">BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Semantic segmentation requires both rich spatial information and sizeable receptive field. However, modern approaches usually compromise spatial resolution to achieve real-time inference speed, which leads to poor performance. In this paper, we address this dilemma with a novel Bilateral Segmentation Network (BiSeNet). We first design a Spatial Path with a small stride to preserve the spatial information and generate high-resolution features. Meanwhile, a Context Path with a fast downsampling strategy is employed to obtain sufficient receptive field. On top of the two paths, we introduce a new Feature Fusion Module to combine features efficiently. The proposed architecture makes a right balance between the speed and segmentation performance on Cityscapes, CamVid, and COCO-Stuff datasets. Specifically, for a 2048x1024 input, we achieve 68.4% Mean IOU on the Cityscapes test dataset with speed of 105 FPS on one NVIDIA Titan XP card, which is significantly faster than the existing methods with comparable performance.</div></details></td>
        <td>Cityscapes/mIoU=75.19%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>UPERNet</td>
        <td><a href="https://paperswithcode.com/paper/fastfcn-rethinking-dilated-convolution-in-the">FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Modern approaches for semantic segmentation usually employ dilated convolutions in the backbone to extract high-resolution feature maps, which brings heavy computation complexity and memory footprint. To replace the time and memory consuming dilated convolutions, we propose a novel joint upsampling module named Joint Pyramid Upsampling (JPU) by formulating the task of extracting high-resolution feature maps into a joint upsampling problem. With the proposed JPU, our method reduces the computation complexity by more than three times without performance loss. Experiments show that JPU is superior to other upsampling modules, which can be plugged into many existing approaches to reduce computation complexity and improve performance. By replacing dilated convolutions with the proposed JPU module, our method achieves the state-of-the-art performance in Pascal Context dataset (mIoU of 53.13%) and ADE20K dataset (final score of 0.5584) while running 3 times faster.</div></details></td>
        <td>ADE20K/mIoU=43.76%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>21</td>
        <td>HRNetW48Contrast</td>
        <td><a href="https://paperswithcode.com/paper/exploring-cross-image-pixel-contrast-for">Exploring Cross-Image Pixel Contrast for Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Current semantic segmentation methods focus only on mining "local" context, i.e., dependencies between pixels within individual images, by context-aggregation modules (e.g., dilated convolution, neural attention) or structure-aware optimization criteria (e.g., IoU-like loss). However, they ignore "global" context of the training data, i.e., rich semantic relations between pixels across different images. Inspired by the recent advance in unsupervised contrastive representation learning, we propose a pixel-wise contrastive framework for semantic segmentation in the fully supervised setting. The core idea is to enforce pixel embeddings belonging to a same semantic class to be more similar than embeddings from different classes. It raises a pixel-wise metric learning paradigm for semantic segmentation, by explicitly exploring the structures of labeled pixels, which were rarely explored before. Our method can be effortlessly incorporated into existing segmentation frameworks without extra overhead during testing. We experimentally show that, with famous segmentation models (i.e., DeepLabV3, HRNet, OCR) and backbones (i.e., ResNet, HR-Net), our method brings consistent performance improvements across diverse datasets (i.e., Cityscapes, PASCAL-Context, COCO-Stuff, CamVid). We expect this work will encourage our community to rethink the current de facto training paradigm in fully supervised semantic segmentation.</div></details></td>
        <td>Cityscapes/mIoU=82.3%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>22</td>
        <td>ENCNet</td>
        <td><a href="https://paperswithcode.com/paper/context-encoding-for-semantic-segmentation">ENCNet: Context Encoding for Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpass the winning entry of COCO-Place Challenge in 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10 times more layers. The source code for the complete system are publicly available.</div></details></td>
        <td>Cityscapes/mIoU=79.42%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>23</td>
        <td>ESPNetV1</td>
        <td><a href="https://paperswithcode.com/paper/espnet-efficient-spatial-pyramid-of-dilated">ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8% less. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively.</div></details></td>
        <td>Cityscapes/mIoU=61.82%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>24</td>
        <td>ESPNetV2</td>
        <td><a href="https://paperswithcode.com/paper/espnetv2-a-light-weight-power-efficient-and">ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on four different tasks: (1) object classification, (2) semantic segmentation, (3) object detection, and (4) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network outperforms ESPNet by 4-5% and has 2-4x fewer FLOPs on the PASCAL VOC and the Cityscapes dataset. Compared to YOLOv2 on the MS-COCO object detection, ESPNetv2 delivers 4.4% higher accuracy with 6x fewer FLOPs. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2</div></details></td>
        <td>Cityscapes/mIoU=70.88%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>25</td>
        <td>DMNet</td>
        <td><a href="https://paperswithcode.com/paper/dynamic-multi-scale-filters-for-semantic">Dynamic Multi-Scale Filters for Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Multi-scale representation provides an effective way to address scale variation of objects and stuff in semantic segmentation. Previous works construct multi-scale representation by utilizing different filter sizes, expanding filter sizes with dilated filters or pooling grids, and the parameters of these filters are fixed after training. These methods often suffer from heavy computational cost or have more parameters, and are not adaptive to the input image during inference. To address these problems, this paper proposes a Dynamic Multi-scale Network (DMNet) to adaptively capture multi-scale contents for predicting pixel-level semantic labels. DMNet is composed of multiple Dynamic Convolutional Modules (DCMs) arranged in parallel, each of which exploits context-aware filters to estimate semantic representation for a specific scale. The outputs of multiple DCMs are further integrated for final segmentation. We conduct extensive experiments to evaluate our DMNet on three challenging semantic segmentation and scene parsing datasets, PASCAL VOC 2012, Pascal-Context, and ADE20K. DMNet achieves a new record 84.4% mIoU on PASCAL VOC 2012 test set without MS COCO pre-trained and post-processing, and also obtains state-of-the-art performance on Pascal-Context and ADE20K.</div></details></td>
        <td>Cityscapes/mIoU=79.67%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>26</td>
        <td>PP-HumanSegV2</td>
        <td><a href="暂无">PP-HumanSeg-V2: Revisiting Real-Time Portrait Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>We propose PP-Humanseg-V2, a novel model for real-time portrait segmentation task. Specifically, PP-HumanSeg-V2 employs the popular encoder-decoder architecture with a context aggregation module. First, PP-HumanSeg-V2 adopt simplified MobileNetV3 as backbone to extract hierarchical feature maps. Then, SPPM serves as context aggregation module to model long-range dependencies. Finally, we design a multi-level fusion module in the decoder to obtain the portrait segmentation result.Based on the experimental results on EG1800 and PP-HumanSeg14K dataset, PP-HumanSeg-V2 achieves a state-of-art performance in terms of segmentation accuracy, inference speed.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>27</td>
        <td>PP-MattingV2</td>
        <td><a href="暂无">PP-MattingV2 for Efficient Matting Task</a></td>
        <td><details><summary>Abstract</summary><div>暂无</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>28</td>
        <td>LRASPP-MV3</td>
        <td><a href="https://paperswithcode.com/paper/searching-for-mobilenetv3">Searching for MobileNetV3</a></td>
        <td><details><summary>Abstract</summary><div>We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2\% more accurate on ImageNet classification while reducing latency by 15\% compared to MobileNetV2. MobileNetV3-Small is 4.6\% more accurate while reducing latency by 5\% compared to MobileNetV2. MobileNetV3-Large detection is 25\% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30\% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/tree/develop/configs/lraspp">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>29</td>
        <td>UperNet</td>
        <td><a href="https://paperswithcode.com/paper/unified-perceptual-parsing-for-scene">Unified Perceptual Parsing for Scene Understanding</a></td>
        <td><details><summary>Abstract</summary><div>Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/tree/develop/configs/lraspp">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>30</td>
        <td>TopFormer</td>
        <td><a href="https://paperswithcode.com/paper/topformer-token-pyramid-transformer-for">TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Although vision transformers (ViTs) have achieved great success in computer vision, the heavy computational cost hampers their applications to dense prediction tasks such as semantic segmentation on mobile devices. In this paper, we present a mobile-friendly architecture named \textbf{To}ken \textbf{P}yramid Vision Trans\textbf{former} (\textbf{TopFormer}). The proposed \textbf{TopFormer} takes Tokens from various scales as input to produce scale-aware semantic features, which are then injected into the corresponding tokens to augment the representation. Experimental results demonstrate that our method significantly outperforms CNN- and ViT-based networks across several semantic segmentation datasets and achieves a good trade-off between accuracy and latency. On the ADE20K dataset, TopFormer achieves 5\% higher accuracy in mIoU than MobileNetV3 with lower latency on an ARM-based mobile device. Furthermore, the tiny version of TopFormer achieves real-time inference on an ARM-based mobile device with competitive results. The code and models are available at: https://github.com/hustvl/TopFormer</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/tree/develop/configs/topformer">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>31</td>
        <td>MscaleOCRNet-PSA</td>
        <td><a href="https://paperswithcode.com/paper/polarized-self-attention-towards-high-quality-1">PSA: Polarized Self-Attention: Towards High-quality Pixel-wise Regression</a></td>
        <td><details><summary>Abstract</summary><div>Pixel-wise regression is probably the most common problem in fine-grained computer vision tasks, such as estimating keypoint heatmaps and segmentation masks. These regression problems are very challenging particularly because they require, at low computation overheads, modeling long-range dependencies on high-resolution inputs/outputs to estimate the highly nonlinear pixel-wise semantics. While attention mechanisms in Deep Convolutional Neural Networks(DCNNs) has become popular for boosting long-range dependencies, element-specific attention, such as Nonlocal blocks, is highly complex and noise-sensitive to learn, and most of simplified attention hybrids try to reach the best compromise among multiple types of tasks. In this paper, we present the Polarized Self-Attention(PSA) block that incorporates two critical designs towards high-quality pixel-wise regression: (1) Polarized filtering: keeping high internal resolution in both channel and spatial attention computation while completely collapsing input tensors along their counterpart dimensions. (2) Enhancement: composing non-linearity that directly fits the output distribution of typical fine-grained regression, such as the 2D Gaussian distribution (keypoint heatmaps), or the 2D Binormial distribution (binary segmentation masks). PSA appears to have exhausted the representation capacity within its channel-only and spatial-only branches, such that there is only marginal metric differences between its sequential and parallel layouts. Experimental results show that PSA boosts standard baselines by  points, and boosts state-of-the-arts by  points on 2D pose estimation and semantic segmentation benchmarks.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/tree/develop/configs/mscale_ocrnet">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>32</td>
        <td>RTFormer</td>
        <td><a href="https://paperswithcode.com/paper/rtformer-efficient-design-for-real-time">RTFormer: Efficient Design for Real-Time Semantic Segmentation with Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Recently, transformer-based networks have shown impressive results in semantic segmentation. Yet for real-time semantic segmentation, pure CNN-based approaches still dominate in this field, due to the time-consuming computation mechanism of transformer. We propose RTFormer, an efficient dual-resolution transformer for real-time semantic segmentation, which achieves better trade-off between performance and efficiency than CNN-based models. To achieve high inference efficiency on GPU-like devices, our RTFormer leverages GPU-Friendly Attention with linear complexity and discards the multi-head mechanism. Besides, we find that cross-resolution attention is more efficient to gather global context information for high-resolution branch by spreading the high level knowledge learned from low-resolution branch. Extensive experiments on mainstream benchmarks demonstrate the effectiveness of our proposed RTFormer, it achieves state-of-the-art on Cityscapes, CamVid and COCOStuff, and shows promising results on ADE20K. Code is available at PaddleSeg: https://github.com/PaddlePaddle/PaddleSeg.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/tree/develop/configs/rtformer">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>33</td>
        <td>UHRNet</td>
        <td><a href="https://paperswithcode.com/paper/u-hrnet-delving-into-improving-semantic">U-HRNet: Delving into Improving Semantic Representation of High Resolution Network for Dense Prediction</a></td>
        <td><details><summary>Abstract</summary><div>High resolution and advanced semantic representation are both vital for dense prediction. Empirically, low-resolution feature maps often achieve stronger semantic representation, and high-resolution feature maps generally can better identify local features such as edges, but contains weaker semantic information. Existing state-of-the-art frameworks such as HRNet has kept low-resolution and high-resolution feature maps in parallel, and repeatedly exchange the information across different resolutions. However, we believe that the lowest-resolution feature map often contains the strongest semantic information, and it is necessary to go through more layers to merge with high-resolution feature maps, while for high-resolution feature maps, the computational cost of each convolutional layer is very large, and there is no need to go through so many layers. Therefore, we designed a U-shaped High-Resolution Network (U-HRNet), which adds more stages after the feature map with strongest semantic representation and relaxes the constraint in HRNet that all resolutions need to be calculated parallel for a newly added stage. More calculations are allocated to low-resolution feature maps, which significantly improves the overall semantic representation. U-HRNet is a substitute for the HRNet backbone and can achieve significant improvement on multiple semantic segmentation and depth prediction datasets, under the exactly same training and inference setting, with almost no increasing in the amount of calculation. Code is available at PaddleSeg: https://github.com/PaddlePaddle/PaddleSeg.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/tree/develop/configs/uhrnet">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>34</td>
        <td>UNETR</td>
        <td><a href="https://paperswithcode.com/paper/unetr-transformers-for-3d-medical-image">UNETR: Transformers for 3D Medical Image Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Fully Convolutional Neural Networks (FCNNs) with contracting and expanding paths have shown prominence for the majority of medical image segmentation applications since the past decade. In FCNNs, the encoder plays an integral role by learning both global and local features and contextual representations which can be utilized for semantic output prediction by the decoder. Despite their success, the locality of convolutional layers in FCNNs, limits the capability of learning long-range spatial dependencies. Inspired by the recent success of transformers for Natural Language Processing (NLP) in long-range sequence learning, we reformulate the task of volumetric (3D) medical image segmentation as a sequence-to-sequence prediction problem. We introduce a novel architecture, dubbed as UNEt TRansformers (UNETR), that utilizes a transformer as the encoder to learn sequence representations of the input volume and effectively capture the global multi-scale information, while also following the successful "U-shaped" network design for the encoder and decoder. The transformer encoder is directly connected to a decoder via skip connections at different resolutions to compute the final semantic segmentation output. We have validated the performance of our method on the Multi Atlas Labeling Beyond The Cranial Vault (BTCV) dataset for multi-organ segmentation and the Medical Segmentation Decathlon (MSD) dataset for brain tumor and spleen segmentation tasks. Our benchmarks demonstrate new state-of-the-art performance on the BTCV leaderboard. Code: https://monai.io/research/unetr</div></details></td>
        <td>MSD-brain/Dice=71.8%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/MedicalSeg/configs/msd_brain_seg/README.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>35</td>
        <td>TransUnet</td>
        <td><a href="https://paperswithcode.com/paper/transunet-transformers-make-strong-encoders">TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Medical image segmentation is an essential prerequisite for developing healthcare systems, especially for disease diagnosis and treatment planning. On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard and achieved tremendous success. However, due to the intrinsic locality of convolution operations, U-Net generally demonstrates limitations in explicitly modeling long-range dependency. Transformers, designed for sequence-to-sequence prediction, have emerged as alternative architectures with innate global self-attention mechanisms, but can result in limited localization abilities due to insufficient low-level details. In this paper, we propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation. On one hand, the Transformer encodes tokenized image patches from a convolution neural network (CNN) feature map as the input sequence for extracting global contexts. On the other hand, the decoder upsamples the encoded features which are then combined with the high-resolution CNN feature maps to enable precise localization. We argue that Transformers can serve as strong encoders for medical image segmentation tasks, with the combination of U-Net to enhance finer details by recovering localized spatial information. TransUNet achieves superior performances to various competing methods on different medical applications including multi-organ segmentation and cardiac segmentation. Code and models are available at https://github.com/Beckschen/TransUNet.</div></details></td>
        <td>synapse/Dice=81.05%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/MedicalSeg/configs/synapse/README.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>36</td>
        <td>nnFormer</td>
        <td><a href="https://paperswithcode.com/method/nnformer">nnFormer: Interleaved Transformer for Volumetric Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>nnFormer, or not-another transFormer, is a semantic segmentation model with an interleaved architecture based on empirical combination of self-attention and convolution. Firstly, a light-weight convolutional embedding layer is used ahead of transformer blocks. In comparison to directly flattening raw pixels and applying 1D pre-processing, the convolutional embedding layer encodes precise (i.e., pixel-level) spatial information and provides low-level yet high-resolution 3D features. After the embedding block, transformer and convolutional down-sampling blocks are interleaved to fully entangle long-term dependencies with high-level and hierarchical object concepts at various scales, which helps improve the generalization ability and robustness of learned representations.</div></details></td>
        <td>acdc/Dice=91.78%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/MedicalSeg/configs/acdc/README.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>37</td>
        <td>SwinUNet</td>
        <td><a href="https://paperswithcode.com/paper/swin-unet-unet-like-pure-transformer-for">Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>In the past few years, convolutional neural networks (CNNs) have achieved milestones in medical image analysis. Especially, the deep neural networks based on U-shaped architecture and skip-connections have been widely applied in a variety of medical image tasks. However, although CNN has achieved excellent performance, it cannot learn global and long-range semantic information interaction well due to the locality of the convolution operation. In this paper, we propose Swin-Unet, which is an Unet-like pure Transformer for medical image segmentation. The tokenized image patches are fed into the Transformer-based U-shaped Encoder-Decoder architecture with skip-connections for local-global semantic feature learning. Specifically, we use hierarchical Swin Transformer with shifted windows as the encoder to extract context features. And a symmetric Swin Transformer-based decoder with patch expanding layer is designed to perform the up-sampling operation to restore the spatial resolution of the feature maps. Under the direct down-sampling and up-sampling of the inputs and outputs by 4x, experiments on multi-organ and cardiac segmentation tasks demonstrate that the pure Transformer-based U-shaped Encoder-Decoder network outperforms those methods with full-convolution or the combination of transformer and convolution. The codes and trained models will be publicly available at https://github.com/HuCaoFighting/Swin-Unet.</div></details></td>
        <td>synapse/Dice=82.062%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/MedicalSeg/configs/synapse/README.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>38</td>
        <td>nnUNet</td>
        <td><a href="https://paperswithcode.com/paper/nnu-net-self-adapting-framework-for-u-net"> nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation</a></td>
        <td><details><summary>Abstract</summary><div>The U-Net was presented in 2015. With its straight-forward and successful architecture it quickly evolved to a commonly used benchmark in medical image segmentation. The adaptation of the U-Net to novel problems, however, comprises several degrees of freedom regarding the exact architecture, preprocessing, training and inference. These choices are not independent of each other and substantially impact the overall performance. The present paper introduces the nnU-Net ('no-new-Net'), which refers to a robust and self-adapting framework on the basis of 2D and 3D vanilla U-Nets. We argue the strong case for taking away superfluous bells and whistles of many proposed network designs and instead focus on the remaining aspects that make out the performance and generalizability of a method. We evaluate the nnU-Net in the context of the Medical Segmentation Decathlon challenge, which measures segmentation performance in ten disciplines comprising distinct entities, image modalities, image geometries and dataset sizes, with no manual adjustments between datasets allowed. At the time of manuscript submission, nnU-Net achieves the highest mean dice scores across all classes and seven phase 1 tasks (except class 1 in BrainTumour) in the online leaderboard of the challenge.</div></details></td>
        <td>MSD-lung/Dice=68.281%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/MedicalSeg/configs/nnunet/msd_lung/README.md">快速开始</a></td>
        </td>
    </tr>
</table>
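
上表中 MedicalSeg 相关模型(UNETR、TransUnet、nnFormer、SwinUNet、nnUNet 等)均以 Dice 系数作为评测指标。下面给出一个最小的 Dice 计算示意,仅基于 NumPy,函数名与示例掩膜均为自行假设,供理解指标含义参考:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    """二值分割的 Dice 系数:2 * |A ∩ B| / (|A| + |B|)。"""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# 用法示意:pred_mask 与 gt_mask 为同尺寸的 0/1 分割掩膜(此处为假设数据)
pred_mask = np.array([[1, 1, 0], [0, 1, 0]])
gt_mask = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred_mask, gt_mask))  # 约 0.667
```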

### PaddleOCR
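
下表汇总了 PaddleOCR 的检测、识别与端到端模型。作为补充,下面给出一个通过 paddleocr whl 包调用 PP-OCR 系列模型进行整图文字识别的最小示意(以 2.x 版本为例,图片路径为假设值,返回结果的嵌套层级随版本略有差异,请以对应版本的快速开始文档为准):

```python
# 安装:pip install paddleocr(2.x 版本)
from paddleocr import PaddleOCR

# 首次运行会自动下载 PP-OCR 系列的检测、方向分类与识别模型
ocr = PaddleOCR(use_angle_cls=True, lang="ch")
result = ocr.ocr("example.jpg", cls=True)  # "example.jpg" 为假设的本地图片路径

# 2.6+ 版本按图片返回嵌套列表;每一行为 [文本框坐标, (文本, 置信度)]
for box, (text, score) in result[0]:
    print(text, score)
```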
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>ch_ppocr_mobile_v2.0_det</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>ch_ppocr_mobile_v2.0_det_FPGM</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>ch_ppocr_mobile_v2.0_det_PACT</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>ch_ppocr_mobile_v2.0_det_KL</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>ch_ppocr_mobile_v2.0_rec</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>ch_ppocr_mobile_v2.0_rec_FPGM</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>ch_ppocr_mobile_v2.0_rec_PACT</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>ch_ppocr_mobile_v2.0_rec_KL</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>ch_ppocr_mobile_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>ch_ppocr_server_v2.0_det</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>ch_ppocr_server_v2.0_rec</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>ch_ppocr_server_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocr-a-practical-ultra-lightweight-ocr">PP-OCR: A Practical Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>The Optical Character Recognition (OCR) systems have been widely used in various of application scenarios, such as office automation (OA) systems, factory automations, online educations, map productions etc. However, OCR is still a challenging task due to the various of text appearances and the demand of computational efficiency. In this paper, we propose a practical ultra lightweight OCR system, i.e., PP-OCR. The overall model size of the PP-OCR is only 3.5M for recognizing 6622 Chinese characters and 2.8M for recognizing 63 alphanumeric symbols, respectively. We introduce a bag of strategies to either enhance the model ability or reduce the model size. The corresponding ablation experiments with the real data are also provided. Meanwhile, several pre-trained models for the Chinese and English recognition are released, including a text detector (97K images are used), a direction classifier (600K images are used) as well as a text recognizer (17.9M images are used). Besides, the proposed PP-OCR are also verified in several other language recognition tasks, including French, Korean, Japanese and German. All of the above mentioned models are open-sourced and the codes are available in the GitHub repository, i.e., this https URL.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>ch_PP-OCRv2_det</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv2-bag-of-tricks-for-ultra-lightweight">PP-OCRv2: Bag of Tricks for Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical Character Recognition (OCR) systems have been widely used in various of application scenarios. Designing an OCR system is still a challenging task. In previous work, we proposed a practical ultra lightweight OCR system (PP-OCR) to balance the accuracy against the efficiency. In order to improve the accuracy of PP-OCR and keep high efficiency, in this paper, we propose a more robust OCR system, i.e. PP-OCRv2. We introduce bag of tricks to train a better text detector and a better text recognizer, which include Collaborative Mutual Learning (CML), CopyPaste, Lightweight CPUNetwork (LCNet), Unified-Deep Mutual Learning (U-DML) and Enhanced CTCLoss. Experiments on real data show that the precision of PP-OCRv2 is 7% higher than PP-OCR under the same inference cost. It is also comparable to the server models of the PP-OCR which uses ResNet series as backbones. All of the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>ch_PP-OCRv2_det_PACT</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv2-bag-of-tricks-for-ultra-lightweight">PP-OCRv2: Bag of Tricks for Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical Character Recognition (OCR) systems have been widely used in various of application scenarios. Designing an OCR system is still a challenging task. In previous work, we proposed a practical ultra lightweight OCR system (PP-OCR) to balance the accuracy against the efficiency. In order to improve the accuracy of PP-OCR and keep high efficiency, in this paper, we propose a more robust OCR system, i.e. PP-OCRv2. We introduce bag of tricks to train a better text detector and a better text recognizer, which include Collaborative Mutual Learning (CML), CopyPaste, Lightweight CPUNetwork (LCNet), Unified-Deep Mutual Learning (U-DML) and Enhanced CTCLoss. Experiments on real data show that the precision of PP-OCRv2 is 7% higher than PP-OCR under the same inference cost. It is also comparable to the server models of the PP-OCR which uses ResNet series as backbones. All of the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>ch_PP-OCRv2_det_KL</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv2-bag-of-tricks-for-ultra-lightweight">PP-OCRv2: Bag of Tricks for Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical Character Recognition (OCR) systems have been widely used in various of application scenarios. Designing an OCR system is still a challenging task. In previous work, we proposed a practical ultra lightweight OCR system (PP-OCR) to balance the accuracy against the efficiency. In order to improve the accuracy of PP-OCR and keep high efficiency, in this paper, we propose a more robust OCR system, i.e. PP-OCRv2. We introduce bag of tricks to train a better text detector and a better text recognizer, which include Collaborative Mutual Learning (CML), CopyPaste, Lightweight CPUNetwork (LCNet), Unified-Deep Mutual Learning (U-DML) and Enhanced CTCLoss. Experiments on real data show that the precision of PP-OCRv2 is 7% higher than PP-OCR under the same inference cost. It is also comparable to the server models of the PP-OCR which uses ResNet series as backbones. All of the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>ch_PP-OCRv2_rec</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv2-bag-of-tricks-for-ultra-lightweight">PP-OCRv2: Bag of Tricks for Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical Character Recognition (OCR) systems have been widely used in various of application scenarios. Designing an OCR system is still a challenging task. In previous work, we proposed a practical ultra lightweight OCR system (PP-OCR) to balance the accuracy against the efficiency. In order to improve the accuracy of PP-OCR and keep high efficiency, in this paper, we propose a more robust OCR system, i.e. PP-OCRv2. We introduce bag of tricks to train a better text detector and a better text recognizer, which include Collaborative Mutual Learning (CML), CopyPaste, Lightweight CPUNetwork (LCNet), Unified-Deep Mutual Learning (U-DML) and Enhanced CTCLoss. Experiments on real data show that the precision of PP-OCRv2 is 7% higher than PP-OCR under the same inference cost. It is also comparable to the server models of the PP-OCR which uses ResNet series as backbones. All of the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>ch_PP-OCRv2_rec_PACT</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv2-bag-of-tricks-for-ultra-lightweight">PP-OCRv2: Bag of Tricks for Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical Character Recognition (OCR) systems have been widely used in various of application scenarios. Designing an OCR system is still a challenging task. In previous work, we proposed a practical ultra lightweight OCR system (PP-OCR) to balance the accuracy against the efficiency. In order to improve the accuracy of PP-OCR and keep high efficiency, in this paper, we propose a more robust OCR system, i.e. PP-OCRv2. We introduce bag of tricks to train a better text detector and a better text recognizer, which include Collaborative Mutual Learning (CML), CopyPaste, Lightweight CPUNetwork (LCNet), Unified-Deep Mutual Learning (U-DML) and Enhanced CTCLoss. Experiments on real data show that the precision of PP-OCRv2 is 7% higher than PP-OCR under the same inference cost. It is also comparable to the server models of the PP-OCR which uses ResNet series as backbones. All of the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>ch_PP-OCRv2_rec_KL</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv2-bag-of-tricks-for-ultra-lightweight">PP-OCRv2: Bag of Tricks for Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical Character Recognition (OCR) systems have been widely used in various of application scenarios. Designing an OCR system is still a challenging task. In previous work, we proposed a practical ultra lightweight OCR system (PP-OCR) to balance the accuracy against the efficiency. In order to improve the accuracy of PP-OCR and keep high efficiency, in this paper, we propose a more robust OCR system, i.e. PP-OCRv2. We introduce bag of tricks to train a better text detector and a better text recognizer, which include Collaborative Mutual Learning (CML), CopyPaste, Lightweight CPUNetwork (LCNet), Unified-Deep Mutual Learning (U-DML) and Enhanced CTCLoss. Experiments on real data show that the precision of PP-OCRv2 is 7% higher than PP-OCR under the same inference cost. It is also comparable to the server models of the PP-OCR which uses ResNet series as backbones. All of the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>ch_PP-OCRv2</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv2-bag-of-tricks-for-ultra-lightweight">PP-OCRv2: Bag of Tricks for Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical Character Recognition (OCR) systems have been widely used in various of application scenarios. Designing an OCR system is still a challenging task. In previous work, we proposed a practical ultra lightweight OCR system (PP-OCR) to balance the accuracy against the efficiency. In order to improve the accuracy of PP-OCR and keep high efficiency, in this paper, we propose a more robust OCR system, i.e. PP-OCRv2. We introduce bag of tricks to train a better text detector and a better text recognizer, which include Collaborative Mutual Learning (CML), CopyPaste, Lightweight CPUNetwork (LCNet), Unified-Deep Mutual Learning (U-DML) and Enhanced CTCLoss. Experiments on real data show that the precision of PP-OCRv2 is 7% higher than PP-OCR under the same inference cost. It is also comparable to the server models of the PP-OCR which uses ResNet series as backbones. All of the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>det_mv3_db_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/real-time-scene-text-detection-with">Real-time Scene Text Detection with Differentiable Binarization</a></td>
        <td><details><summary>Abstract</summary><div>Recently, segmentation-based methods are quite popular in scene text detection, as the segmentation results can more accurately describe scene text of various shapes such as curve text. However, the post-processing of binarization is essential for segmentation-based detection, which converts probability maps produced by a segmentation method into bounding boxes/regions of text. In this paper, we propose a module named Differentiable Binarization (DB), which can perform the binarization process in a segmentation network. Optimized along with a DB module, a segmentation network can adaptively set the thresholds for binarization, which not only simplifies the post-processing but also enhances the performance of text detection. Based on a simple segmentation network, we validate the performance improvements of DB on five benchmark datasets, which consistently achieves state-of-the-art results, in terms of both detection accuracy and speed. In particular, with a light-weight backbone, the performance improvements by DB are significant so that we can look for an ideal tradeoff between detection accuracy and efficiency. Specifically, with a backbone of ResNet-18, our detector achieves an F-measure of 82.8, running at 62 FPS, on the MSRA-TD500 dataset. Code is available at: this https URL</div></details></td>
        <td>icdar2015 / hmean / 75.12%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>21</td>
        <td>det_r50_vd_db_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/real-time-scene-text-detection-with">Real-time Scene Text Detection with Differentiable Binarization</a></td>
        <td><details><summary>Abstract</summary><div>Recently, segmentation-based methods are quite popular in scene text detection, as the segmentation results can more accurately describe scene text of various shapes such as curve text. However, the post-processing of binarization is essential for segmentation-based detection, which converts probability maps produced by a segmentation method into bounding boxes/regions of text. In this paper, we propose a module named Differentiable Binarization (DB), which can perform the binarization process in a segmentation network. Optimized along with a DB module, a segmentation network can adaptively set the thresholds for binarization, which not only simplifies the post-processing but also enhances the performance of text detection. Based on a simple segmentation network, we validate the performance improvements of DB on five benchmark datasets, which consistently achieves state-of-the-art results, in terms of both detection accuracy and speed. In particular, with a light-weight backbone, the performance improvements by DB are significant so that we can look for an ideal tradeoff between detection accuracy and efficiency. Specifically, with a backbone of ResNet-18, our detector achieves an F-measure of 82.8, running at 62 FPS, on the MSRA-TD500 dataset. Code is available at: this https URL</div></details></td>
        <td>icdar2015 / hmean / 82.38%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>22</td>
        <td>det_mv3_east_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/east-an-efficient-and-accurate-scene-text">EAST: an efficient and accurate scene text detector</a></td>
        <td><details><summary>Abstract</summary><div>Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.</div></details></td>
        <td>icdar2015 / hmean / 80.03%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>23</td>
        <td>det_r50_vd_east_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/east-an-efficient-and-accurate-scene-text">EAST: an efficient and accurate scene text detector</a></td>
        <td><details><summary>Abstract</summary><div>Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.</div></details></td>
        <td>icdar2015 / hmean / 86.25%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>24</td>
        <td>det_r50_vd_sast_icdar15_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/a-single-shot-arbitrarily-shaped-text">A Single-Shot Arbitrarily-Shaped Text Detector based on Context Attended Multi-Task Learning</a></td>
        <td><details><summary>Abstract</summary><div>Detecting scene text of arbitrary shapes has been a challenging task over the past years. In this paper, we propose a novel segmentation-based text detector, namely SAST, which employs a context attended multi-task learning framework based on a Fully Convolutional Network (FCN) to learn various geometric properties for the reconstruction of polygonal representation of text regions. Taking sequential characteristics of text into consideration, a Context Attention Block is introduced to capture long-range dependencies of pixel information to obtain a more reliable segmentation. In post-processing, a Point-to-Quad assignment method is proposed to cluster pixels into text instances by integrating both high-level object knowledge and low-level pixel information in a single shot. Moreover, the polygonal representation of arbitrarily-shaped text can be extracted with the proposed geometric properties much more effectively. Experiments on several benchmarks, including ICDAR2015, ICDAR2017-MLT, SCUT-CTW1500, and Total-Text, demonstrate that SAST achieves better or comparable performance in terms of accuracy. Furthermore, the proposed algorithm runs at 27.63 FPS on SCUT-CTW1500 with a Hmean of 81.0% on a single NVIDIA Titan Xp graphics card, surpassing most of the existing segmentation-based methods.</div></details></td>
        <td>icdar2015 / hmean / 87.42%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>25</td>
        <td>det_r50_vd_sast_totaltext_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/a-single-shot-arbitrarily-shaped-text">A Single-Shot Arbitrarily-Shaped Text Detector based on Context Attended Multi-Task Learning</a></td>
        <td><details><summary>Abstract</summary><div>Detecting scene text of arbitrary shapes has been a challenging task over the past years. In this paper, we propose a novel segmentation-based text detector, namely SAST, which employs a context attended multi-task learning framework based on a Fully Convolutional Network (FCN) to learn various geometric properties for the reconstruction of polygonal representation of text regions. Taking sequential characteristics of text into consideration, a Context Attention Block is introduced to capture long-range dependencies of pixel information to obtain a more reliable segmentation. In post-processing, a Point-to-Quad assignment method is proposed to cluster pixels into text instances by integrating both high-level object knowledge and low-level pixel information in a single shot. Moreover, the polygonal representation of arbitrarily-shaped text can be extracted with the proposed geometric properties much more effectively. Experiments on several benchmarks, including ICDAR2015, ICDAR2017-MLT, SCUT-CTW1500, and Total-Text, demonstrate that SAST achieves better or comparable performance in terms of accuracy. Furthermore, the proposed algorithm runs at 27.63 FPS on SCUT-CTW1500 with a Hmean of 81.0% on a single NVIDIA Titan Xp graphics card, surpassing most of the existing segmentation-based methods.</div></details></td>
        <td>total-text / hmean / 83.66%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>26</td>
        <td>det_r50_vd_pse_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/shape-robust-text-detection-with-progressive-1">Shape Robust Text Detection with Progressive Scale Expansion Network</a></td>
        <td><details><summary>Abstract</summary><div>Scene text detection has witnessed rapid progress especially with the recent development of convolutional neural networks. However, there still exists two challenges which prevent the algorithm into industry applications. On the one hand, most of the state-of-art algorithms require quadrangle bounding box which is in-accurate to locate the texts with arbitrary shape. On the other hand, two text instances which are close to each other may lead to a false detection which covers both instances. Traditionally, the segmentation-based approach can relieve the first problem but usually fail to solve the second challenge. To address these two challenges, in this paper, we propose a novel Progressive Scale Expansion Network (PSENet), which can precisely detect text instances with arbitrary shapes. More specifically, PSENet generates the different scale of kernels for each text instance, and gradually expands the minimal scale kernel to the text instance with the complete shape. Due to the fact that there are large geometrical margins among the minimal scale kernels, our method is effective to split the close text instances, making it easier to use segmentation-based methods to detect arbitrary-shaped text instances. Extensive experiments on CTW1500, Total-Text, ICDAR 2015 and ICDAR 2017 MLT validate the effectiveness of PSENet. Notably, on CTW1500, a dataset full of long curve texts, PSENet achieves a F-measure of 74.3% at 27 FPS, and our best F-measure (82.2%) outperforms state-of-art algorithms by 6.6%. </div></details></td>
        <td>icdar2015 / hmean / 82.55%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>27</td>
        <td>det_mv3_pse_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/shape-robust-text-detection-with-progressive-1">Shape Robust Text Detection with Progressive Scale Expansion Network</a></td>
        <td><details><summary>Abstract</summary><div>Scene text detection has witnessed rapid progress especially with the recent development of convolutional neural networks. However, there still exists two challenges which prevent the algorithm into industry applications. On the one hand, most of the state-of-art algorithms require quadrangle bounding box which is in-accurate to locate the texts with arbitrary shape. On the other hand, two text instances which are close to each other may lead to a false detection which covers both instances. Traditionally, the segmentation-based approach can relieve the first problem but usually fail to solve the second challenge. To address these two challenges, in this paper, we propose a novel Progressive Scale Expansion Network (PSENet), which can precisely detect text instances with arbitrary shapes. More specifically, PSENet generates the different scale of kernels for each text instance, and gradually expands the minimal scale kernel to the text instance with the complete shape. Due to the fact that there are large geometrical margins among the minimal scale kernels, our method is effective to split the close text instances, making it easier to use segmentation-based methods to detect arbitrary-shaped text instances. Extensive experiments on CTW1500, Total-Text, ICDAR 2015 and ICDAR 2017 MLT validate the effectiveness of PSENet. Notably, on CTW1500, a dataset full of long curve texts, PSENet achieves a F-measure of 74.3% at 27 FPS, and our best F-measure (82.2%) outperforms state-of-art algorithms by 6.6%. </div></details></td>
        <td>icdar2015 / hmean / 75.89%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>28</td>
        <td>rec_mv3_none_bilstm_ctc_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/what-is-wrong-with-scene-text-recognition">What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 79.97%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>29</td>
        <td>rec_r34_vd_none_bilstm_ctc_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/what-is-wrong-with-scene-text-recognition">What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 82.76%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>30</td>
        <td>rec_mv3_none_none_ctc_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/what-is-wrong-with-scene-text-recognition">What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 78.05%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>31</td>
        <td>rec_r34_vd_none_none_ctc_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/what-is-wrong-with-scene-text-recognition">What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 80.9%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>32</td>
        <td>rec_mv3_tps_bilstm_att_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/what-is-wrong-with-scene-text-recognition">What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 82.5%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>33</td>
        <td>rec_r34_vd_tps_bilstm_att_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/what-is-wrong-with-scene-text-recognition">What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 83.6%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>34</td>
        <td>rec_mv3_tps_bilstm_ctc_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/what-is-wrong-with-scene-text-recognition">What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 81.42%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>35</td>
        <td>rec_r34_vd_tps_bilstm_ctc_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/what-is-wrong-with-scene-text-recognition">What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 84.44%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>36</td>
        <td>rec_r50_fpn_vd_none_srn</td>
        <td><a href="https://paperswithcode.com/paper/towards-accurate-scene-text-recognition-with">Towards Accurate Scene Text Recognition with Semantic Reasoning Networks</a></td>
        <td><details><summary>Abstract</summary><div>Scene text image contains two levels of contents: visual texture and semantic information. Although the previous scene text recognition methods have made great progress over the past few years, the research on mining semantic information to assist text recognition attracts less attention, only RNN-like structures are explored to implicitly model semantic information. However, we observe that RNN based methods have some obvious shortcomings, such as time-dependent decoding manner and one-way serial transmission of semantic context, which greatly limit the help of semantic information and the computation efficiency. To mitigate these limitations, we propose a novel end-to-end trainable framework named semantic reasoning network (SRN) for accurate scene text recognition, where a global semantic reasoning module (GSRM) is introduced to capture global semantic context through multi-way parallel transmission. The state-of-the-art results on 7 public benchmarks, including regular text, irregular text and non-Latin long text, verify the effectiveness and robustness of the proposed method. In addition, the speed of SRN has significant advantages over the RNN based methods, demonstrating its value in practical use.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 88.52%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>37</td>
        <td>rec_mtb_nrtr</td>
        <td><a href="https://paperswithcode.com/paper/nrtr-a-no-recurrence-sequence-to-sequence">NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Scene text recognition has attracted a great many researches due to its importance to various applications. Existing methods mainly adopt recurrence or convolution based networks. Though have obtained good performance, these methods still suffer from two limitations: slow training speed due to the internal recurrence of RNNs, and high complexity due to stacked convolutional layers for long-term feature extraction. This paper, for the first time, proposes a no-recurrence sequence-to-sequence text recognizer, named NRTR, that dispenses with recurrences and convolutions entirely. NRTR follows the encoder-decoder paradigm, where the encoder uses stacked self-attention to extract image features, and the decoder applies stacked self-attention to recognize texts based on encoder output. NRTR relies solely on self-attention mechanism thus could be trained with more parallelization and less complexity. Considering scene image has large variation in text and background, we further design a modality-transform block to effectively transform 2D input images to 1D sequences, combined with the encoder to extract more discriminative features. NRTR achieves state-of-the-art or highly competitive performance on both regular and irregular benchmarks, while requires only a small fraction of training time compared to the best model from the literature (at least 8 times faster). </div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 84.3%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>38</td>
        <td>rec_r31_sar</td>
        <td><a href="https://paperswithcode.com/paper/show-attend-and-read-a-simple-and-strong">Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Recognizing irregular text in natural scene images is challenging due to the large variance in text appearance, such as curvature, orientation and distortion. Most existing approaches rely heavily on sophisticated model designs and/or extra fine-grained annotations, which, to some extent, increase the difficulty in algorithm implementation and data collection. In this work, we propose an easy-to-implement strong baseline for irregular scene text recognition, using off-the-shelf neural network components and only word-level annotations. It is composed of a 31-layer ResNet, an LSTM-based encoder-decoder framework and a 2-dimensional attention module. Despite its simplicity, the proposed method is robust and achieves state-of-the-art performance on both regular and irregular scene text recognition benchmarks.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 87.2%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>39</td>
        <td>rec_resnet_stn_bilstm_att</td>
        <td><a href="https://paperswithcode.com/paper/seed-semantics-enhanced-encoder-decoder">SEED: Semantics Enhanced Encoder-Decoder Framework for Scene Text Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Scene text recognition is a hot research topic in computer vision. Recently, many recognition methods based on the encoder-decoder framework have been proposed, and they can handle scene texts of perspective distortion and curve shape. Nevertheless, they still face lots of challenges like image blur, uneven illumination, and incomplete characters. We argue that most encoder-decoder methods are based on local visual features without explicit global semantic information. In this work, we propose a semantics enhanced encoder-decoder framework to robustly recognize low-quality scene texts. The semantic information is used both in the encoder module for supervision and in the decoder module for initializing. In particular, the state-of-the art ASTER method is integrated into the proposed framework as an exemplar. Extensive experiments demonstrate that the proposed framework is more robust for low-quality text images, and achieves state-of-the-art results on several benchmark datasets. </div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 85.2%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_en/algorithm_overview_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>40</td>
        <td>en_server_pgnetA</td>
        <td><a href="https://paperswithcode.com/paper/pgnet-real-time-arbitrarily-shaped-text">PGNet: Real-time Arbitrarily-Shaped Text Spotting with Point Gathering Network</a></td>
        <td><details><summary>Abstract</summary><div>The reading of arbitrarily-shaped text has received increasing research attention. However, existing text spotters are mostly built on two-stage frameworks or character-based methods, which suffer from either Non-Maximum Suppression (NMS), Region-of-Interest (RoI) operations, or character-level annotations. In this paper, to address the above problems, we propose a novel fully convolutional Point Gathering Network (PGNet) for reading arbitrarily-shaped text in real-time. The PGNet is a single-shot text spotter, where the pixel-level character classification map is learned with proposed PG-CTC loss avoiding the usage of character-level annotations. With a PG-CTC decoder, we gather high-level character classification vectors from two-dimensional space and decode them into text symbols without NMS and RoI operations involved, which guarantees high efficiency. Additionally, reasoning the relations between each character and its neighbors, a graph refinement module (GRM) is proposed to optimize the coarse recognition and improve the end-to-end performance. Experiments prove that the proposed method achieves competitive accuracy, meanwhile significantly improving the running speed. In particular, in Total-Text, it runs at 46.7 FPS, surpassing the previous spotters with a large margin.</div></details></td>
        <td>total-text / e2e_f_score / 60.03%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.1/doc/doc_en/pgnet_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>41</td>
        <td>layoutxlm_ser</td>
        <td><a href="https://paperswithcode.com/paper/layoutparser-a-unified-toolkit-for-deep">LayoutParser: A Unified Toolkit for DeepLearning Based Document Image Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of important innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces layoutparser, an open-source library for streamlining the usage of DL in DIA research and applications. The core layoutparser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout detection, character recognition, and many other document processing tasks. To promote extensibility, layoutparser also incorporates a community platform for sharing both pre-trained models and full document digitization pipelines. We demonstrate that layoutparser is helpful for both lightweight and large-scale digitization pipelines in real-word use cases. The library is publicly available at https://layout-parser.github.io/.</div></details></td>
        <td>PubLayNet / mAP / 93.6%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>42</td>
        <td>det_r50_dcn_fce_ctw_v2.0</td>
        <td><a href="https://paperswithcode.com/paper/fourier-contour-embedding-for-arbitrary">Fourier Contour Embedding for Arbitrary-Shaped Text Detection</a></td>
        <td><details><summary>Abstract</summary><div>One of the main challenges for arbitrary-shaped text detection is to design a good text instance representation that allows networks to learn diverse text geometry variances. Most of existing methods model text instances in image spatial domain via masks or contour point sequences in the Cartesian or the polar coordinate system. However, the mask representation might lead to expensive post-processing, while the point sequence one may have limited capability to model texts with highly-curved shapes. To tackle these problems, we model text instances in the Fourier domain and propose one novel Fourier Contour Embedding (FCE) method to represent arbitrary shaped text contours as compact signatures. We further construct FCENet with a backbone, feature pyramid networks (FPN) and a simple post-processing with the Inverse Fourier Transformation (IFT) and Non-Maximum Suppression (NMS). Different from previous methods, FCENet first predicts compact Fourier signatures of text instances, and then reconstructs text contours via IFT and NMS during test. Extensive experiments demonstrate that FCE is accurate and robust to fit contours of scene texts even with highly-curved shapes, and also validate the effectiveness and the good generalization of FCENet for arbitrary-shaped text detection. Furthermore, experimental results show that our FCENet is superior to the state-of-the-art (SOTA) methods on CTW1500 and Total-Text, especially on challenging highly-curved text subset.</div></details></td>
        <td>CTW1500 / hmean / 85.27%</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>43</td>
        <td>ch_PP-OCRv3_det</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv3-more-attempts-for-the-improvement-of">PP-OCRv3: More Attempts for the Improvement of Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical character recognition (OCR) technology has been widely used in various scenes, as shown in Figure 1. Designing a practical OCR system is still a meaningful but challenging task. In previous work, considering the efficiency and accuracy, we proposed a practical ultra lightweight OCR system (PP-OCR), and an optimized version PP-OCRv2. In order to further improve the performance of PP-OCRv2, a more robust OCR system PP-OCRv3 is proposed in this paper. PP-OCRv3 upgrades the text detection model and text recognition model in 9 aspects based on PP-OCRv2. For text detector, we introduce a PAN module with large receptive field named LK-PAN, a FPN module with residual attention mechanism named RSE-FPN, and DML distillation strategy. For text recognizer, the base model is replaced from CRNN to SVTR, and we introduce lightweight text recognition network SVTR LCNet, guided training of CTC by attention, data augmentation strategy TextConAug, better pre-trained model by self-supervised TextRotNet, UDML, and UIM to accelerate the model and improve the effect. Experiments on real data show that the hmean of PP-OCRv3 is 5% higher than PP-OCRv2 under comparable inference speed. All the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>44</td>
        <td>ch_PP-OCRv3_det_PACT</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv3-more-attempts-for-the-improvement-of">PP-OCRv3: More Attempts for the Improvement of Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical character recognition (OCR) technology has been widely used in various scenes, as shown in Figure 1. Designing a practical OCR system is still a meaningful but challenging task. In previous work, considering the efficiency and accuracy, we proposed a practical ultra lightweight OCR system (PP-OCR), and an optimized version PP-OCRv2. In order to further improve the performance of PP-OCRv2, a more robust OCR system PP-OCRv3 is proposed in this paper. PP-OCRv3 upgrades the text detection model and text recognition model in 9 aspects based on PP-OCRv2. For text detector, we introduce a PAN module with large receptive field named LK-PAN, a FPN module with residual attention mechanism named RSE-FPN, and DML distillation strategy. For text recognizer, the base model is replaced from CRNN to SVTR, and we introduce lightweight text recognition network SVTR LCNet, guided training of CTC by attention, data augmentation strategy TextConAug, better pre-trained model by self-supervised TextRotNet, UDML, and UIM to accelerate the model and improve the effect. Experiments on real data show that the hmean of PP-OCRv3 is 5% higher than PP-OCRv2 under comparable inference speed. All the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>45</td>
        <td>ch_PP-OCRv3_rec</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv3-more-attempts-for-the-improvement-of">PP-OCRv3: More Attempts for the Improvement of Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical character recognition (OCR) technology has been widely used in various scenes, as shown in Figure 1. Designing a practical OCR system is still a meaningful but challenging task. In previous work, considering the efficiency and accuracy, we proposed a practical ultra lightweight OCR system (PP-OCR), and an optimized version PP-OCRv2. In order to further improve the performance of PP-OCRv2, a more robust OCR system PP-OCRv3 is proposed in this paper. PP-OCRv3 upgrades the text detection model and text recognition model in 9 aspects based on PP-OCRv2. For text detector, we introduce a PAN module with large receptive field named LK-PAN, a FPN module with residual attention mechanism named RSE-FPN, and DML distillation strategy. For text recognizer, the base model is replaced from CRNN to SVTR, and we introduce lightweight text recognition network SVTR LCNet, guided training of CTC by attention, data augmentation strategy TextConAug, better pre-trained model by self-supervised TextRotNet, UDML, and UIM to accelerate the model and improve the effect. Experiments on real data show that the hmean of PP-OCRv3 is 5% higher than PP-OCRv2 under comparable inference speed. All the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>46</td>
        <td>ch_PP-OCRv3_rec_PACT</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv3-more-attempts-for-the-improvement-of">PP-OCRv3: More Attempts for the Improvement of Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical character recognition (OCR) technology has been widely used in various scenes, as shown in Figure 1. Designing a practical OCR system is still a meaningful but challenging task. In previous work, considering the efficiency and accuracy, we proposed a practical ultra lightweight OCR system (PP-OCR), and an optimized version PP-OCRv2. In order to further improve the performance of PP-OCRv2, a more robust OCR system PP-OCRv3 is proposed in this paper. PP-OCRv3 upgrades the text detection model and text recognition model in 9 aspects based on PP-OCRv2. For text detector, we introduce a PAN module with large receptive field named LK-PAN, a FPN module with residual attention mechanism named RSE-FPN, and DML distillation strategy. For text recognizer, the base model is replaced from CRNN to SVTR, and we introduce lightweight text recognition network SVTR LCNet, guided training of CTC by attention, data augmentation strategy TextConAug, better pre-trained model by self-supervised TextRotNet, UDML, and UIM to accelerate the model and improve the effect. Experiments on real data show that the hmean of PP-OCRv3 is 5% higher than PP-OCRv2 under comparable inference speed. All the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>47</td>
        <td>ch_PP-OCRv3</td>
        <td><a href="https://paperswithcode.com/paper/pp-ocrv3-more-attempts-for-the-improvement-of">PP-OCRv3: More Attempts for the Improvement of Ultra Lightweight OCR System</a></td>
        <td><details><summary>Abstract</summary><div>Optical character recognition (OCR) technology has been widely used in various scenes, as shown in Figure 1. Designing a practical OCR system is still a meaningful but challenging task. In previous work, considering the efficiency and accuracy, we proposed a practical ultra lightweight OCR system (PP-OCR), and an optimized version PP-OCRv2. In order to further improve the performance of PP-OCRv2, a more robust OCR system PP-OCRv3 is proposed in this paper. PP-OCRv3 upgrades the text detection model and text recognition model in 9 aspects based on PP-OCRv2. For text detector, we introduce a PAN module with large receptive field named LK-PAN, a FPN module with residual attention mechanism named RSE-FPN, and DML distillation strategy. For text recognizer, the base model is replaced from CRNN to SVTR, and we introduce lightweight text recognition network SVTR LCNet, guided training of CTC by attention, data augmentation strategy TextConAug, better pre-trained model by self-supervised TextRotNet, UDML, and UIM to accelerate the model and improve the effect. Experiments on real data show that the hmean of PP-OCRv3 is 5% higher than PP-OCRv2 under comparable inference speed. All the above mentioned models are open-sourced and the code is available in the GitHub repository PaddleOCR which is powered by PaddlePaddle.</div></details></td>
        <td>-</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>48</td>
        <td>rec_svtrnet</td>
        <td><a href="https://paperswithcode.com/paper/svtr-scene-text-recognition-with-a-single">SVTR: Scene Text Recognition with a Single Visual Model</a></td>
        <td><details><summary>Abstract</summary><div>Dominant scene text recognition models commonly contain two building blocks, a visual model for feature extraction and a sequence model for text transcription. This hybrid architecture, although accurate, is complex and less efficient. In this study, we propose a Single Visual model for Scene Text recognition within the patch-wise image tokenization framework, which dispenses with the sequential modeling entirely. The method, termed SVTR, firstly decomposes an image text into small patches named character components. Afterward, hierarchical stages are recurrently carried out by component-level mixing, merging and/or combining. Global and local mixing blocks are devised to perceive the inter-character and intra-character patterns, leading to a multi-grained character component perception. Thus, characters are recognized by a simple linear prediction. Experimental results on both English and Chinese scene text recognition tasks demonstrate the effectiveness of SVTR. SVTR-L (Large) achieves highly competitive accuracy in English and outperforms existing methods by a large margin in Chinese, while running faster. In addition, SVTR-T (Tiny) is an effective and much smaller model, which shows appealing speed at inference. The code is publicly available at https://github.com/PaddlePaddle/PaddleOCR.</div></details></td>
        <td>IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE / avg_acc / 89.25%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_en/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>49</td>
        <td>PP-StructureV2</td>
        <td><a href="https://paperswithcode.com/paper/pp-structurev2-a-stronger-document-analysis">PP-StructureV2: A Stronger Document Analysis System</a></td>
        <td><details><summary>Abstract</summary><div>A large amount of document data exists in unstructured form such as raw images without any text information. Designing a practical document image analysis system is a meaningful but challenging task. In previous work, we proposed an intelligent document analysis system PP-Structure. In order to further upgrade the function and performance of PP-Structure, we propose PP-StructureV2 in this work, which contains two subsystems: Layout Information Extraction and Key Information Extraction. Firstly, we integrate Image Direction Correction module and Layout Restoration module to enhance the functionality of the system. Secondly, 8 practical strategies are utilized in PP-StructureV2 for better performance. For Layout Analysis model, we introduce ultra light-weight detector PP-PicoDet and knowledge distillation algorithm FGD for model lightweighting, which increased the inference speed by 11 times with comparable mAP. For Table Recognition model, we utilize PP-LCNet, CSP-PAN and SLAHead to optimize the backbone module, feature fusion module and decoding module, respectively, which improved the table structure accuracy by 6% with comparable inference speed. For Key Information Extraction model, we introduce VI-LayoutXLM which is a visual-feature independent LayoutXLM architecture, TB-YX sorting algorithm and U-DML knowledge distillation algorithm, which brought 2.8% and 9.1% improvement respectively on the Hmean of Semantic Entity Recognition and Relation Extraction tasks. All the above mentioned models and code are open-sourced in the GitHub repository PaddleOCR.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/ppstructure/docs/quickstart_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>50</td>
        <td>DRRG</td>
        <td><a href="https://paperswithcode.com/paper/deep-relational-reasoning-graph-network-for">Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection</a></td>
        <td><details><summary>Abstract</summary><div>Arbitrary shape text detection is a challenging task due to the high variety and complexity of scenes texts. In this paper, we propose a novel unified relational reasoning graph network for arbitrary shape text detection. In our method, an innovative local graph bridges a text proposal model via Convolutional Neural Network (CNN) and a deep relational reasoning network via Graph Convolutional Network (GCN), making our network end-to-end trainable. To be concrete, every text instance will be divided into a series of small rectangular components, and the geometry attributes (e.g., height, width, and orientation) of the small components will be estimated by our text proposal model. Given the geometry attributes, the local graph construction model can roughly establish linkages between different text components. For further reasoning and deducing the likelihood of linkages between the component and its neighbors, we adopt a graph-based network to perform deep relational reasoning on local graphs. Experiments on public available datasets demonstrate the state-of-the-art performance of our method.</div></details></td>
        <td>CTW1500 hmean=85.18%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_en/algorithm_det_drrg_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>51</td>
        <td>DB++</td>
        <td><a href="https://paperswithcode.com/paper/real-time-scene-text-detection-with-1">Real-Time Scene Text Detection with Differentiable Binarization and Adaptive Scale Fusion</a></td>
        <td><details><summary>Abstract</summary><div>Recently, segmentation-based scene text detection methods have drawn extensive attention in the scene text detection field, because of their superiority in detecting the text instances of arbitrary shapes and extreme aspect ratios, profiting from the pixel-level descriptions. However, the vast majority of the existing segmentation-based approaches are limited to their complex post-processing algorithms and the scale robustness of their segmentation models, where the post-processing algorithms are not only isolated to the model optimization but also time-consuming and the scale robustness is usually strengthened by fusing multi-scale feature maps directly. In this paper, we propose a Differentiable Binarization (DB) module that integrates the binarization process, one of the most important steps in the post-processing procedure, into a segmentation network. Optimized along with the proposed DB module, the segmentation network can produce more accurate results, which enhances the accuracy of text detection with a simple pipeline. Furthermore, an efficient Adaptive Scale Fusion (ASF) module is proposed to improve the scale robustness by fusing features of different scales adaptively. By incorporating the proposed DB and ASF with the segmentation network, our proposed scene text detector consistently achieves state-of-the-art results, in terms of both detection accuracy and speed, on five standard benchmarks.</div></details></td>
        <td>ICDAR2015 hmean=86.58%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_en/algorithm_det_db_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>52</td>
        <td>ViTSTR</td>
        <td><a href="https://paperswithcode.com/paper/vision-transformer-for-fast-and-efficient">Vision Transformer for Fast and Efficient Scene Text Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Scene text recognition (STR) enables computers to read text in natural scenes such as object labels, road signs and instructions. STR helps machines perform informed decisions such as what object to pick, which direction to go, and what is the next step of action. In the body of work on STR, the focus has always been on recognition accuracy. There is little emphasis placed on speed and computational efficiency which are equally important especially for energy-constrained mobile machines. In this paper we propose ViTSTR, an STR with a simple single stage model architecture built on a compute and parameter efficient vision transformer (ViT). On a comparable strong baseline method such as TRBA with accuracy of 84.3%, our small ViTSTR achieves a competitive accuracy of 82.6% (84.2% with data augmentation) at 2.4x speed up, using only 43.4% of the number of parameters and 42.2% FLOPS. The tiny version of ViTSTR achieves 80.3% accuracy (82.1% with data augmentation), at 2.5x the speed, requiring only 10.9% of the number of parameters and 11.9% FLOPS. With data augmentation, our base ViTSTR outperforms TRBA at 85.2% accuracy (83.7% without augmentation) at 2.3x the speed but requires 73.2% more parameters and 61.5% more FLOPS. In terms of trade-offs, nearly all ViTSTR configurations are at or near the frontiers to maximize accuracy, speed and computational efficiency all at the same time.</div></details></td>
        <td>acc=79.82%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_en/algorithm_rec_vitstr_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>53</td>
        <td>ABINet</td>
        <td><a href="https://paperswithcode.com/paper/read-like-humans-autonomous-bidirectional-and">Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Linguistic knowledge is of great benefit to scene text recognition. However, how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from: 1) implicitly language modeling; 2) unidirectional feature representation; and 3) language model with noise input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet for scene text recognition. Firstly, the autonomous suggests to block gradient flow between vision and language models to enforce explicitly language modeling. Secondly, a novel bidirectional cloze network (BCN) as the language model is proposed based on bidirectional feature representation. Thirdly, we propose an execution manner of iterative correction for language model which can effectively alleviate the impact of noise input. Additionally, based on the ensemble of iterative predictions, we propose a self-training method which can learn from unlabeled images effectively. Extensive experiments indicate that ABINet has superiority on low-quality images and achieves state-of-the-art results on several mainstream benchmarks. Besides, the ABINet trained with ensemble self-training shows promising improvement in realizing human-level recognition</div></details></td>
        <td>acc=90.75%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_en/algorithm_rec_abinet_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>54</td>
        <td>VisionLAN</td>
        <td><a href="https://paperswithcode.com/paper/from-two-to-one-a-new-scene-text-recognizer">From Two to One: A New Scene Text Recognizer with Visual Language Modeling Network</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we abandon the dominant complex language model and rethink the linguistic learning process in the scene text recognition. Different from previous methods considering the visual and linguistic information in two separate structures, we propose a Visual Language Modeling Network (VisionLAN), which views the visual and linguistic information as a union by directly enduing the vision model with language capability. Specially, we introduce the text recognition of character-wise occluded feature maps in the training stage. Such operation guides the vision model to use not only the visual texture of characters, but also the linguistic information in visual context for recognition when the visual cues are confused (e.g. occlusion, noise, etc.). As the linguistic information is acquired along with visual features without the need of extra language model, VisionLAN significantly improves the speed by 39% and adaptively considers the linguistic information to enhance the visual features for accurate recognition. Furthermore, an Occlusion Scene Text (OST) dataset is proposed to evaluate the performance on the case of missing character-wise visual cues. The state of-the-art results on several benchmarks prove our effectiveness. </div></details></td>
        <td>acc=90.30%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_en/algorithm_rec_visionlan_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>55</td>
        <td>SPIN</td>
        <td><a href="https://paperswithcode.com/paper/spin-structure-preserving-inner-offset">SPIN: Structure-Preserving Inner Offset Network for Scene Text Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Arbitrary text appearance poses a great challenge in scene text recognition tasks. Existing works mostly handle with the problem in consideration of the shape distortion, including perspective distortions, line curvature or other style variations. Therefore, methods based on spatial transformers are extensively studied. However, chromatic difficulties in complex scenes have not been paid much attention on. In this work, we introduce a new learnable geometric-unrelated module, the Structure-Preserving Inner Offset Network (SPIN), which allows the color manipulation of source data within the network. This differentiable module can be inserted before any recognition architecture to ease the downstream tasks, giving neural networks the ability to actively transform input intensity rather than the existing spatial rectification. It can also serve as a complementary module to known spatial transformations and work in both independent and collaborative ways with them. Extensive experiments show that the use of SPIN results in a significant improvement on multiple text recognition benchmarks compared to the state-of-the-arts.</div></details></td>
        <td>acc=90.00%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_en/algorithm_rec_spin_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>56</td>
        <td>RobustScanner</td>
        <td><a href="https://paperswithcode.com/paper/robustscanner-dynamically-enhancing">RobustScanner: Dynamically Enhancing Positional Clues for Robust Text Recognition</a></td>
        <td><details><summary>Abstract</summary><div>The attention-based encoder-decoder framework has recently achieved impressive results for scene text recognition, and many variants have emerged with improvements in recognition quality. However, it performs poorly on contextless texts (e.g., random character sequences) which is unacceptable in most of real application scenarios. In this paper, we first deeply investigate the decoding process of the decoder. We empirically find that a representative character-level sequence decoder utilizes not only context information but also positional information. Contextual information, which the existing approaches heavily rely on, causes the problem of attention drift. To suppress such side-effect, we propose a novel position enhancement branch, and dynamically fuse its outputs with those of the decoder attention module for scene text recognition. Specifically, it contains a position aware module to enable the encoder to output feature vectors encoding their own spatial positions, and an attention module to estimate glimpses using the positional clue (i.e., the current decoding time step) only. The dynamic fusion is conducted for more robust feature via an element-wise gate mechanism. Theoretically, our proposed method, dubbed RobustScanner, decodes individual characters with dynamic ratio between context and positional clues, and utilizes more positional ones when the decoding sequences with scarce context, and thus is robust and practical. Empirically, it has achieved new state-of-the-art results on popular regular and irregular text recognition benchmarks while without much performance drop on contextless benchmarks, validating its robustness in both contextual and contextless application scenarios.</div></details></td>
        <td>acc=87.77%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_en/algorithm_rec_robustscanner_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>57</td>
        <td>RFL</td>
        <td><a href="https://paperswithcode.com/paper/reciprocal-feature-learning-via-explicit-and">Reciprocal Feature Learning via Explicit and Implicit Tasks in Scene Text Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Text recognition is a popular topic for its broad applications. In this work, we excavate the implicit task, character counting within the traditional text recognition, without additional labor annotation cost. The implicit task plays as an auxiliary branch for complementing the sequential recognition. We design a two-branch reciprocal feature learning framework in order to adequately utilize the features from both the tasks. Through exploiting the complementary effect between explicit and implicit tasks, the feature is reliably enhanced. Extensive experiments on 7 benchmarks show the advantages of the proposed methods in both text recognition and the new-built character counting tasks. In addition, it is convenient yet effective to equip with variable networks and tasks. We offer abundant ablation studies, generalizing experiments with deeper understanding on the tasks.</div></details></td>
        <td>acc=88.63%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_en/algorithm_rec_rfl_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>58</td>
        <td>TableMaster</td>
        <td><a href="https://paperswithcode.com/paper/pingan-vcgroup-s-solution-for-icdar-2021">PingAn-VCGroup's Solution for ICDAR 2021 Competition on Scientific Literature Parsing Task B: Table Recognition to HTML</a></td>
        <td><details><summary>Abstract</summary><div>This paper presents our solution for ICDAR 2021 competition on scientific literature parsing task B: table recognition to HTML. In our method, we divide the table content recognition task into four sub-tasks: table structure recognition, text line detection, text line recognition, and box assignment. Our table structure recognition algorithm is customized based on MASTER [1], a robust image text recognition algorithm. PSENet [2] is used to detect each text line in the table image. For text line recognition, our model is also built on MASTER. Finally, in the box assignment phase, we associate the text boxes detected by PSENet with the structure item reconstructed by table structure prediction, and fill the recognized content of the text line into the corresponding item. Our proposed method achieves a 96.84% TEDS score on 9,115 validation samples in the development phase, and a 96.32% TEDS score on 9,064 samples in the final evaluation phase.</div></details></td>
        <td>PubTabNet acc=77.47%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_en/algorithm_table_master_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>59</td>
        <td>PGNet</td>
        <td><a href="https://paperswithcode.com/paper/pgnet-real-time-arbitrarily-shaped-text">PGNet: Real-time Arbitrarily-Shaped Text Spotting with Point Gathering Network</a></td>
        <td><details><summary>Abstract</summary><div>The reading of arbitrarily-shaped text has received increasing research attention. However, existing text spotters are mostly built on two-stage frameworks or character-based methods, which suffer from either Non-Maximum Suppression (NMS), Region-of-Interest (RoI) operations, or character-level annotations. In this paper, to address the above problems, we propose a novel fully convolutional Point Gathering Network (PGNet) for reading arbitrarily-shaped text in real-time. The PGNet is a single-shot text spotter, where the pixel-level character classification map is learned with proposed PG-CTC loss avoiding the usage of character-level annotations. With a PG-CTC decoder, we gather high-level character classification vectors from two-dimensional space and decode them into text symbols without NMS and RoI operations involved, which guarantees high efficiency. Additionally, reasoning the relations between each character and its neighbors, a graph refinement module (GRM) is proposed to optimize the coarse recognition and improve the end-to-end performance. Experiments prove that the proposed method achieves competitive accuracy, meanwhile significantly improving the running speed. In particular, in Total-Text, it runs at 46.7 FPS, surpassing the previous spotters with a large margin.</div></details></td>
        <td>Total-Text e2e_f_score=60.03%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_en/algorithm_e2e_pgnet_en.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>60</td>
        <td>VI-LayoutXLM</td>
        <td><a href="https://paperswithcode.com/paper/pp-structurev2-a-stronger-document-analysis">PP-StructureV2: A Stronger Document Analysis System</a></td>
        <td><details><summary>Abstract</summary><div>A large amount of document data exists in unstructured form such as raw images without any text information. Designing a practical document image analysis system is a meaningful but challenging task. In previous work, we proposed an intelligent document analysis system PP-Structure. In order to further upgrade the function and performance of PP-Structure, we propose PP-StructureV2 in this work, which contains two subsystems: Layout Information Extraction and Key Information Extraction. Firstly, we integrate Image Direction Correction module and Layout Restoration module to enhance the functionality of the system. Secondly, 8 practical strategies are utilized in PP-StructureV2 for better performance. For Layout Analysis model, we introduce ultra light-weight detector PP-PicoDet and knowledge distillation algorithm FGD for model lightweighting, which increased the inference speed by 11 times with comparable mAP. For Table Recognition model, we utilize PP-LCNet, CSP-PAN and SLAHead to optimize the backbone module, feature fusion module and decoding module, respectively, which improved the table structure accuracy by 6\% with comparable inference speed. For Key Information Extraction model, we introduce VI-LayoutXLM which is a visual-feature independent LayoutXLM architecture, TB-YX sorting algorithm and U-DML knowledge distillation algorithm, which brought 2.8\% and 9.1\% improvement respectively on the Hmean of Semantic Entity Recognition and Relation Extraction tasks. All the above mentioned models and code are open-sourced in the GitHub repository PaddleOCR.</div></details></td>
        <td>SER=93.19%, RE=83.92%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleOCR/blob/release%2F2.6/doc/doc_en/algorithm_kie_vi_layoutxlm_en.md">快速开始</a></td>
        </td>
    </tr>
</table>
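
For a quick way to try the OCR capabilities indexed above without touching the per-algorithm training configs, the pip-installable `paddleocr` wheel wraps the PP-OCR detection + classification + recognition pipeline. The snippet below is a minimal inference sketch, assuming the wheel's documented Python interface (the result layout may differ slightly across versions); `example.jpg` is a placeholder image path.

```python
# Minimal PP-OCR inference sketch (assumes: pip install paddleocr).
from paddleocr import PaddleOCR

# Build a text detection + angle classification + recognition pipeline;
# lang="ch" loads the default Chinese/English PP-OCR models.
ocr = PaddleOCR(use_angle_cls=True, lang="ch")

result = ocr.ocr("example.jpg", cls=True)  # placeholder image path
for box, (text, score) in result[0]:
    # box: 4-point polygon of the text line; text/score: recognition output
    print(text, score)
```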

### PaddleGAN
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>PP-MSVSR</td>
        <td><a href="https://paperswithcode.com/paper/pp-msvsr-multi-stage-video-super-resolution">PP-MSVSR: Multi-Stage Video Super-Resolution</a></td>
        <td><details><summary>Abstract</summary><div>Different from the Single Image Super-Resolution(SISR) task, the key for Video Super-Resolution(VSR) task is to make full use of complementary information across frames to reconstruct the high-resolution sequence. Since images from different frames with diverse motion and scene, accurately aligning multiple frames and effectively fusing different frames has always been the key research work of VSR tasks. To utilize rich complementary information of neighboring frames, in this paper, we propose a multi-stage VSR deep architecture, dubbed as PP-MSVSR, with local fusion module, auxiliary loss and re-align module to refine the enhanced result progressively. Specifically, in order to strengthen the fusion of features across frames in feature propagation, a local fusion module is designed in stage-1 to perform local feature fusion before feature propagation. Moreover, we introduce an auxiliary loss in stage-2 to make the features obtained by the propagation module reserve more correlated information connected to the HR space, and introduce a re-align module in stage-3 to make full use of the feature information of the previous stage. Extensive experiments substantiate that PP-MSVSR achieves a promising performance of Vid4 datasets, which achieves a PSNR of 28.13dB with only 1.45M parameters. And the PP-MSVSR-L exceeds all state of the art method on REDS4 datasets with considerable parameters. Code and models will be released in PaddleGAN\footnote{this https URL.}.</div></details></td>
        <td>REDS/psnr: 31.2535 ssim: 0.8884</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>Pix2Pix</td>
        <td><a href="https://paperswithcode.com/paper/image-to-image-translation-with-conditional">Image-to-Image Translation with Conditional Adversarial Networks</a></td>
        <td><details><summary>Abstract</summary><div>We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.</div></details></td>
        <td>facades/fid:119.135</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/pix2pix_cyclegan.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>CycleGAN</td>
        <td><a href="https://paperswithcode.com/paper/unpaired-image-to-image-translation-using">Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks</a></td>
        <td><details><summary>Abstract</summary><div>Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G:X→Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F:Y→X and introduce a cycle consistency loss to push F(G(X))≈X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.</div></details></td>
        <td>facades/fid:123.626</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/pix2pix_cyclegan.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>PSGAN</td>
        <td><a href="https://paperswithcode.com/paper/psgan-pose-robust-spatial-aware-gan-for">PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we address the makeup transfer task, which aims to transfer the makeup from a reference image to a source image. Existing methods have achieved promising progress in constrained scenarios, but transferring between images with large pose and expression differences is still challenging. Besides, they cannot realize customizable transfer that allows a controllable shade of makeup or specifies the part to transfer, which limits their applications. To address these issues, we propose Pose and expression robust Spatial-aware GAN (PSGAN). It first utilizes Makeup Distill Network to disentangle the makeup of the reference image as two spatial-aware makeup matrices. Then, Attentive Makeup Morphing module is introduced to specify how the makeup of a pixel in the source image is morphed from the reference image. With the makeup matrices and the source image, Makeup Apply Network is used to perform makeup transfer. Our PSGAN not only achieves state-of-the-art results even when large pose and expression differences exist but also is able to perform partial and shade-controllable makeup transfer. We also collected a dataset containing facial images with various poses and expressions for evaluations.</div></details></td>
        <td>MT, landmarks</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/psgan.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>Wav2Lip</td>
        <td><a href="https://paperswithcode.com/paper/a-lip-sync-expert-is-all-you-need-for-speech">A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild</a></td>
        <td><details><summary>Abstract</summary><div>In this work, we investigate the problem of lip-syncing a talking face video of an arbitrary identity to match a target speech segment. Current works excel at producing accurate lip movements on a static image or videos of specific people seen during the training phase. However, they fail to accurately morph the lip movements of arbitrary identities in dynamic, unconstrained talking face videos, resulting in significant parts of the video being out-of-sync with the new audio. We identify key reasons pertaining to this and hence resolve them by learning from a powerful lip-sync discriminator. Next, we propose new, rigorous evaluation benchmarks and metrics to accurately measure lip synchronization in unconstrained videos. Extensive quantitative evaluations on our challenging benchmarks show that the lip-sync accuracy of the videos generated by our Wav2Lip model is almost as good as real synced videos. We provide a demo video clearly showing the substantial impact of our Wav2Lip model and evaluation benchmarks on our website: \url{this http URL}. The code and models are released at this GitHub repository: \url{this http URL}. You can also try out the interactive demo at this link: \url{this http URL}.</div></details></td>
        <td>LRS2</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/wav2lip.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>LESRCNN</td>
        <td><a href="https://paperswithcode.com/paper/lightweight-image-super-resolution-with-2">Lightweight image super-resolution with enhanced CNN</a></td>
        <td><details><summary>Abstract</summary><div>Deep convolutional neural networks (CNNs) with strong expressive ability have achieved impressive performances on single image super-resolution (SISR). However, their excessive amounts of convolutions and parameters usually consume high computational cost and more memory storage for training a SR model, which limits their applications to SR with resource-constrained devices in real world. To resolve these problems, we propose a lightweight enhanced SR CNN (LESRCNN) with three successive sub-blocks, an information extraction and enhancement block (IEEB), a reconstruction block (RB) and an information refinement block (IRB). Specifically, the IEEB extracts hierarchical low-resolution (LR) features and aggregates the obtained features step-by-step to increase the memory ability of the shallow layers on deep layers for SISR. To remove redundant information obtained, a heterogeneous architecture is adopted in the IEEB. After that, the RB converts low-frequency features into high-frequency features by fusing global and local features, which is complementary with the IEEB in tackling the long-term dependency problem. Finally, the IRB uses coarse high-frequency features from the RB to learn more accurate SR features and construct a SR image. The proposed LESRCNN can obtain a high-quality image by a model for different scales. Extensive experiments demonstrate that the proposed LESRCNN outperforms state-of-the-arts on SISR in terms of qualitative and quantitative evaluation. The code of LESRCNN is accessible on this https URL.</div></details></td>
        <td>DIV2K/psnr: 30.231 ssim: 0.8326</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/single_image_super_resolution.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>ESRGAN</td>
        <td><a href="https://paperswithcode.com/paper/esrgan-enhanced-super-resolution-generative">ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks</a></td>
        <td><details><summary>Abstract</summary><div>The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL</div></details></td>
        <td>DIV2K/psnr: 26.9013 ssim: 0.7542</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/single_image_super_resolution.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>RealSR</td>
        <td><a href="https://paperswithcode.com/paper/real-world-super-resolution-via-kernel">Real-World Super-Resolution via Kernel Estimation and Noise Injection</a></td>
        <td><details><summary>Abstract</summary><div>Recent state-of-the-art super-resolution methods have achieved impressive performance on ideal datasets regardless of blur and noise. However, these methods always fail in real-world image super-resolution, since most of them adopt simple bicubic downsampling from high-quality images to construct Low-Resolution (LR) and High-Resolution (HR) pairs for training which may lose track of frequency-related details. To address this issue, we focus on designing a novel degradation framework for real-world images by estimating various blur kernels as well as real noise distributions. Based on our novel degradation framework, we can acquire LR images sharing a common domain with real-world images. Then, we propose a real-world super-resolution model aiming at better perception. Extensive experiments on synthetic noise data and real-world images demonstrate that our method outperforms the state-of-the-art methods, resulting in lower noise and better visual quality. In addition, our method is the winner of NTIRE 2020 Challenge on both tracks of Real-World Super-Resolution, which significantly outperforms other competitors by large margins.</div></details></td>
        <td>DIV2K/psnr: 26.7306 ssim: 0.7512</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/single_image_super_resolution.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>StyleGAN2</td>
        <td><a href="https://paperswithcode.com/paper/analyzing-and-improving-the-image-quality-of">Analyzing and Improving the Image Quality of StyleGAN</a></td>
        <td><details><summary>Abstract</summary><div>The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.</div></details></td>
        <td>ffhq/fid</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/styleganv2.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>U-GAT-IT</td>
        <td><a href="https://paperswithcode.com/paper/u-gat-it-unsupervised-generative-attentional">U-GAT-IT: unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation</a></td>
        <td><details><summary>Abstract</summary><div>We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based method which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters. Our code and datasets are available at this https URL or this https URL.</div></details></td>
        <td>Selfie2anime</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/ugatit.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>AnimeGAN2</td>
        <td><a href="">AnimeGAN: A Novel Lightweight GAN for Photo Animation</a></td>
        <td><details><summary>Abstract</summary><div>Transforming photos of real-world scenes into anime style images is a meaningful and challenging task in terms of computer vision and artistic style transfer. Our previously proposed AnimeGAN combines neural style transfer and generative adversarial network (GAN) to accomplish this task. However, AnimeGAN still has some obvious problems, such as high-frequency artifacts in the images generated by the model. Therefore, in this research, we propose an improved version of AnimeGAN, namely AnimeGANv2. It prevents the generation of high-frequency artifacts by simply changing the normalization of features in the network. In addition, we further reduce the scale of the generator network to achieve more efficient animation style transfer. AnimeGANv2 trained on the newly established high-quality dataset can generate animation images with better visual quality than AnimeGAN.</div></details></td>
        <td>Hayao_styleData-V2</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>Photo2Cartoon</td>
        <td><a href="https://paperswithcode.com/paper/u-gat-it-unsupervised-generative-attentional">U-GAT-IT: unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation</a></td>
        <td><details><summary>Abstract</summary><div>We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based method which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters. Our code and datasets are available at this https URL or this https URL.</div></details></td>
        <td>photo2cartoon</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/photo2cartoon.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>DRN</td>
        <td><a href="https://paperswithcode.com/paper/closed-loop-matters-dual-regression-networks">Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution</a></td>
        <td><details><summary>Abstract</summary><div>Deep neural networks have exhibited promising performance in image super-resolution (SR) by learning a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images. However, there are two underlying limitations to existing SR methods. First, learning the mapping function from LR to HR images is typically an ill-posed problem, because there exist infinite HR images that can be downsampled to the same LR image. As a result, the space of the possible functions can be extremely large, which makes it hard to find a good solution. Second, the paired LR-HR data may be unavailable in real-world applications and the underlying degradation method is often unknown. For such a more general case, existing SR models often incur the adaptation problem and yield poor performance. To address the above issues, we propose a dual regression scheme by introducing an additional constraint on LR data to reduce the space of the possible functions. Specifically, besides the mapping from LR to HR images, we learn an additional dual regression mapping estimates the down-sampling kernel and reconstruct LR images, which forms a closed-loop to provide additional supervision. More critically, since the dual regression process does not depend on HR images, we can directly learn from LR images. In this sense, we can easily adapt SR models to real-world data, e.g., raw video frames from YouTube. Extensive experiments with paired training data and unpaired real-world data demonstrate our superiority over existing methods.</div></details></td>
        <td>DIV2K</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/single_image_super_resolution.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>starGAN2</td>
        <td><a href="https://paperswithcode.com/paper/stargan-v2-diverse-image-synthesis-for">StarGAN v2: Diverse Image Synthesis for Multiple Domains</a></td>
        <td><details><summary>Abstract</summary><div>A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain differences. The code, pretrained models, and dataset can be found at this https URL.</div></details></td>
        <td>CelebA-HQ</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>FOM</td>
        <td><a href="https://paperswithcode.com/paper/first-order-motion-model-for-image-animation-1">First Order Motion Model for Image Animation</a></td>
        <td><details><summary>Abstract</summary><div>Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video. Our framework addresses this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces, human bodies), our method can be applied to any object of this class. To achieve this, we decouple appearance and motion information using a self-supervised formulation. To support complex motions, we use a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models occlusions arising during target motions and combines the appearance extracted from the source image and the motion derived from the driving video. Our framework scores best on diverse benchmarks and on a variety of object categories</div></details></td>
        <td>VoxCeleb/l1loss: 0.04178</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/motion_driving.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>EDVR</td>
        <td><a href="https://paperswithcode.com/paper/edvr-video-restoration-with-enhanced">EDVR: Video Restoration with Enhanced Deformable Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>Video restoration tasks, including super-resolution, deblurring, etc, are drawing increasing attention in the computer vision community. A challenging benchmark named REDS is released in the NTIRE19 Challenge. This new benchmark challenges existing methods from two aspects: (1) how to align multiple frames given large motions, and (2) how to effectively fuse different frames with diverse motion and blur. In this work, we propose a novel Video Restoration framework with Enhanced Deformable networks, termed EDVR, to address these challenges. First, to handle large motions, we devise a Pyramid, Cascading and Deformable (PCD) alignment module, in which frame alignment is done at the feature level using deformable convolutions in a coarse-to-fine manner. Second, we propose a Temporal and Spatial Attention (TSA) fusion module, in which attention is applied both temporally and spatially, so as to emphasize important features for subsequent restoration. Thanks to these modules, our EDVR wins the champions and outperforms the second place by a large margin in all four tracks in the NTIRE19 video restoration and enhancement challenges. EDVR also demonstrates superior performance to state-of-the-art published methods on video super-resolution and deblurring</div></details></td>
        <td>REDS/psnr: 30.4429 ssim: 0.8684</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/video_super_resolution.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>BasicVSR++</td>
        <td><a href="https://paperswithcode.com/paper/basicvsr-the-search-for-essential-components">BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment</a></td>
        <td><details><summary>Abstract</summary><div>Video super-resolution (VSR) approaches tend to have more components than the image counterparts as they need to exploit the additional temporal dimension. Complex designs are not uncommon. In this study, we wish to untangle the knots and reconsider some most essential components for VSR guided by four basic functionalities, i.e., Propagation, Alignment, Aggregation, and Upsampling. By reusing some existing components added with minimal redesigns, we show a succinct pipeline, BasicVSR, that achieves appealing improvements in terms of speed and restoration quality in comparison to many state-of-the-art algorithms. We conduct systematic analysis to explain how such gain can be obtained and discuss the pitfalls. We further show the extensibility of BasicVSR by presenting an information-refill mechanism and a coupled propagation scheme to facilitate information aggregation. The BasicVSR and its extension, IconVSR, can serve as strong baselines for future VSR approaches.</div></details></td>
        <td>REDS/psnr: 30.4429 ssim: 0.8684</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/video_super_resolution.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>BasicVSR</td>
        <td><a href="https://paperswithcode.com/paper/basicvsr-the-search-for-essential-components">BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond</a></td>
        <td><details><summary>Abstract</summary><div>Video super-resolution (VSR) approaches tend to have more components than the image counterparts as they need to exploit the additional temporal dimension. Complex designs are not uncommon. In this study, we wish to untangle the knots and reconsider some most essential components for VSR guided by four basic functionalities, i.e., Propagation, Alignment, Aggregation, and Upsampling. By reusing some existing components added with minimal redesigns, we show a succinct pipeline, BasicVSR, that achieves appealing improvements in terms of speed and restoration quality in comparison to many state-of-the-art algorithms. We conduct systematic analysis to explain how such gain can be obtained and discuss the pitfalls. We further show the extensibility of BasicVSR by presenting an information-refill mechanism and a coupled propagation scheme to facilitate information aggregation. The BasicVSR and its extension, IconVSR, can serve as strong baselines for future VSR approaches.</div></details></td>
        <td>REDS/psnr: 30.4429 ssim: 0.8684</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/video_super_resolution.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>LapStyle</td>
        <td><a href="https://paperswithcode.com/paper/drafting-and-revision-laplacian-pyramid">Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer</a></td>
        <td><details><summary>Abstract</summary><div>Artistic style transfer aims at migrating the style from an example image to a content image. Currently, optimization-based methods have achieved great stylization quality, but expensive time cost restricts their practical applications. Meanwhile, feed-forward methods still fail to synthesize complex style, especially when holistic global and local patterns exist. Inspired by the common painting process of drawing a draft and revising the details, we introduce a novel feed-forward method named Laplacian Pyramid Network (LapStyle). LapStyle first transfers global style patterns in low-resolution via a Drafting Network. It then revises the local details in high-resolution via a Revision Network, which hallucinates a residual image according to the draft and the image textures extracted by Laplacian filtering. Higher resolution details can be easily generated by stacking Revision Networks with multiple Laplacian pyramid levels. The final stylized image is obtained by aggregating outputs of all pyramid levels. %We also introduce a patch discriminator to better learn local patterns adversarially. Experiments demonstrate that our method can synthesize high quality stylized images in real time, where holistic style patterns are properly transferred.</div></details></td>
        <td>coco</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/lap_style.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>DCGAN</td>
        <td><a href="https://paperswithcode.com/method/dcgan">Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks</a></td>
        <td><details><summary>Abstract</summary><div>In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.</div></details></td>
        <td>mnist</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN">快速开始</a></td>
        </td>
3548 3549 3550 3551
    </tr>
    <tr>
        <td>21</td>
        <td>CGAN</td>
3552
        <td><a href="https://paperswithcode.com/paper/conditional-generative-adversarial-nets">Conditional Generative Adversarial Nets</a></td>
3553 3554
        <td><details><summary>Abstract</summary><div>Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.</div></details></td>
        <td>tiny imagenet</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>22</td>
        <td>PAN</td>
        <td><a href="https://paperswithcode.com/paper/efficient-image-super-resolution-using-pixel">Efficient Image Super-Resolution Using Pixel Attention</a></td>
        <td><details><summary>Abstract</summary><div>This work aims at designing a lightweight convolutional neural network for image super resolution (SR). With simplicity bare in mind, we construct a pretty concise and effective network with a newly proposed pixel attention scheme. Pixel attention (PA) is similar as channel attention and spatial attention in formulation. The difference is that PA produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results. On the basis of PA, we propose two building blocks for the main branch and the reconstruction branch, respectively. The first one - SC-PA block has the same structure as the Self-Calibrated convolution but with our PA layer. This block is much more efficient than conventional residual/dense blocks, for its twobranch architecture and attention scheme. While the second one - UPA block combines the nearest-neighbor upsampling, convolution and PA layers. It improves the final reconstruction quality with little parameter cost. Our final model- PAN could achieve similar performance as the lightweight networks - SRResNet and CARN, but with only 272K parameters (17.92% of SRResNet and 17.09% of CARN). The effectiveness of each proposed component is also validated by ablation study. The code is available at https://github.com/zhaohengyuan1/PAN.</div></details></td>
        <td>DIV2K/PSNR: 28.9187 SSIM: 0.8176</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>23</td>
        <td>PReNet</td>
        <td><a href="https://paperswithcode.com/paper/progressive-image-deraining-networks-a-better">Progressive Image Deraining Networks: A Better and Simpler Baseline</a></td>
        <td><details><summary>Abstract</summary><div>Along with the deraining performance improvement of deep networks, their structures and learning become more and more complicated and diverse, making it difficult to analyze the contribution of various network modules when developing new deraining networks. To handle this issue, this paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions. Specifically, by repeatedly unfolding a shallow ResNet, progressive ResNet (PRN) is proposed to take advantage of recursive computation. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of ResNet can be adopted in PRN and PReNet to notably reduce network parameters with graceful degradation in deraining performance. For network input and output, we take both stage-wise result and original rainy image as input to each ResNet and finally output the prediction of {residual image}. As for loss functions, single MSE or negative SSIM losses are sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering its simplicity, efficiency and effectiveness, our models are expected to serve as a suitable baseline in future deraining research. The source codes are available at https://github.com/csdwren/PReNet.</div></details></td>
        <td>RainTrainH/PSNR: 29.5037 SSIM: 0.899</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>24</td>
        <td>SinGAN</td>
        <td><a href="https://paperswithcode.com/paper/singan-learning-a-generative-model-from-a">SinGAN: Learning a Generative Model from a Single Natural Image</a></td>
        <td><details><summary>Abstract</summary><div>We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks.</div></details></td>
        <td>可视化</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>25</td>
        <td>MPRNet</td>
        <td><a href="https://paperswithcode.com/paper/multi-stage-progressive-image-restoration">Multi-Stage Progressive Image Restoration</a></td>
        <td><details><summary>Abstract</summary><div>Image restoration tasks demand a complex balance between spatial details and high-level contextualized information while recovering images. In this paper, we propose a novel synergistic design that can optimally balance these competing goals. Our main proposal is a multi-stage architecture, that progressively learns restoration functions for the degraded inputs, thereby breaking down the overall recovery process into more manageable steps. Specifically, our model first learns the contextualized features using encoder-decoder architectures and later combines them with a high-resolution branch that retains local information. At each stage, we introduce a novel per-pixel adaptive design that leverages in-situ supervised attention to reweight the local features. A key ingredient in such a multi-stage architecture is the information exchange between different stages. To this end, we propose a two-faceted approach where the information is not only exchanged sequentially from early to late stages, but lateral connections between feature processing blocks also exist to avoid any loss of information. The resulting tightly interlinked multi-stage architecture, named as MPRNet, delivers strong performance gains on ten datasets across a range of tasks including image deraining, deblurring, and denoising. The source code and pre-trained models are available at https://github.com/swz30/MPRNet.</div></details></td>
        <td>Rain100L/PSNR: 36.2848 SSIM: 0.9651</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>26</td>
        <td>StyleCLIP</td>
        <td><a href="https://paperswithcode.com/paper/styleclip-text-driven-manipulation-of">StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery</a></td>
        <td><details><summary>Abstract</summary><div>Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images. However, discovering semantically meaningful latent manipulations typically involves painstaking human examination of the many degrees of freedom, or an annotated collection of images for each desired manipulation. In this work, we explore leveraging the power of recently introduced Contrastive Language-Image Pre-training (CLIP) models in order to develop a text-based interface for StyleGAN image manipulation that does not require such manual effort. We first introduce an optimization scheme that utilizes a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt. Next, we describe a latent mapper that infers a text-guided latent manipulation step for a given input image, allowing faster and more stable text-based manipulation. Finally, we present a method for mapping a text prompts to input-agnostic directions in StyleGAN's style space, enabling interactive text-driven image manipulation. Extensive results and comparisons demonstrate the effectiveness of our approaches.</div></details></td>
        <td>可视化</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/styleganv2clip.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>27</td>
        <td>AotGan</td>
        <td><a href="https://paperswithcode.com/paper/aggregated-contextual-transformations-for">Aggregated Contextual Transformations for High-Resolution Image Inpainting</a></td>
        <td><details><summary>Abstract</summary><div>State-of-the-art image inpainting approaches can suffer from generating distorted structures and blurry textures in high-resolution images (e.g., 512x512). The challenges mainly drive from (1) image content reasoning from distant contexts, and (2) fine-grained texture synthesis for a large missing region. To overcome these two challenges, we propose an enhanced GAN-based model, named Aggregated COntextual-Transformation GAN (AOT-GAN), for high-resolution image inpainting. Specifically, to enhance context reasoning, we construct the generator of AOT-GAN by stacking multiple layers of a proposed AOT block. The AOT blocks aggregate contextual transformations from various receptive fields, allowing to capture both informative distant image contexts and rich patterns of interest for context reasoning. For improving texture synthesis, we enhance the discriminator of AOT-GAN by training it with a tailored mask-prediction task. Such a training objective forces the discriminator to distinguish the detailed appearances of real and synthesized patches, and in turn, facilitates the generator to synthesize clear textures. Extensive comparisons on Places2, the most challenging benchmark with 1.8 million high-resolution images of 365 complex scenes, show that our model outperforms the state-of-the-art by a significant margin in terms of FID with 38.60% relative improvement. A user study including more than 30 subjects further validates the superiority of AOT-GAN. We further evaluate the proposed AOT-GAN in practical applications, e.g., logo removal, face editing, and object removal. Results show that our model achieves promising completions in the real world. We release code and models in https://github.com/researchmm/AOT-GAN-for-Inpainting.</div></details></td>
        <td>Places365验证集 PSNR=26.04</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/aotgan.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>28</td>
        <td>GFPGan</td>
        <td><a href="https://paperswithcode.com/paper/towards-real-world-blind-face-restoration">Towards Real-World Blind Face Restoration with Generative Facial Prior</a></td>
        <td><details><summary>Abstract</summary><div>Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details. However, very low-quality inputs cannot offer accurate geometric prior while high-quality references are inaccessible, limiting the applicability in real-world scenarios. In this work, we propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration. This Generative Facial Prior (GFP) is incorporated into the face restoration process via novel channel-split spatial feature transform layers, which allow our method to achieve a good balance of realness and fidelity. Thanks to the powerful generative facial prior and delicate designs, our GFP-GAN could jointly restore facial details and enhance colors with just a single forward pass, while GAN inversion methods require expensive image-specific optimization at inference. Extensive experiments show that our method achieves superior performance to prior art on both synthetic and real-world datasets.</div></details></td>
        <td>CELEBA-HQ测试集 LPIPS=0.38 FID=36.8</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/GFPGAN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>29</td>
        <td>InvDN</td>
        <td><a href="https://paperswithcode.com/paper/invertible-denoising-network-a-light-solution">Invertible Denoising Network: A Light Solution for Real Noise Removal</a></td>
        <td><details><summary>Abstract</summary><div>Invertible networks have various benefits for image denoising since they are lightweight, information-lossless, and memory-saving during back-propagation. However, applying invertible models to remove noise is challenging because the input is noisy, and the reversed output is clean, following two different distributions. We propose an invertible denoising network, InvDN, to address this challenge. InvDN transforms the noisy input into a low-resolution clean image and a latent representation containing noise. To discard noise and restore the clean image, InvDN replaces the noisy latent representation with another one sampled from a prior distribution during reversion. The denoising performance of InvDN is better than all the existing competitive models, achieving a new state-of-the-art result for the SIDD dataset while enjoying less run time. Moreover, the size of InvDN is far smaller, only having 4.2% of the number of parameters compared to the most recently proposed DANet. Further, via manipulating the noisy latent representation, InvDN is also able to generate noise more similar to the original one. Our code is available at: https://github.com/Yang-Liu1082/InvDN.git.</div></details></td>
        <td>SIDD_valid PSNR=39.29</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/invdn.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>30</td>
        <td>NAFNet</td>
        <td><a href="https://paperswithcode.com/paper/simple-baselines-for-image-restoration">Simple Baselines for Image Restoration</a></td>
        <td><details><summary>Abstract</summary><div>Although there have been significant advances in the field of image restoration recently, the system complexity of the state-of-the-art (SOTA) methods is increasing as well, which may hinder the convenient analysis and comparison of methods. In this paper, we propose a simple baseline that exceeds the SOTA methods and is computationally efficient. To further simplify the baseline, we reveal that the nonlinear activation functions, e.g. Sigmoid, ReLU, GELU, Softmax, etc. are not necessary: they could be replaced by multiplication or removed. Thus, we derive a Nonlinear Activation Free Network, namely NAFNet, from the baseline. SOTA results are achieved on various challenging benchmarks, e.g. 33.69 dB PSNR on GoPro (for image deblurring), exceeding the previous SOTA 0.38 dB with only 8.4% of its computational costs; 40.30 dB PSNR on SIDD (for image denoising), exceeding the previous SOTA 0.28 dB with less than half of its computational costs. The code and the pre-trained models are released at https://github.com/megvii-research/NAFNet.</div></details></td>
        <td>SIDD_valid PSNR=43.15</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/nafnet.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>31</td>
        <td>SwinIR</td>
        <td><a href="https://paperswithcode.com/paper/swinir-image-restoration-using-swin">SwinIR: Image Restoration Using Swin Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Image restoration is a long-standing low-level vision problem that aims to restore high-quality images from low-quality images (e.g., downscaled, noisy and compressed images). While state-of-the-art image restoration methods are based on convolutional neural networks, few attempts have been made with Transformers which show impressive performance on high-level vision tasks. In this paper, we propose a strong baseline model SwinIR for image restoration based on the Swin Transformer. SwinIR consists of three parts: shallow feature extraction, deep feature extraction and high-quality image reconstruction. In particular, the deep feature extraction module is composed of several residual Swin Transformer blocks (RSTB), each of which has several Swin Transformer layers together with a residual connection. We conduct experiments on three representative tasks: image super-resolution (including classical, lightweight and real-world image super-resolution), image denoising (including grayscale and color image denoising) and JPEG compression artifact reduction. Experimental results demonstrate that SwinIR outperforms state-of-the-art methods on different tasks by , while the total number of parameters can be reduced by .</div></details></td>
        <td>CBSD68 PSNR=36.08</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/swinir.md">快速开始</a></td>
        </td>
    </tr>
</table>
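
Besides the config-driven training entry points linked above, many PaddleGAN models are also exposed as ready-made predictors under `ppgan.apps`. The snippet below is a minimal sketch for the FOM (First Order Motion) entry, assuming the predictor interface described in the PaddleGAN motion-driving tutorial; constructor arguments can vary between releases, and `source.png` / `driving.mp4` are placeholder paths.

```python
# First Order Motion inference sketch (assumes: pip install ppgan,
# or running inside a PaddleGAN checkout).
from ppgan.apps import FirstOrderPredictor

predictor = FirstOrderPredictor(
    output="output",        # directory for the generated video
    filename="result.mp4",  # name of the generated file
    relative=True,          # use relative keypoint displacement
    adapt_scale=True,       # adapt motion scale to the source image
)

# Animate a still portrait with the motion of the driving video.
predictor.run("source.png", "driving.mp4")
```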

### PaddleVideo
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>PP-TSM</td>
        <td><a href="https://paperswithcode.com/paper/temporal-shift-module-for-efficient-video">TSM: Temporal Shift Module for Efficient Video Understanding</a></td>
        <td><details><summary>Abstract</summary><div>The explosive growth in video streaming gives rise to challenges on performing video understanding at high accu- racy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making it expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNN but maintain 2D CNN’s complexity. TSM shifts part of the channels along the tempo- ral dimension; thus facilitate information exchanged among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extended TSM to online setting, which enables real-time low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranks the first place on the Something-Something leader- board upon publication; on Jetson Nano and Galaxy Note8, it achieves a low latency of 13ms and 35ms for online video recognition.</div></details></td>
        <td>k400, uniform, Top-1: 74.54</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/pp-tsm.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>TSN</td>
        <td><a href="https://paperswithcode.com/paper/temporal-segment-networks-for-action">Temporal Segment Networks for Action Recognition in Video</a></td>
        <td><details><summary>Abstract</summary><div>Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 (69.4%) and UCF101 (94.2%). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.</div></details></td>
        <td>Top-1: 69.81</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/tsn.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>PP-TSN</td>
        <td><a href="https://paperswithcode.com/paper/temporal-segment-networks-for-action">Temporal Segment Networks for Action Recognition in Video</a></td>
        <td><details><summary>Abstract</summary><div>Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-of-the-art performance on the datasets of HMDB51 (69.4%) and UCF101 (94.2%). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.</div></details></td>
        <td>Top-1: 75.06</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/pp-tsn.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>SlowFast</td>
        <td><a href="https://paperswithcode.com/paper/slowfast-networks-for-video-recognition">SlowFast Networks for Video Recognition</a></td>
        <td><details><summary>Abstract</summary><div>We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA. Code has been made available at: https://github.com/facebookresearch/SlowFast</div></details></td>
        <td>k400, Top-1: 74.35</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/slowfast.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>TimeSformer</td>
        <td><a href="https://paperswithcode.com/paper/is-space-time-attention-all-you-need-for">Is Space-Time Attention All You Need for Video Understanding?</a></td>
        <td><details><summary>Abstract</summary><div>We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: https://github.com/facebookresearch/TimeSformer.</div></details></td>
        <td>Top-1: 77.29</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/timesformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>ST-GCN</td>
        <td><a href="https://paperswithcode.com/paper/spatial-temporal-graph-convolutional-networks-1">Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Dynamics of human body skeletons convey significant information for human action recognition. Conventional approaches for modeling skeletons usually rely on hand-crafted parts or traversal rules, thus resulting in limited expressive power and difficulties of generalization. In this work, we propose a novel model of dynamic skeletons called Spatial-Temporal Graph Convolutional Networks (ST-GCN), which moves beyond the limitations of previous methods by automatically learning both the spatial and temporal patterns from data. This formulation not only leads to greater expressive power but also stronger generalization capability. On two large datasets, Kinetics and NTU-RGBD, it achieves substantial improvements over mainstream methods.</div></details></td>
        <td>ntu-rgbd, Top-1: 82.28</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/stgcn.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>AGCN</td>
        <td><a href="https://paperswithcode.com/paper/skeleton-based-action-recognition-with-multi">Skeleton-Based Action Recognition with Multi-Stream Adaptive Graph Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>Graph convolutional networks (GCNs), which generalize CNNs to more generic non-Euclidean structures, have achieved remarkable performance for skeleton-based action recognition. However, there still exist several issues in the previous GCN-based models. First, the topology of the graph is set heuristically and fixed over all the model layers and input data. This may not be suitable for the hierarchy of the GCN model and the diversity of the data in action recognition tasks. Second, the second-order information of the skeleton data, i.e., the length and orientation of the bones, is rarely investigated, which is naturally more informative and discriminative for the human action recognition. In this work, we propose a novel multi-stream attention-enhanced adaptive graph convolutional neural network (MS-AAGCN) for skeleton-based action recognition. The graph topology in our model can be either uniformly or individually learned based on the input data in an end-to-end manner. This data-driven approach increases the flexibility of the model for graph construction and brings more generality to adapt to various data samples. Besides, the proposed adaptive graph convolutional layer is further enhanced by a spatial-temporal-channel attention module, which helps the model pay more attention to important joints, frames and features. Moreover, the information of both the joints and bones, together with their motion information, are simultaneously modeled in a multi-stream framework, which shows notable improvement for the recognition accuracy. Extensive experiments on the two large-scale datasets, NTU-RGBD and Kinetics-Skeleton, demonstrate that the performance of our model exceeds the state-of-the-art with a significant margin.</div></details></td>
        <td>ntu-rgbd, Top-1: 83.27</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/agcn.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>BMN</td>
        <td><a href="https://paperswithcode.com/paper/bmn-boundary-matching-network-for-temporal">BMN: Boundary-Matching Network for Temporal Action Proposal Generation</a></td>
        <td><details><summary>Abstract</summary><div>Temporal action proposal generation is an challenging and promising task which aims to locate temporal regions in real-world videos where action or event may occur. Current bottom-up proposal generation methods can generate proposals with precise boundary, but cannot efficiently generate adequately reliable confidence scores for retrieving proposals. To address these difficulties, we introduce the Boundary-Matching (BM) mechanism to evaluate confidence scores of densely distributed proposals, which denote a proposal as a matching pair of starting and ending boundaries and combine all densely distributed BM pairs into the BM confidence map. Based on BM mechanism, we propose an effective, efficient and end-to-end proposal generation method, named Boundary-Matching Network (BMN), which generates proposals with precise temporal boundaries as well as reliable confidence scores simultaneously. The two-branches of BMN are jointly trained in an unified framework. We conduct experiments on two challenging datasets: THUMOS-14 and ActivityNet-1.3, where BMN shows significant performance improvement with remarkable efficiency and generalizability. Further, combining with existing action classifier, BMN can achieve state-of-the-art temporal action detection performance.</div></details></td>
        <td>ActivityNet, AUC: 67.23</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/localization/bmn.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>Attention LSTM</td>
        <td><a href="https://paperswithcode.com/paper/beyond-short-snippets-deep-networks-for-video">Beyond Short Snippets: Deep Networks for Video Classification</a></td>
        <td><details><summary>Abstract</summary><div>Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 datasets with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 73.0%).</div></details></td>
        <td>Youtube8M, Hit@1: 89.0</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/attention_lstm.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>TSM</td>
        <td><a href="https://paperswithcode.com/paper/temporal-shift-module-for-efficient-video">TSM: Temporal Shift Module for Efficient Video Understanding</a></td>
        <td><details><summary>Abstract</summary><div>The explosive growth in video streaming gives rise to challenges on performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making it expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNN but maintain 2D CNN’s complexity. TSM shifts part of the channels along the temporal dimension; thus facilitate information exchanged among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extended TSM to online setting, which enables real-time low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranks the first place on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves a low latency of 13ms and 35ms for online video recognition.</div></details></td>
        <td>Top-1: 71.06</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/tsm.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>PP-Timesformer</td>
        <td><a href="https://paperswithcode.com/paper/is-space-time-attention-all-you-need-for">Is Space-Time Attention All You Need for Video Understanding?</a></td>
        <td><details><summary>Abstract</summary><div>We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: https://github.com/facebookresearch/TimeSformer.</div></details></td>
        <td>k400, top1=79.44</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>VideoSwin</td>
        <td><a href="https://paperswithcode.com/paper/video-swin-transformer">Video Swin Transformer</a></td>
        <td><details><summary>Abstract</summary><div>The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks. These video models are all built on Transformer layers that globally connect patches across the spatial and temporal dimensions. In this paper, we instead advocate an inductive bias of locality in video Transformers, which leads to a better speed-accuracy trade-off compared to previous approaches which compute self-attention globally even with spatial-temporal factorization. The locality of the proposed video architecture is realized by adapting the Swin Transformer designed for the image domain, while continuing to leverage the power of pre-trained image models. Our approach achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including on action recognition (84.9 top-1 accuracy on Kinetics-400 and 86.1 top-1 accuracy on Kinetics-600 with ~20x less pre-training data and ~3x smaller model size) and temporal modeling (69.6 top-1 accuracy on Something-Something v2). The code and models will be made publicly available at https://github.com/SwinTransformer/Video-Swin-Transformer.</div></details></td>
        <td>k400, top1=82.4</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>MoViNets</td>
        <td><a href="https://paperswithcode.com/paper/movinets-mobile-video-networks-for-efficient">MoViNets: Mobile Video Networks for Efficient Video Recognition</a></td>
        <td><details><summary>Abstract</summary><div>We present Mobile Video Networks (MoViNets), a family of computation and memory efficient video networks that can operate on streaming video for online inference. 3D convolutional neural networks (CNNs) are accurate at video recognition but require large computation and memory budgets and do not support online inference, making them difficult to work on mobile devices. We propose a three-step approach to improve computational efficiency while substantially reducing the peak memory usage of 3D CNNs. First, we design a video network search space and employ neural architecture search to generate efficient and diverse 3D CNN architectures. Second, we introduce the Stream Buffer technique that decouples memory from video clip duration, allowing 3D CNNs to embed arbitrary-length streaming video sequences for both training and inference with a small constant memory footprint. Third, we propose a simple ensembling technique to improve accuracy further without sacrificing efficiency. These three progressive techniques allow MoViNets to achieve state-of-the-art accuracy and efficiency on the Kinetics, Moments in Time, and Charades video action recognition datasets. For instance, MoViNet-A5-Stream achieves the same accuracy as X3D-XL on Kinetics 600 while requiring 80% fewer FLOPs and 65% less memory. Code will be made available at https://github.com/tensorflow/models/tree/master/official/vision.</div></details></td>
        <td>k400, top1=66.62</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>CTR-GCN</td>
        <td><a href="https://paperswithcode.com/paper/channel-wise-topology-refinement-graph">Channel-wise Topology Refinement Graph Convolution for Skeleton-Based Action Recognition,</a></td>
        <td><details><summary>Abstract</summary><div>Graph convolutional networks (GCNs) have been widely used and achieved remarkable results in skeleton-based action recognition. In GCNs, graph topology dominates feature aggregation and therefore is the key to extracting representative features. In this work, we propose a novel Channel-wise Topology Refinement Graph Convolution (CTR-GC) to dynamically learn different topologies and effectively aggregate joint features in different channels for skeleton-based action recognition. The proposed CTR-GC models channel-wise topologies through learning a shared topology as a generic prior for all channels and refining it with channel-specific correlations for each channel. Our refinement method introduces few extra parameters and significantly reduces the difficulty of modeling channel-wise topologies. Furthermore, via reformulating graph convolutions into a unified form, we find that CTR-GC relaxes strict constraints of graph convolutions, leading to stronger representation capability. Combining CTR-GC with temporal modeling modules, we develop a powerful graph convolutional network named CTR-GCN which notably outperforms state-of-the-art methods on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.</div></details></td>
        <td>NTU-RGBD, xs, joint, top1=89.93</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>MS-TCN</td>
        <td><a href="https://paperswithcode.com/paper/ms-tcn-multi-stage-temporal-convolutional">MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Temporally locating and classifying action segments in long untrimmed videos is of particular interest to many applications like surveillance and robotics. While traditional approaches follow a two-step pipeline, by generating frame-wise probabilities and then feeding them to high-level temporal models, recent approaches use temporal convolutions to directly classify the video frames. In this paper, we introduce a multi-stage architecture for the temporal action segmentation task. Each stage features a set of dilated temporal convolutions to generate an initial prediction that is refined by the next one. This architecture is trained using a combination of a classification loss and a proposed smoothing loss that penalizes over-segmentation errors. Extensive evaluation shows the effectiveness of the proposed model in capturing long-range dependencies and recognizing action segments. Our model achieves state-of-the-art results on three challenging datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and the Breakfast dataset.</div></details></td>
        <td>50salads,acc=81.8</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>ASRF</td>
        <td><a href="https://paperswithcode.com/paper/alleviating-over-segmentation-errors-by">Alleviating Over-segmentation Errors by Detecting Action Boundaries</a></td>
        <td><details><summary>Abstract</summary><div>We propose an effective framework for the temporal action segmentation task, namely an Action Segment Refinement Framework (ASRF). Our model architecture consists of a long-term feature extractor and two branches: the Action Segmentation Branch (ASB) and the Boundary Regression Branch (BRB). The long-term feature extractor provides shared features for the two branches with a wide temporal receptive field. The ASB classifies video frames with action classes, while the BRB regresses the action boundary probabilities. The action boundaries predicted by the BRB refine the output from the ASB, which results in a significant performance improvement. Our contributions are three-fold: (i) We propose a framework for temporal action segmentation, the ASRF, which divides temporal action segmentation into frame-wise action classification and action boundary regression. Our framework refines frame-level hypotheses of action classes using predicted action boundaries. (ii) We propose a loss function for smoothing the transition of action probabilities, and analyze combinations of various loss functions for temporal action segmentation. (iii) Our framework outperforms state-of-the-art methods on three challenging datasets, offering an improvement of up to 13.7% in terms of segmental edit distance and up to 16.1% in terms of segmental F1 score. Our code will be publicly available soon.</div></details></td>
        <td>50salads, 81.6</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>Slowfast+FastRCNN</td>
        <td><a href="https://paperswithcode.com/paper/slowfast-networks-for-video-recognition">SlowFast Networks for Video Recognition</a></td>
        <td><details><summary>Abstract</summary><div>We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA. Code has been made available at: https://github.com/facebookresearch/SlowFast</div></details></td>
        <td>AVA, mAP=23.2</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>ADDS</td>
        <td><a href="https://paperswithcode.com/paper/self-supervised-monocular-depth-estimation-2">Self-supervised Monocular Depth Estimation for All Day Images using Domain Separation</a></td>
        <td><details><summary>Abstract</summary><div>Remarkable results have been achieved by DCNN based self-supervised depth estimation approaches. However, most of these approaches can only handle either day-time or night-time images, while their performance degrades for all-day images due to large domain shift and the variation of illumination between day and night images. To relieve these limitations, we propose a domain-separated network for self-supervised depth estimation of all-day images. Specifically, to relieve the negative influence of disturbing terms (illumination, etc.), we partition the information of day and night image pairs into two complementary sub-spaces: private and invariant domains, where the former contains the unique information (illumination, etc.) of day and night images and the latter contains essential shared information (texture, etc.). Meanwhile, to guarantee that the day and night images contain the same information, the domain-separated network takes the day-time images and corresponding night-time images (generated by GAN) as input, and the private and invariant feature extractors are learned by orthogonality and similarity loss, where the domain gap can be alleviated, thus better depth maps can be expected. Meanwhile, the reconstruction and photometric losses are utilized to estimate complementary information and depth maps effectively. Experimental results demonstrate that our approach achieves state-of-the-art depth estimation results for all-day images on the challenging Oxford RobotCar dataset, proving the superiority of our proposed approach.</div></details></td>
        <td>Oxford RobotCar, night, max-depth 40, AbsRel=0.209</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>ActBERT</td>
        <td><a href="https://paperswithcode.com/paper/actbert-learning-global-local-video-text-1">ActBERT: Learning Global-Local Video-Text Representations </a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we introduce ActBERT for self-supervised learning of joint video-text representations from unlabeled data. First, we leverage global action information to catalyze the mutual interactions between linguistic texts and local regional objects. It uncovers global and local visual clues from paired video sequences and text descriptions for detailed visual and text relation modeling. Second, we introduce an ENtangled Transformer block (ENT) to encode three sources of information, i.e., global actions, local regional objects, and linguistic descriptions. Global-local correspondences are discovered via judicious clues extraction from contextual information. It enforces the joint videotext representation to be aware of fine-grained objects as well as global human intention. We validate the generalization capability of ActBERT on downstream video-and language tasks, i.e., text-video clip retrieval, video captioning, video question answering, action segmentation, and action step localization. ActBERT significantly outperforms the state-of-the-arts, demonstrating its superiority in video-text representation learning.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>T2VLAD</td>
        <td><a href="https://paperswithcode.com/paper/t2vlad-global-local-sequence-alignment-for">T2VLAD: Global-Local Sequence Alignment for Text-Video Retrieval </a></td>
        <td><details><summary>Abstract</summary><div>Text-video retrieval is a challenging task that aims to search relevant video contents based on natural language descriptions. The key to this problem is to measure text-video similarities in a joint embedding space. However, most existing methods only consider the global cross-modal similarity and overlook the local details. Some works incorporate the local comparisons through cross-modal local matching and reasoning. These complex operations introduce tremendous computation. In this paper, we design an efficient global-local alignment method. The multi-modal video sequences and text features are adaptively aggregated with a set of shared semantic centers. The local cross-modal similarities are computed between the video feature and text feature within the same center. This design enables the meticulous local comparison and reduces the computational cost of the interaction between each text-video pair. Moreover, a global alignment method is proposed to provide a global cross-modal measurement that is complementary to the local perspective. The global aggregated visual features also provide additional supervision, which is indispensable to the optimization of the learnable semantic centers. We achieve consistent improvements on three standard text-video retrieval benchmarks and outperform the state-of-the-art by a clear margin.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>21</td>
        <td>CFBI</td>
        <td><a href="https://paperswithcode.com/paper/collaborative-video-object-segmentation-by">Collaborative Video Object Segmentation by Foreground-Background Integration</a></td>
        <td><details><summary>Abstract</summary><div>This paper investigates the principles of embedding learning to tackle the challenging semi-supervised video object segmentation. Different from previous practices that only explore the embedding learning using pixels from foreground object (s), we consider background should be equally treated and thus propose Collaborative video object segmentation by Foreground-Background Integration (CFBI) approach. Our CFBI implicitly imposes the feature embedding from the target foreground object and its corresponding background to be contrastive, promoting the segmentation results accordingly. With the feature embedding from both foreground and background, our CFBI performs the matching process between the reference and the predicted sequence from both pixel and instance levels, making the CFBI be robust to various object scales. We conduct extensive experiments on three popular benchmarks, i.e., DAVIS 2016, DAVIS 2017, and YouTube-VOS. Our CFBI achieves the performance (J&F) of 89.4%, 81.9%, and 81.4%, respectively, outperforming all the other state-of-the-art methods. Code: https://github.com/z-x-yang/CFBI.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>22</td>
        <td>MA-Net</td>
        <td><a href="https://paperswithcode.com/paper/memory-aggregation-networks-for-efficient">Memory aggregation networks for efficient interactive video object segmentation</a></td>
        <td><details><summary>Abstract</summary><div>Interactive video object segmentation (iVOS) aims at efficiently harvesting high-quality segmentation masks of the target object in a video with user interactions. Most previous state-of-the-arts tackle the iVOS with two independent networks for conducting user interaction and temporal propagation, respectively, leading to inefficiencies during the inference stage. In this work, we propose a unified framework, named Memory Aggregation Networks (MA-Net), to address the challenging iVOS in a more efficient way. Our MA-Net integrates the interaction and the propagation operations into a single network, which significantly promotes the efficiency of iVOS in the scheme of multi-round interactions. More importantly, we propose a simple yet effective memory aggregation mechanism to record the informative knowledge from the previous interaction rounds, improving the robustness in discovering challenging objects of interest greatly. We conduct extensive experiments on the validation set of DAVIS Challenge 2018 benchmark. In particular, our MA-Net achieves the J@60 score of 76.1% without any bells and whistles, outperforming the state-of-the-arts with more than 2.7%.</div></details></td>
        <td>暂无</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>23</td>
        <td>PP-TSMv2</td>
        <td><a href="https://paperswithcode.com/paper/temporal-shift-module-for-efficient-video">TSM: Temporal Shift Module for Efficient Video Understanding</a></td>
        <td><details><summary>Abstract</summary><div>The explosive growth in video streaming gives rise to challenges on performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making it expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNN but maintain 2D CNN’s complexity. TSM shifts part of the channels along the temporal dimension; thus facilitate information exchanged among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extended TSM to online setting, which enables real-time low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranks the first place on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves a low latency of 13ms and 35ms for online video recognition.</div></details></td>
        <td>k400, uniform, Top-1: 75.16</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/pp-tsm.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>24</td>
        <td>TokenShift</td>
        <td><a href="https://paperswithcode.com/paper/token-shift-transformer-for-video">Token Shift Transformer for Video Classification</a></td>
        <td><details><summary>Abstract</summary><div>Transformer achieves remarkable successes in understanding 1 and 2-dimensional signals (e.g., NLP and Image Content Understanding). As a potential alternative to convolutional neural networks, it shares merits of strong interpretability, high discriminative power on hyper-scale data, and flexibility in processing varying length inputs. However, its encoders naturally contain computational intensive operations such as pair-wise self-attention, incurring heavy computational burden when being applied on the complex 3-dimensional video signals. This paper presents Token Shift Module (i.e., TokShift), a novel, zero-parameter, zero-FLOPs operator, for modeling temporal relations within each transformer encoder. Specifically, the TokShift barely temporally shifts partial [Class] token features back-and-forth across adjacent frames. Then, we densely plug the module into each encoder of a plain 2D vision transformer for learning 3D video representation. It is worth noticing that our TokShift transformer is a pure convolutional-free video transformer pilot with computational efficiency for video understanding. Experiments on standard benchmarks verify its robustness, effectiveness, and efficiency. Particularly, with input clips of 8/12 frames, the TokShift transformer achieves SOTA precision: 79.83%/80.40% on the Kinetics-400, 66.56% on EGTEA-Gaze+, and 96.80% on UCF-101 datasets, comparable or better than existing SOTA convolutional counterparts. Our code is open-sourced in: https://github.com/VideoNetworks/TokShift-Transformer.</div></details></td>
        <td>UCF101, Top-1: 92.81</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/tokenshift_transformer.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>25</td>
        <td>2s-AGCN</td>
        <td><a href="https://paperswithcode.com/paper/non-local-graph-convolutional-networks-for">Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition</a></td>
        <td><details><summary>Abstract</summary><div>In skeleton-based action recognition, graph convolutional networks (GCNs), which model the human body skeletons as spatiotemporal graphs, have achieved remarkable performance. However, in existing GCN-based methods, the topology of the graph is set manually, and it is fixed over all layers and input samples. This may not be optimal for the hierarchical GCN and diverse samples in action recognition tasks. In addition, the second-order information (the lengths and directions of bones) of the skeleton data, which is naturally more informative and discriminative for action recognition, is rarely investigated in existing methods. In this work, we propose a novel two-stream adaptive graph convolutional network (2s-AGCN) for skeleton-based action recognition. The topology of the graph in our model can be either uniformly or individually learned by the BP algorithm in an end-to-end manner. This data-driven method increases the flexibility of the model for graph construction and brings more generality to adapt to various data samples. Moreover, a two-stream framework is proposed to model both the first-order and the second-order information simultaneously, which shows notable improvement for the recognition accuracy. Extensive experiments on the two large-scale datasets, NTU-RGBD and Kinetics-Skeleton, demonstrate that the performance of our model exceeds the state-of-the-art with a significant margin.</div></details></td>
        <td>NTU-RGBD, joint, CS, top1=85.8</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/agcn2s.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>26</td>
        <td>YOWO</td>
        <td><a href="https://paperswithcode.com/paper/you-only-watch-once-a-unified-cnn">You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization</a></td>
        <td><details><summary>Abstract</summary><div>Spatiotemporal action localization requires the incorporation of two sources of information into the designed architecture: (1) temporal information from the previous frames and (2) spatial information from the key frame. Current state-of-the-art approaches usually extract these information with separate networks and use an extra mechanism for fusion to get detections. In this work, we present YOWO, a unified CNN architecture for real-time spatiotemporal action localization in video streams. YOWO is a single-stage architecture with two branches to extract temporal and spatial information concurrently and predict bounding boxes and action probabilities directly from video clips in one evaluation. Since the whole architecture is unified, it can be optimized end-to-end. The YOWO architecture is fast providing 34 frames-per-second on 16-frames input clips and 62 frames-per-second on 8-frames input clips, which is currently the fastest state-of-the-art architecture on spatiotemporal action localization task. Remarkably, YOWO outperforms the previous state-of-the art results on J-HMDB-21 and UCF101-24 with an impressive improvement of ~3% and ~12%, respectively. Moreover, YOWO is the first and only single-stage architecture that provides competitive results on AVA dataset. We make our code and pretrained models publicly available.</div></details></td>
        <td>UCF101-24, mAP=80.94</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/localization/yowo.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>27</td>
        <td>PoseC3D</td>
        <td><a href="https://paperswithcode.com/paper/revisiting-skeleton-based-action-recognition">Revisiting Skeleton-based Action Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Human skeleton, as a compact representation of human action, has received increasing attention in recent years. Many skeleton-based action recognition methods adopt graph convolutional networks (GCN) to extract features on top of human skeletons. Despite the positive results shown in previous works, GCN-based methods are subject to limitations in robustness, interoperability, and scalability. In this work, we propose PoseC3D, a new approach to skeleton-based action recognition, which relies on a 3D heatmap stack instead of a graph sequence as the base representation of human skeletons. Compared to GCN-based methods, PoseC3D is more effective in learning spatiotemporal features, more robust against pose estimation noises, and generalizes better in cross-dataset settings. Also, PoseC3D can handle multiple-person scenarios without additional computation cost, and its features can be easily integrated with other modalities at early fusion stages, which provides a great design space to further boost the performance. On four challenging datasets, PoseC3D consistently obtains superior performance, when used alone on skeletons and in combination with the RGB modality.</div></details></td>
        <td>UCF101-Skeleton, top1=87.05</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleVideo/blob/develop/docs/en/model_zoo/recognition/posec3d.md">快速开始</a></td>
        </td>
    </tr>
</table>
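
上表中 PP-TSM、TSM、PP-TSMv2 等模型的核心是 Temporal Shift Module:将一部分通道沿时间维度向前/向后平移一帧,使相邻帧之间交换信息,而不引入额外参数和计算量。下面给出一个基于 PaddlePaddle 的最小示意实现(张量布局假设为 [N, T, C, H, W],函数名 `temporal_shift` 与通道划分比例均为示例设定,并非 PaddleVideo 的官方实现):

```python
# Temporal Shift 最小示意:部分通道沿时间维前移/后移一帧,其余通道保持不变。
# 输入布局假设为 [N, T, C, H, W];仅供理解原理,并非 PaddleVideo 官方实现。
import paddle

def temporal_shift(x: paddle.Tensor, shift_ratio: float = 0.25) -> paddle.Tensor:
    n, t, c, h, w = x.shape
    fold = int(c * shift_ratio) // 2      # 前移、后移各占 shift_ratio/2 的通道
    part_fwd = x[:, :, :fold]             # 将要“前移”的通道(取下一帧的内容)
    part_bwd = x[:, :, fold:2 * fold]     # 将要“后移”的通道(取上一帧的内容)
    part_keep = x[:, :, 2 * fold:]        # 不移动的通道
    pad_fwd = paddle.zeros_like(part_fwd[:, :1])
    pad_bwd = paddle.zeros_like(part_bwd[:, :1])
    shifted_fwd = paddle.concat([part_fwd[:, 1:], pad_fwd], axis=1)   # 末帧补零
    shifted_bwd = paddle.concat([pad_bwd, part_bwd[:, :-1]], axis=1)  # 首帧补零
    return paddle.concat([shifted_fwd, shifted_bwd, part_keep], axis=2)

if __name__ == "__main__":
    frames = paddle.randn([2, 8, 64, 56, 56])   # 2 段视频,每段 8 帧,64 通道
    out = temporal_shift(frames)
    print(out.shape)                            # [2, 8, 64, 56, 56]
```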

### PaddleNLP
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>IGSQL</td>
        <td><a href="https://paperswithcode.com/paper/igsql-database-schema-interaction-graph-based"> IGSQL: Database Schema Interaction Graph Based Neural Model for Context-Dependent Text-to-SQL Generation</a></td>
        <td><details><summary>Abstract</summary><div>Context-dependent text-to-SQL task has drawn much attention in recent years. Previous models on context-dependent text-to-SQL task only concentrate on utilizing historical user inputs. In this work, in addition to using encoders to capture historical information of user inputs, we propose a database schema interaction graph encoder to utilize historical information of database schema items. In decoding phase, we introduce a gate mechanism to weigh the importance of different vocabularies and then make the prediction of SQL tokens. We evaluate our model on the benchmark SParC and CoSQL datasets, which are two large complex context-dependent cross-domain text-to-SQL datasets. Our model outperforms previous state-of-the-art model by a large margin and achieves new state-of-the-art results on the two datasets. The comparison and ablation results demonstrate the efficacy of our model and the usefulness of the database schema interaction graph encoder.</div></details></td>
        <td>CoSQL Test / question match accuracy: 42.5 / interaction match accuracy: 15.0</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>RAT-SQL</td>
        <td><a href="https://paperswithcode.com/paper/rat-sql-relation-aware-schema-encoding-and-1">RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers</a></td>
        <td><details><summary>Abstract</summary><div>When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query. We present a unified framework, based on the relation-aware self-attention mechanism, to address schema encoding, schema linking, and feature representation within a text-to-SQL encoder. On the challenging Spider dataset this framework boosts the exact match accuracy to 57.2%, surpassing its best counterparts by 8.7% absolute improvement. Further augmented with BERT, it achieves the new state-of-the-art performance of 65.6% on the Spider leaderboard. In addition, we observe qualitative improvements in the model's understanding of schema linking and alignment. Our implementation will be open-sourced at this https URL.</div></details></td>
        <td>DuSQL: 64.3</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>BiGRU-CRF</td>
        <td><a href="https://paperswithcode.com/paper/chinese-lexical-analysis-with-deep-bi-gru-crf#code">Chinese Lexical Analysis with Deep Bi-GRU-CRF Network</a></td>
        <td><details><summary>Abstract</summary><div>Lexical analysis is believed to be a crucial step towards natural language understanding and has been widely studied. Recent years, end-to-end lexical analysis models with recurrent neural networks have gained increasing attention. In this report, we introduce a deep Bi-GRU-CRF network that jointly models word segmentation, part-of-speech tagging and named entity recognition tasks. We trained the model using several massive corpus pre-tagged by our best Chinese lexical analysis tool, together with a small, yet high-quality human annotated corpus. We conducted balanced sampling between different corpora to guarantee the influence of human annotations, and fine-tune the CRF decoding layer regularly during the training progress. As evaluated by linguistic experts, the model achieved a 95.5% accuracy on the test set, roughly 13% relative error reduction over our (previously) best Chinese lexical analysis tool. The model is computationally efficient, achieving the speed of 2.3K characters per second with one thread.</div></details></td>
        <td>数据集未开源</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>Deep Biaffine Attention</td>
        <td><a href="https://paperswithcode.com/paper/deep-biaffine-attention-for-neural-dependency">Deep Biaffine Attention for Neural Dependency Parsing</a></td>
        <td><details><summary>Abstract</summary><div>This paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark---outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%---and comparable to the highest performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches.</div></details></td>
        <td>NLPCC2013_EVSAM05_THU / UAS: 92.20 LAS: 85.10</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>ERNIE-CSC</td>
        <td><a href="https://paperswithcode.com/paper/correcting-chinese-spelling-errors-with">Correcting Chinese Spelling Errors with Phonetic Pre-training</a></td>
        <td><details><summary>Abstract</summary><div>Chinese spelling correction (CSC) is an important yet challenging task. Existing state-of-the-art methods either only use a pre-trained language model or incorporate phonological information as external knowledge. In this paper, we propose a novel end-to-end CSC model that integrates phonetic features into language model by leveraging the powerful pre-training and fine-tuning method. Instead of conventionally masking words with a special token in training language model, we replace words with phonetic features and their sound-alike words. We further propose an adaptive weighted objective to jointly train error detection and correction in a unified framework. Experimental results show that our model achieves significant improvements on SIGHAN datasets and outperforms the previous state-of-the-art methods.</div></details></td>
        <td>SIGHAN 13 / Detection F1: 0.8348 Correction F1: 0.8217</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>PLATO-2</td>
        <td><a href="https://paperswithcode.com/paper/plato-2-towards-building-an-open-domain">PLATO-2: Towards Building an Open-Domain Chatbot via Curriculum Learning</a></td>
        <td><details><summary>Abstract</summary><div>To build a high-quality open-domain chatbot, we introduce the effective training process of PLATO-2 via curriculum learning. There are two stages involved in the learning process. In the first stage, a coarse-grained generation model is trained to learn response generation under the simplified framework of one-to-one mapping. In the second stage, a fine-grained generative model augmented with latent variables and an evaluation model are further trained to generate diverse responses and to select the best response, respectively. PLATO-2 was trained on both Chinese and English data, whose effectiveness and superiority are verified through comprehensive evaluations, achieving new state-of-the-art results.</div></details></td>
        <td>Self-chat / Distinct-1: 0.169 / Distinct-2: 0.613</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>Seq2Seq</td>
        <td><a href="Neural Machine Translation By Jointly Learning To Align And Translate">Neural Machine Translation By Jointly Learning To Align And Translate</a></td>
        <td><details><summary>Abstract</summary><div>Neural Machine Translation By Jointly Learning To Align And Translate</div></details></td>
        <td>IWSLT 15 en-vi翻译模型 / BLEU: 24.33</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>Transformer</td>
        <td><a href="https://paperswithcode.com/paper/attention-is-all-you-need">attention is all you need</a></td>
        <td><details><summary>Abstract</summary><div>The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.</div></details></td>
        <td>WMT14 en-de / Transformer base / BLEU: 27.3</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>STACL</td>
        <td><a href="https://paperswithcode.com/paper/stacl-simultaneous-translation-with">STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework</a></td>
        <td><details><summary>Abstract</summary><div>Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences. While the conventional seq-to-seq framework is only suitable for full-sentence translation, we propose a novel prefix-to-prefix framework for simultaneous translation that implicitly learns to anticipate in a single translation model. Within this framework, we present a very simple yet surprisingly effective “wait-k” policy trained to generate the target sentence concurrently with the source sentence, but always k words behind. Experiments show our strategy achieves low latency and reasonable quality (compared to full-sentence translation) on 4 directions: zh↔en and de↔en.</div></details></td>
        <td>Wait-3 BLEU: 34.24</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>SKEP</td>
        <td><a href="https://paperswithcode.com/paper/skep-sentiment-knowledge-enhanced-pre">SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis</a></td>
        <td><details><summary>Abstract</summary><div>Recently, sentiment analysis has seen remarkable advance with the help of pre-training approaches. However, sentiment knowledge, such as sentiment words and aspect-sentiment pairs, is ignored in the process of pre-training, despite the fact that they are widely used in traditional sentiment analysis approaches. In this paper, we introduce Sentiment Knowledge Enhanced Pre-training (SKEP) in order to learn a unified sentiment representation for multiple sentiment analysis tasks. With the help of automatically-mined knowledge, SKEP conducts sentiment masking and constructs three sentiment knowledge prediction objectives, so as to embed sentiment information at the word, polarity and aspect level into pre-trained sentiment representation. In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between words in a pair. Experiments on three kinds of sentiment tasks show that SKEP significantly outperforms strong pre-training baseline, and achieves new state-of-the-art results on most of the test datasets. We release our code at this https URL.</div></details></td>
        <td>SST-2 / acc: 97.60</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>Sentence-Transformer</td>
        <td><a href="https://paperswithcode.com/paper/sentence-bert-sentence-embeddings-using">Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks</a></td>
        <td><details><summary>Abstract</summary><div>BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. The construction of BERT makes it unsuitable for semantic similarity search as well as for unsupervised tasks like clustering.In this publication, we present Sentence-BERT (SBERT), a modification of the pretrained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT. We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods.</div></details></td>
        <td>SST / SBERT-NLI-large / 90.66</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>EFL</td>
        <td><a href="https://paperswithcode.com/paper/entailment-as-few-shot-learner">Entailment as Few-Shot Learner</a></td>
        <td><details><summary>Abstract</summary><div>Large pre-trained language models (LMs) have demonstrated remarkable ability as few-shot learners. However, their success hinges largely on scaling model parameters to a degree that makes it challenging to train and serve. In this paper, we propose a new approach, named as EFL, that can turn small LMs into better few-shot learners. The key idea of this approach is to reformulate potential NLP task into an entailment one, and then fine-tune the model with as little as 8 examples. We further demonstrate our proposed method can be: (i) naturally combined with an unsupervised contrastive learning-based data augmentation method; (ii) easily extended to multilingual few-shot learning. A systematic evaluation on 18 standard NLP tasks demonstrates that this approach improves the various existing SOTA few-shot learning methods by 12\%, and yields competitive few-shot performance with 500 times larger models, such as GPT-3.</div></details></td>
        <td>SST-2 / acc: 90.8</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>PET</td>
        <td><a href="https://paperswithcode.com/paper/exploiting-cloze-questions-for-few-shot-text-1">Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference</a></td>
        <td><details><summary>Abstract</summary><div>Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms supervised training and strong semi-supervised approaches in low-resource settings by a large margin.</div></details></td>
        <td>MNLI / acc: 85.3 (m)</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>P-Tuning</td>
        <td><a href="https://paperswithcode.com/paper/gpt-understands-too">GPT Understands, Too</a></td>
        <td><details><summary>Abstract</summary><div>While GPTs with traditional fine-tuning fail to achieve strong results on natural language understanding (NLU), we show that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method P-tuning -- which employs trainable continuous prompt embeddings. On the knowledge probing (LAMA) benchmark, the best GPT recovers 64\% (P@1) of world knowledge without any additional text provided during test time, which substantially improves the previous best by 20+ percentage points. On the SuperGlue benchmark, GPTs achieve comparable and sometimes better performance to similar-sized BERTs in supervised learning. Importantly, we find that P-tuning also improves BERTs' performance in both few-shot and supervised settings while largely reducing the need for prompt engineering. Consequently, P-tuning outperforms the state-of-the-art approaches on the few-shot SuperGlue benchmark.</div></details></td>
        <td>BoolQ / acc: 77.8</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>Pointer Generator Network</td>
        <td><a href="https://paperswithcode.com/paper/get-to-the-point-summarization-with-pointer">Get To The Point: Summarization with Pointer-Generator Networks</a></td>
        <td><details><summary>Abstract</summary><div>Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.</div></details></td>
        <td>CNN/DailyMail / Rouge-L: 39.53</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>ERNIE 3.0</td>
        <td><a href="https://paperswithcode.com/paper/ernie-enhanced-representation-through">ERNIE: Enhanced Representation through Knowledge Integration</a></td>
        <td><details><summary>Abstract</summary><div>We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowledge IntEgration). Inspired by the masking strategy of BERT (Devlin et al., 2018), ERNIE is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. Entity-level strategy masks entities which are usually composed of multiple words. Phrase-level strategy masks the whole phrase which is composed of several words standing together as a conceptual unit. Experimental results show that ERNIE outperforms other baseline methods, achieving new state-of-the-art results on five Chinese natural language processing tasks including natural language inference, semantic similarity, named entity recognition, sentiment analysis and question answering. We also demonstrate that ERNIE has more powerful knowledge inference capacity on a cloze test.</div></details></td>
        <td>XNLI / dev: 79.9</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>ERNIE-DOC</td>
        <td><a href="https://paperswithcode.com/paper/ernie-doc-the-retrospective-long-document">ERNIE-Doc: A Retrospective Long-Document Modeling Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Transformers are not suited for processing long documents, due to their quadratically increasing memory and time consumption. Simply truncating a long document or applying the sparse attention mechanism will incur the context fragmentation problem or lead to an inferior modeling capability against comparable model sizes. In this paper, we propose ERNIE-Doc, a document-level language pretraining model based on Recurrence Transformers. Two well-designed techniques, namely the retrospective feed mechanism and the enhanced recurrence mechanism, enable ERNIE-Doc, which has a much longer effective context length, to capture the contextual information of a complete document. We pretrain ERNIE-Doc to explicitly learn the relationships among segments with an additional document-aware segment-reordering objective. Various experiments were conducted on both English and Chinese document-level tasks. ERNIE-Doc improved the state-of-the-art language modeling result of perplexity to 16.8 on WikiText-103. Moreover, it outperformed competitive pretraining models by a large margin on most language understanding tasks, such as text classification and question answering.</div></details></td>
        <td>IMDB / ERNIE-DOC-Large / acc: 97.1</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>ERNIE-GEN</td>
        <td><a href="https://paperswithcode.com/paper/ernie-gen-an-enhanced-multi-flow-pre-training">ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation</a></td>
        <td><details><summary>Abstract</summary><div>Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks. To address this issue, we propose an enhanced multi-flow sequence to sequence pre-training and fine-tuning framework named ERNIE-GEN, which bridges the discrepancy between training and inference with an infilling generation mechanism and a noise-aware generation method. To make generation closer to human writing patterns, this framework introduces a span-by-span generation flow that trains the model to predict semantically-complete spans consecutively rather than predicting word by word. Unlike existing pre-training methods, ERNIE-GEN incorporates multi-granularity target sampling to construct pre-training data, which enhances the correlation between encoder and decoder. Experimental results demonstrate that ERNIE-GEN achieves state-of-the-art results with a much smaller amount of pre-training data and parameters on a range of language generation tasks, including abstractive summarization (Gigaword and CNN/DailyMail), question generation (SQuAD), dialogue generation (Persona-Chat) and generative question answering (CoQA).</div></details></td>
        <td>Gigaword 10k (10k training samples) / ERNIE-GEN LARGE / RG-L: 32.50</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>ERNIE-GRAM</td>
        <td><a href="https://paperswithcode.com/paper/ernie-gram-pre-training-with-explicitly-n">ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding</a></td>
        <td><details><summary>Abstract</summary><div>Coarse-grained linguistic information, such as named entities or phrases, facilitates adequately representation learning in pre-training. Previous works mainly focus on extending the objective of BERT's Masked Language Modeling (MLM) from masking individual tokens to contiguous sequences of n tokens. We argue that such contiguously masking method neglects to model the intra-dependencies and inter-relation of coarse-grained linguistic information. As an alternative, we propose ERNIE-Gram, an explicitly n-gram masking method to enhance the integration of coarse-grained information into pre-training. In ERNIE-Gram, n-grams are masked and predicted directly using explicit n-gram identities rather than contiguous sequences of n tokens. Furthermore, ERNIE-Gram employs a generator model to sample plausible n-gram identities as optional n-gram masks and predict them in both coarse-grained and fine-grained manners to enable comprehensive n-gram prediction and relation modeling. We pre-train ERNIE-Gram on English and Chinese text corpora and fine-tune on 19 downstream tasks. Experimental results show that ERNIE-Gram outperforms previous pre-training models like XLNet and RoBERTa by a large margin, and achieves comparable results with state-of-the-art methods. The source codes and pre-trained models have been released at this https URL.</div></details></td>
        <td>MNLI / 89.1</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>RoFormer</td>
        <td><a href="https://paperswithcode.com/paper/roformer-enhanced-transformer-with-rotary">RoFormer: Enhanced Transformer with Rotary Position Embedding</a></td>
        <td><details><summary>Abstract</summary><div>Position encoding in transformer architecture provides supervision for dependency modeling between elements at different positions in the sequence. We investigate various methods to encode positional information in transformer-based language models and propose a novel implementation named Rotary Position Embedding(RoPE). The proposed RoPE encodes absolute positional information with rotation matrix and naturally incorporates explicit relative position dependency in self-attention formulation. Notably, RoPE comes with valuable properties such as flexibility of being expand to any sequence lengths, decaying inter-token dependency with increasing relative distances, and capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We release the theoretical analysis along with some preliminary experiment results on Chinese data. The undergoing experiment for English benchmark will soon be updated.</div></details></td>
        <td>THUCNews / dev: 98</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>21</td>
        <td>BART</td>
        <td><a href="https://paperswithcode.com/paper/bart-denoising-sequence-to-sequence-pre">BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension</a></td>
        <td><details><summary>Abstract</summary><div>We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.</div></details></td>
        <td>CNN/DailyMail / bart-base / Rouge-L: 41.0132</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>22</td>
        <td>ALBERT</td>
        <td><a href="https://paperswithcode.com/paper/albert-a-lite-bert-for-self-supervised">ALBERT: A Lite BERT for Self-supervised Learning of Language Representations</a></td>
        <td><details><summary>Abstract</summary><div>Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at this https URL.</div></details></td>
        <td>MNLI / xxlarge / 88.0</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>23</td>
        <td>BERT-Base</td>
        <td><a href="https://paperswithcode.com/paper/bert-pre-training-of-deep-bidirectional">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></td>
        <td><details><summary>Abstract</summary><div>We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.</div></details></td>
        <td>MNLI-(m/mm) / 86.7/85.9</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>24</td>
        <td>BigBird</td>
        <td><a href="https://paperswithcode.com/paper/big-bird-transformers-for-longer-sequences">Big Bird: Transformers for Longer Sequences</a></td>
        <td><details><summary>Abstract</summary><div>Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.</div></details></td>
        <td>HotpotQA / Ans: 75.5, Sup: 87.1, Joint: 67.8</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>25</td>
        <td>DistilBert</td>
        <td><a href="https://paperswithcode.com/method/distillbert">DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter</a></td>
        <td><details><summary>Abstract</summary><div>As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.</div></details></td>
        <td>SST-2 / dev: 91.4</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>26</td>
        <td>ELECTRA</td>
        <td><a href="https://paperswithcode.com/paper/electra-pre-training-text-encoders-as-1">ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators</a></td>
        <td><details><summary>Abstract</summary><div>Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute. </div></details></td>
        <td>MNLI / ELECTRA-1.75M / 90.9</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>27</td>
        <td>GPT</td>
        <td><a href="https://paperswithcode.com/method/gpt-2">Language Models are Unsupervised Multitask Learners</a></td>
        <td><details><summary>Abstract</summary><div>Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.</div></details></td>
        <td>SST-2 / acc: 94.495</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>28</td>
        <td>NeZha</td>
        <td><a href="https://paperswithcode.com/paper/nezha-neural-contextualized-representation">NEZHA: Neural Contextualized Representation for Chinese Language Understanding</a></td>
        <td><details><summary>Abstract</summary><div>The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora. In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks. The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy, Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti) and natural language inference (XNLI).</div></details></td>
        <td>XNLI / NEZHA-Large-WWM / dev: 82.21</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>29</td>
        <td>RoBERTa</td>
        <td><a href="https://paperswithcode.com/method/roberta">RoBERTa: A Robustly Optimized BERT Pretraining Approach</a></td>
        <td><details><summary>Abstract</summary><div>Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.</div></details></td>
        <td>MNLI / dev: 90.2/90.2</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>30</td>
        <td>MiniLMv2</td>
        <td><a href="https://paperswithcode.com/search?q_meta=&q_type=&q=MiniLMv2">MINILMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers</a></td>
        <td><details><summary>Abstract</summary><div>We generalize deep self-attention distillation in MINILM (Wang et al., 2020) by only using self-attention relation distillation for task-agnostic compression of pretrained Transformers. In particular, we define multi-head self-attention relations as scaled dot-product between the pairs of query, key, and value vectors within each self-attention module. Then we employ the above relational knowledge to train the student model. Besides its simplicity and unified principle, more favorably, there is no restriction in terms of the number of student’s attention heads, while most previous work has to guarantee the same head number between teacher and student. Moreover, the fine-grained self-attention relations tend to fully exploit the interaction knowledge learned by Transformer. In addition, we thoroughly examine the layer selection strategy for teacher models, rather than just relying on the last layer as in MINILM. We conduct extensive experiments on compressing both monolingual and multilingual pretrained models. Experimental results demonstrate that our models distilled from base-size and large-size teachers (BERT, RoBERTa and XLM-R) outperform the state-of-the-art.</div></details></td>
        <td>AFQMC / dev: 71.38</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>31</td>
        <td>TinyBert</td>
        <td><a href="https://paperswithcode.com/paper/190910351">TinyBERT: Distilling BERT for Natural Language Understanding</a></td>
        <td><details><summary>Abstract</summary><div>Language model pre-training, such as BERT, has significantly improved the performances of many natural language processing tasks. However, pre-trained language models are usually computationally expensive, so it is difficult to efficiently execute them on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of the Transformer-based models. By leveraging this new KD method, the plenty of knowledge encoded in a large “teacher” BERT can be effectively transferred to a small “student” TinyBERT. Then, we introduce a new two-stage learning framework for TinyBERT, which performs Transformer distillation at both the pretraining and task-specific learning stages. This framework ensures that TinyBERT can capture the general-domain as well as the task-specific knowledge in BERT. TinyBERT4 with 4 layers is empirically effective and achieves more than 96.8% the performance of its teacher BERT-Base on the GLUE benchmark, while being 7.5x smaller and 9.4x faster on inference. TinyBERT4 is also significantly better than 4-layer state-of-the-art baselines on BERT distillation, with only ∼28% parameters and ∼31% inference time of them. Moreover, TinyBERT6 with 6 layers performs on-par with its teacher BERT-Base.</div></details></td>
        <td>SST-2 / dev: 93.00</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>32</td>
        <td>XLNet</td>
        <td><a href="https://paperswithcode.com/paper/xlnet-generalized-autoregressive-pretraining">XLNet: Generalized Autoregressive Pretraining for Language Understanding</a></td>
        <td><details><summary>Abstract</summary><div>With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.</div></details></td>
        <td>SST-2 / dev: 94.3</td>
        <td><a href="无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>33</td>
        <td>ERNIE-M</td>
        <td><a href="https://paperswithcode.com/paper/ernie-m-enhanced-multilingual-representation">ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora</a></td>
        <td><details><summary>Abstract</summary><div>Recent studies have demonstrated that pretrained cross-lingual models achieve impressive performance in downstream cross-lingual tasks. This improvement benefits from learning a large amount of monolingual and parallel corpora. Although it is generally acknowledged that parallel corpora are critical for improving the model performance, existing methods are often constrained by the size of parallel corpora, especially for low-resource languages. In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance. Our key insight is to integrate back-translation into the pre-training process. We generate pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments between different languages, thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and delivers new state-of-the-art results in various cross-lingual downstream tasks.</div></details></td>
        <td>XNLI</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>34</td>
        <td>FNet</td>
        <td><a href="https://paperswithcode.com/paper/fnet-mixing-tokens-with-fourier-transforms">FNet: Mixing Tokens with Fourier Transforms</a></td>
        <td><details><summary>Abstract</summary><div>We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that “mix” input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the “efficient” Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.</div></details></td>
        <td>GLUE</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>35</td>
        <td>LUKE</td>
        <td><a href="https://paperswithcode.com/paper/luke-deep-contextualized-entity">LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention</a></td>
        <td><details><summary>Abstract</summary><div>Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer (Vaswani et al., 2017). The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT (Devlin et al., 2019). The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at https://github.com/studio-ousia/luke.</div></details></td>
        <td>SQuAD 1.1</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>36</td>
        <td>ProphetNet</td>
        <td><a href="https://paperswithcode.com/paper/prophetnet-predicting-future-n-gram-for">ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training</a></td>
        <td><details><summary>Abstract</summary><div>This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of optimizing one-step-ahead prediction in the traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction that predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large-scale dataset (160GB), respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pre-training corpus.</div></details></td>
        <td>SQuAD 1.1</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>37</td>
        <td>Rembert</td>
        <td><a href="https://paperswithcode.com/paper/rethinking-embedding-coupling-in-pre-trained-1">Rethinking Embedding Coupling in Pre-trained Language Models</a></td>
        <td><details><summary>Abstract</summary><div>We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model’s last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.</div></details></td>
        <td>XTREME</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>38</td>
        <td>UIE</td>
        <td><a href="https://paperswithcode.com/paper/unified-structure-generation-for-universal">Unified Structure Generation for Universal Information Extraction</a></td>
        <td><details><summary>Abstract</summary><div>Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas. In this paper, we propose a unified text-to-structure generation framework, namely UIE, which can universally model different IE tasks, adaptively generate targeted structures, and collaboratively learn general IE abilities from different knowledge sources. Specifically, UIE uniformly encodes different extraction structures via a structured extraction language, adaptively generates target extractions via a schema-based prompt mechanism - structural schema instructor, and captures the common IE abilities via a large-scale pre-trained text-to-structure model. Experiments show that UIE achieved the state-of-the-art performance on 4 IE tasks, 13 datasets, and on all supervised, low-resource, and few-shot settings for a wide range of entity, relation, event and sentiment extraction tasks and their unification. These results verified the effectiveness, universality, and transferability of UIE.</div></details></td>
        <td>F1</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>39</td>
        <td>Blenderbot</td>
        <td><a href="https://paperswithcode.com/paper/blenderbot-3-a-deployed-conversational-agent">BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage</a></td>
        <td><details><summary>Abstract</summary><div>We present BlenderBot 3, a 175B parameter dialogue model capable of open-domain conversation with access to the internet and a long-term memory, and having been trained on a large number of user defined tasks. We release both the model weights and code, and have also deployed the model on a public web page to interact with organic users. This technical report describes how the model was built (architecture, model and training scheme), and details of its deployment, including safety mechanisms. Human evaluations show its superiority to existing open-domain dialogue agents, including its predecessors (Roller et al., 2021; Komeili et al., 2022). Finally, we detail our plan for continual learning using the data collected from deployment, which will also be publicly released. The goal of this research program is thus to enable the community to study ever-improving responsible agents that learn through interaction.</div></details></td>
        <td>F1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>40</td>
        <td>BlenderbotSmall</td>
        <td><a href="https://paperswithcode.com/paper/blenderbot-3-a-deployed-conversational-agent">BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage</a></td>
        <td><details><summary>Abstract</summary><div>We present BlenderBot 3, a 175B parameter dialogue model capable of open-domain conversation with access to the internet and a long-term memory, and having been trained on a large number of user defined tasks. We release both the model weights and code, and have also deployed the model on a public web page to interact with organic users. This technical report describes how the model was built (architecture, model and training scheme), and details of its deployment, including safety mechanisms. Human evaluations show its superiority to existing open-domain dialogue agents, including its predecessors (Roller et al., 2021; Komeili et al., 2022). Finally, we detail our plan for continual learning using the data collected from deployment, which will also be publicly released. The goal of this research program is thus to enable the community to study ever-improving responsible agents that learn through interaction.</div></details></td>
        <td>F1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>41</td>
        <td>ChineseBert</td>
        <td><a href="https://paperswithcode.com/paper/chinesebert-chinese-pretraining-enhanced-by">ChineseBERT: Chinese Pretraining Enhanced byGlyph and Pinyin Information</a></td>
        <td><details><summary>Abstract</summary><div>Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\it glyph} and {\it pinyin} information of Chinese characters into language model pretraining. The glyph embedding is obtained based on different fonts of a Chinese character, being able to capture character semantics from the visual features, and the pinyin embedding characterizes the pronunciation of Chinese characters, which handles the highly prevalent heteronym phenomenon in Chinese (the same character has different pronunciations with different meanings). Pretrained on large-scale unlabeled Chinese corpus, the proposed ChineseBERT model yields significant performance boost over baseline models with fewer training steps. The porpsoed model achieves new SOTA performances on a wide range of Chinese NLP tasks, including machine reading comprehension, natural language inference, text classification, sentence pair matching, and competitive performances in named entity recognition. Code and pretrained models are publicly available at https://github.com/ShannonAI/ChineseBert.</div></details></td>
        <td>F1</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>42</td>
        <td>CodeGen</td>
        <td><a href="https://paperswithcode.com/paper/a-conversational-paradigm-for-program">CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis</a></td>
        <td><details><summary>Abstract</summary><div>Program synthesis strives to generate a computer program as a solution to a given problem specification, expressed with input-output examples or natural language descriptions. The prevalence of large language models advances the state-of-the-art for program synthesis, though limited training resources and data impede open access to such models. To democratize this, we train and release a family of large language models up to 16.1B parameters, called CODEGEN, on natural language and programming language data, and open source the training library JAXFORMER. We show the utility of the trained model by demonstrating that it is competitive with the previous state-of-the-art on zero-shot Python code generation on HumanEval. We further investigate the multi-step paradigm for program synthesis, where a single program is factorized into multiple prompts specifying subproblems. To this end, we construct an open benchmark, Multi-Turn Programming Benchmark (MTPB), consisting of 115 diverse problem sets that are factorized into multi-turn prompts. Our analysis on MTPB shows that the same intent provided to CODEGEN in multi-turn fashion significantly improves program synthesis over that provided as a single turn. We make the training library JAXFORMER and model checkpoints available as open source contribution: https://github.com/salesforce/CodeGen.</div></details></td>
        <td>F1</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>43</td>
        <td>ConvBert</td>
        <td><a href="https://paperswithcode.com/method/convbert">ConvBERT: Improving BERT with Span-based Dynamic Convolution</a></td>
        <td><details><summary>Abstract</summary><div>ConvBERT is a modification on the BERT architecture which uses a span-based dynamic convolution to replace self-attention heads to directly model local dependencies. Specifically a new mixed attention module replaces the self-attention modules in BERT, which leverages the advantages of convolution to better capture local dependency. Additionally, a new span-based dynamic convolution operation is used to utilize multiple input tokens to dynamically generate the convolution kernel. Lastly, ConvBERT also incorporates some new model designs including the bottleneck attention and grouped linear operator for the feed-forward module (reducing the number of parameters).</div></details></td>
        <td>F1</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>44</td>
        <td>CTRL</td>
        <td><a href="https://paperswithcode.com/method/ctrl"> CTRL: A Conditional Transformer Language Model for Controllable Generation</a></td>
        <td><details><summary>Abstract</summary><div>CTRL is a conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence.</div></details></td>
        <td>PPL</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>45</td>
        <td>DALL-E</td>
        <td><a href="https://paperswithcode.com/paper/zero-shot-text-to-image-generation">Zero-Shot Text-to-Image Generation</a></td>
        <td><details><summary>Abstract</summary><div>Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.</div></details></td>
        <td>FID</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>46</td>
        <td>Ernie-Layout</td>
        <td><a href="https://paperswithcode.com/paper/ernie-layout-layout-knowledge-enhanced-pre">ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding</a></td>
        <td><details><summary>Abstract</summary><div></div></details></td>
        <td>F1</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>47</td>
        <td>Ernie-Vil</td>
        <td><a href="https://paperswithcode.com/paper/ernie-vil-knowledge-enhanced-vision-language">ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph</a></td>
        <td><details><summary>Abstract</summary><div>We propose a knowledge-enhanced approach, ERNIE-ViL, which incorporates structured knowledge obtained from scene graphs to learn joint representations of vision-language. ERNIE-ViL tries to build the detailed semantic connections (objects, attributes of objects and relationships between objects) across vision and language, which are essential to vision-language cross-modal tasks. Utilizing scene graphs of visual scenes, ERNIE-ViL constructs Scene Graph Prediction tasks, i.e., Object Prediction, Attribute Prediction and Relationship Prediction tasks in the pre-training phase. Specifically, these prediction tasks are implemented by predicting nodes of different types in the scene graph parsed from the sentence. Thus, ERNIE-ViL can learn the joint representations characterizing the alignments of the detailed semantics across vision and language. After pre-training on large scale image-text aligned datasets, we validate the effectiveness of ERNIE-ViL on 5 cross-modal downstream tasks. ERNIE-ViL achieves state-of-the-art performances on all these tasks and ranks the first place on the VCR leaderboard with an absolute improvement of 3.7%.</div></details></td>
        <td>Recall</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>48</td>
        <td>Funnel-Transformer</td>
        <td><a href="https://paperswithcode.com/paper/funnel-transformer-filtering-out-sequential">Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing</a></td>
        <td><details><summary>Abstract</summary><div>With the success of language pretraining, it is highly desirable to develop more efficient architectures of good scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further improve the model capacity. In addition, to perform token-level predictions as required by common pretraining objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading comprehension. The code and pretrained checkpoints are available at https://github.com/laiguokun/Funnel-Transformer.</div></details></td>
        <td>F1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>49</td>
        <td>GAU-alpha</td>
        <td><a href="https://arxiv.org/abs/2202.10447 ">Transformer Quality in Linear Time</a></td>
        <td><details><summary>Abstract</summary><div>We revisit the design choices in Transformers, and propose methods to address their weaknesses in handling long sequences. First, we propose a simple layer named gated attention unit, which allows the use of a weaker single-head attention with minimal quality loss. We then propose a linear approximation method complementary to this new layer, which is accelerator-friendly and highly competitive in quality. The resulting model, named FLASH, matches the perplexity of improved Transformers over both short (512) and long (8K) context lengths, achieving training speedups of up to 4.9× on Wiki-40B and 12.1× on PG-19 for auto-regressive language modeling, and 4.8× on C4 for masked language modeling.</div></details></td>
        <td>PPL</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>50</td>
        <td>LayoutLM</td>
        <td><a href="https://paperswithcode.com/paper/layoutlm-pre-training-of-text-and-layout-for">LayoutLM: Pre-training of Text and Layout for Document Image Understanding</a></td>
        <td><details><summary>Abstract</summary><div>Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread use of pre-training models for NLP applications, they almost exclusively focus on text-level manipulation, while neglecting layout and style information that is vital for document image understanding. In this paper, we propose the \textbf{LayoutLM} to jointly model interactions between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. Furthermore, we also leverage image features to incorporate words' visual information into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training. It achieves new state-of-the-art results in several downstream tasks, including form understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models are publicly available at \url{https://aka.ms/layoutlm}.</div></details></td>
        <td>F1</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>51</td>
        <td>LayoutLMv2</td>
        <td><a href="https://paperswithcode.com/paper/layoutlmv2-multi-modal-pre-training-for">LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding</a></td>
        <td><details><summary>Abstract</summary><div>Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. We propose LayoutLMv2 architecture with new pre-training tasks to model the interaction among text, layout, and image in a single multi-modal framework. Specifically, with a two-stream multi-modal Transformer encoder, LayoutLMv2 uses not only the existing masked visual-language modeling task but also the new text-image alignment and text-image matching tasks, which make it better capture the cross-modality interaction in the pre-training stage. Meanwhile, it also integrates a spatial-aware self-attention mechanism into the Transformer architecture so that the model can fully understand the relative positional relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms LayoutLM by a large margin and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 → 0.8420), CORD (0.9493 → 0.9601), SROIE (0.9524 → 0.9781), Kleister-NDA (0.8340 → 0.8520), RVL-CDIP (0.9443 → 0.9564), and DocVQA (0.7295 → 0.8672). We made our model and code publicly available at \url{https://aka.ms/layoutlmv2}.</div></details></td>
        <td>F1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>52</td>
        <td>mBART</td>
        <td><a href="https://paperswithcode.com/method/mbart"> Multilingual Denoising Pre-training for Neural Machine Translation</a></td>
        <td><details><summary>Abstract</summary><div>mBART is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. The input texts are noised by masking phrases and permuting sentences, and a single Transformer model is learned to recover the texts. Different from other pre-training approaches for machine translation, mBART pre-trains a complete autoregressive Seq2Seq model. mBART is trained once for all languages, providing a set of parameters that can be fine-tuned for any of the language pairs in both supervised and unsupervised settings, without any task-specific or language-specific modifications or initialization schemes.</div></details></td>
        <td>BLEU</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>53</td>
        <td>Megatron-LM</td>
        <td><a href="https://paperswithcode.com/paper/megatron-lm-training-multi-billion-parameter">Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism</a></td>
        <td><details><summary>Abstract</summary><div></div></details></td>
        <td>F1</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>54</td>
        <td>MobileBERT</td>
        <td><a href="https://paperswithcode.com/method/mobilebert"> MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices</a></td>
        <td><details><summary>Abstract</summary><div>MobileBERT is a type of inverted-bottleneck BERT that compresses and accelerates the popular BERT model. MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. It is trained by layer-to-layer imitating the inverted bottleneck BERT.</div></details></td>
        <td>F1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>55</td>
        <td>MPNet</td>
        <td><a href="https://paperswithcode.com/method/mpnet">MPNet: Masked and Permuted Pre-training for Language Understanding</a></td>
        <td><details><summary>Abstract</summary><div>MPNet is a pre-training method for language models that combines masked language modeling (MLM) and permuted language modeling (PLM) in one view. It takes the dependency among the predicted tokens into consideration through permuted language modeling and thus avoids the issue of BERT. On the other hand, it takes position information of all tokens as input to make the model see the position information of all the tokens and thus alleviates the position discrepancy of XLNet. Under its training objective, MPNet conditions on the tokens preceding the current predicted token rather than only the non-predicted tokens as in MLM; compared with PLM, MPNet takes more information (i.e., the mask symbols and their positions) as inputs. Although the objective seems simple, it is challenging to implement the model efficiently. For details, see the paper.</div></details></td>
        <td>F1</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>56</td>
        <td>NEZHA</td>
        <td><a href="https://paperswithcode.com/paper/nezha-neural-contextualized-representation">NEZHA: Neural Contextualized Representation for Chinese Language Understanding</a></td>
        <td><details><summary>Abstract</summary><div>The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora. In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks. The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy, Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti) and natural language inference (XNLI).</div></details></td>
        <td>F1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>57</td>
        <td>OPT</td>
        <td><a href="https://paperswithcode.com/paper/opt-open-pre-trained-transformer-language">OPT: Open Pre-trained Transformer Language Models</a></td>
        <td><details><summary>Abstract</summary><div>Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.</div></details></td>
        <td>PPL</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>58</td>
        <td>PEGASUS</td>
        <td><a href="https://paperswithcode.com/paper/pegasus-pre-training-with-extracted-gap">PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization</a></td>
        <td><details><summary>Abstract</summary><div>Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally we validated our results using human evaluation and show that our model summaries achieve human performance on multiple datasets.</div></details></td>
        <td>Rouge-1</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>59</td>
        <td>SqueezeBERT</td>
        <td><a href="https://paperswithcode.com/method/squeezebert">SqueezeBERT: What can computer vision teach NLP about efficient neural networks?</a></td>
        <td><details><summary>Abstract</summary><div>SqueezeBERT is an efficient architectural variant of BERT for natural language processing that uses grouped convolutions. It is much like BERT-base, but with positional feedforward connection layers implemented as convolutions, and grouped convolution for many of the layers.</div></details></td>
        <td>F1</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleNLP">快速开始</a></td>
        </td>
    </tr>
</table>
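
上表中列出的 PaddleNLP 预训练模型,大多可以通过 `paddlenlp.transformers` 提供的 Auto 接口直接加载并推理。下面给出一个最小示意(假设已安装 paddlenlp;示例中的模型名 `ernie-3.0-medium-zh` 仅作演示,实际可用的模型名称与返回值结构请以所用 PaddleNLP 版本的文档为准):

```python
import paddle
from paddlenlp.transformers import AutoModel, AutoTokenizer

# 加载分词器与预训练模型(模型名仅为示例)
tokenizer = AutoTokenizer.from_pretrained("ernie-3.0-medium-zh")
model = AutoModel.from_pretrained("ernie-3.0-medium-zh")

# 对一句中文文本编码,并转换为 paddle Tensor
encoded = tokenizer("飞桨自然语言处理模型库")
input_ids = paddle.to_tensor([encoded["input_ids"]])
token_type_ids = paddle.to_tensor([encoded["token_type_ids"]])

# 前向计算,得到逐 token 的上下文表示与句向量
sequence_output, pooled_output = model(input_ids, token_type_ids=token_type_ids)
print(sequence_output.shape)  # [batch_size, seq_len, hidden_size]
```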

### PaddleSpeech
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>conformer offline/online</td>
        <td><a href="https://paperswithcode.com/paper/unified-streaming-and-non-streaming-two-pass">Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recognition</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we present a novel two-pass approach to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model. Our model adopts the hybrid CTC/attention architecture, in which the conformer layers in the encoder are modified. We propose a dynamic chunk-based attention strategy to allow arbitrary right context length. At inference time, the CTC decoder generates n-best hypotheses in a streaming way. The inference latency could be easily controlled by only changing the chunk size. The CTC hypotheses are then rescored by the attention decoder to get the final result. This efficient rescoring process causes very little sentence-level latency. Our experiments on the open 170-hour AISHELL-1 dataset show that, the proposed method can unify the streaming and non-streaming model simply and efficiently. On the AISHELL-1 test set, our unified model achieves 5.60% relative character error rate (CER) reduction in non-streaming ASR compared to a standard non-streaming transformer. The same model achieves 5.42% CER with 640ms latency in a streaming ASR system</div></details></td>
        <td>aishell/Conformer/cer 0.0547(offline) 0.0594(online)</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>transformer offline/online</td>
        <td><a href="https://paperswithcode.com/paper/unified-streaming-and-non-streaming-two-pass">Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recognition</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we present a novel two-pass approach to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model. Our model adopts the hybrid CTC/attention architecture, in which the conformer layers in the encoder are modified. We propose a dynamic chunk-based attention strategy to allow arbitrary right context length. At inference time, the CTC decoder generates n-best hypotheses in a streaming way. The inference latency could be easily controlled by only changing the chunk size. The CTC hypotheses are then rescored by the attention decoder to get the final result. This efficient rescoring process causes very little sentence-level latency. Our experiments on the open 170-hour AISHELL-1 dataset show that, the proposed method can unify the streaming and non-streaming model simply and efficiently. On the AISHELL-1 test set, our unified model achieves 5.60% relative character error rate (CER) reduction in non-streaming ASR compared to a standard non-streaming transformer. The same model achieves 5.42% CER with 640ms latency in a streaming ASR system</div></details></td>
        <td>aishell/Transformer/cer</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>deepspeech2 offline/online</td>
        <td><a href="https://paperswithcode.com/paper/deep-speech-2-end-to-end-speech-recognition">Deep Speech 2: End-to-End Speech Recognition in English and Mandarin</a></td>
        <td><details><summary>Abstract</summary><div>We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.</div></details></td>
        <td>aishell/DeepSpeech2/cer 0.064(offline) 0.080(online)</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>fastspeech2/fastpitch</td>
        <td><a href="https://paperswithcode.com/paper/fastspeech-2-fast-and-high-quality-end-to-end">FastSpeech 2: Fast and High-Quality End-to-End Text to Speech</a></td>
        <td><details><summary>Abstract</summary><div>Non-autoregressive text to speech (TTS) models such as FastSpeech (Ren et al.,2019) can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more informationas input) and knowledge distillation (to simplify the data distribution in output), which can ease the one-to-many mapping problem (i.e., multiple speechvariations correspond to the same text) in TTS. However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated andtime-consuming, 2) the duration extracted from the teacher model is not accurate enough, and the target mel-spectrograms distilled from teacher model suffer from information loss due to data simplification, both of which limit thevoice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTSby 1) directly training the model with ground-truth target instead of the simplified output from teacher, and 2) introducing more variation information of speech(e.g., pitch, energy and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch and energy from speech waveform and directlytake them as conditional inputs in training and use predicted values in inference.We further design FastSpeech 2s, which is the first attempt to directly generatespeech waveform from text in parallel, enjoying the benefit of fully end-to-endinference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inferencespeed; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available athttps://speechresearch.github.io/fastspeech2/.</div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>speedyspeech</td>
        <td><a href="https://paperswithcode.com/paper/speedyspeech-efficient-neural-speech">SpeedySpeech: Efficient Neural Speech Synthesis</a></td>
        <td><details><summary>Abstract</summary><div>While recent neural sequence-to-sequence models have greatly improved the quality of speech synthesis, there has not been a system capable of fast training, fast inference and high-quality audio synthesis at the same time. We propose a student-teacher network capable of high-quality faster-than-real-time spectrogram synthesis, with low requirements on computational resources and fast training time. We show that self-attention layers are not necessary for generation of high quality audio. We utilize simple convolutional blocks with residual connections in both student and teacher networks and use only a single attention layer in the teacher model. Coupled with a MelGAN vocoder, our model's voice quality was rated significantly higher than Tacotron 2. Our model can be efficiently trained on a single GPU and can run in real time even on a CPU. We provide both our source code and audio samples in our GitHub repository. </div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>transformer_tts</td>
        <td><a href="https://paperswithcode.com/paper/neural-speech-synthesis-with-transformer">Neural Speech Synthesis with Transformer Network</a></td>
        <td><details><summary>Abstract</summary><div>Although end-to-end neural text-to-speech (TTS) methods (such as Tacotron2) are proposed and achieve state-of-the- art performance, they still suffer from two problems: 1) low efficiency during training and inference; 2) hard to model long dependency using current recurrent neural networks (RNNs). Inspired by the success of Transformer network in neural machine translation (NMT), in this paper, we intro- duce and adapt the multi-head attention mechanism to replace the RNN structures and also the original attention mecha- nism in Tacotron2. With the help of multi-head self-attention, the hidden states in the encoder and decoder are constructed in parallel, which improves training efficiency. Meanwhile, any two inputs at different times are connected directly by a self-attention mechanism, which solves the long range de- pendency problem effectively. Using phoneme sequences as input, our Transformer TTS network generates mel spec- trograms, followed by a WaveNet vocoder to output the fi- nal audio results. Experiments are conducted to test the ef- ficiency and performance of our new network. For the effi- ciency, our Transformer TTS network can speed up the train- ing about 4.25 times faster compared with Tacotron2. For the performance, rigorous human tests show that our pro- posed model achieves state-of-the-art performance (outper- forms Tacotron2 with a gap of 0.048) and is very close to human quality (4.39 vs 4.44 in MOS).</div></details></td>
        <td>LJSpeech</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>PP-Waveflow</td>
        <td><a href="https://paperswithcode.com/paper/waveflow-a-compact-flow-based-model-for-raw-1">WaveFlow: A Compact Flow-based Model for Raw Audio</a></td>
        <td><details><summary>Abstract</summary><div>In this work, we propose WaveFlow, a small-footprint generative flow for raw audio, which is directly trained with maximum likelihood. It handles the long-range structure of 1-D waveform with a dilated 2-D convolutional architecture, while modeling the local variations using expressive autoregressive functions. WaveFlow provides a unified view of likelihood-based models for 1-D data, including WaveNet and WaveGlow as special cases. It generates high-fidelity speech as WaveNet, while synthesizing several orders of magnitude faster as it only requires a few sequential steps to generate very long waveforms with hundreds of thousands of time-steps. Furthermore, it can significantly reduce the likelihood gap that has existed between autoregressive models and flow-based models for efficient synthesis. Finally, our small-footprint WaveFlow has only 5.91M parameters, which is 15× smaller than WaveGlow. It can generate 22.05 kHz high-fidelity audio 42.6× faster than real-time (at a rate of 939.3 kHz) on a V100 GPU without engineered inference kernels. </div></details></td>
        <td>LJSpeech</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>Parallel WaveGAN</td>
        <td><a href="https://paperswithcode.com/paper/parallel-wavegan-a-fast-waveform-generation">PARALLEL WAVEGAN: A FAST WAVEFORM GENERATION MODEL BASED ON GENERATIVE ADVERSARIAL NETWORKS WITH MULTI-RESOLUTION SPECTROGRAM</a></td>
        <td><details><summary>Abstract</summary><div>We propose Parallel WaveGAN, a distillation-free, fast, and small- footprint waveform generation method using a generative adver- sarial network. In the proposed method, a non-autoregressive WaveNet is trained by jointly optimizing multi-resolution spectro- gram and adversarial loss functions, which can effectively capture the time-frequency distribution of the realistic speech waveform. As our method does not require density distillation used in the conventional teacher-student framework, the entire model can be easily trained. Furthermore, our model is able to generate high- fidelity speech even with its compact architecture. In particular, the proposed Parallel WaveGAN has only 1.44 M parameters and can generate 24 kHz speech waveform 28.68 times faster than real- time on a single GPU environment. Perceptual listening test results verify that our proposed method achieves 4.16 mean opinion score within a Transformer-based text-to-speech framework, which is comparative to the best distillation-based Parallel WaveNet sys- tem.</div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>MelGAN</td>
        <td><a href="https://paperswithcode.com/method/melgan">MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis</a></td>
        <td><details><summary>Abstract</summary><div>MelGAN is a non-autoregressive feed-forward convolutional architecture to perform audio waveform generation in a GAN setup. The architecture is a fully convolutional feed-forward network with mel-spectrogram  as input and raw waveform  as output. Since the mel-spectrogram is at a 256× lower temporal resolution, the authors use a stack of transposed convolutional layers to upsample the input sequence. Each transposed convolutional layer is followed by a stack of residual blocks with dilated convolutions. Unlike traditional GANs, the MelGAN generator does not use a global noise vector as input.</div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>MultiBand MelGAN</td>
        <td><a href="https://paperswithcode.com/method/multi-band-melgan">Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech</a></td>
        <td><details><summary>Abstract</summary><div>Multi-band MelGAN, or MB-MelGAN, is a waveform generation model focusing on high-quality text-to-speech. It improves the original MelGAN in several ways. First, it increases the receptive field of the generator, which is proven to be beneficial to speech generation. Second, it substitutes the feature matching loss with the multi-resolution STFT loss to better measure the difference between fake and real speech. Lastly, MelGAN is extended with multi-band processing: the generator takes mel-spectrograms as input and produces sub-band signals which are subsequently summed back to full-band signals as discriminator input.</div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>WaveRNN</td>
        <td><a href="https://paperswithcode.com/method/wavernn">Efficient Neural Audio Synthesis</a></td>
        <td><details><summary>Abstract</summary><div>Sequential models achieve state-of-the-art results in audio, visual and textual domains with respect to both estimating the data distribution and generating high-quality samples. Efficient sampling for this class of models has however remained an elusive problem. With a focus on text-to-speech synthesis, we describe a set of general techniques for reducing sampling time while maintaining high output quality. We first describe a single-layer recurrent neural network, the WaveRNN, with a dual softmax layer that matches the quality of the state-of-the-art WaveNet model. The compact form of the network makes it possible to generate 24kHz 16-bit audio 4x faster than real time on a GPU. Second, we apply a weight pruning technique to reduce the number of weights in the WaveRNN. We find that, for a constant number of parameters, large sparse networks perform better than small dense networks and this relationship holds for sparsity levels beyond 96%. The small number of weights in a Sparse WaveRNN makes it possible to sample high-fidelity audio on a mobile CPU in real time. Finally, we propose a new generation scheme based on subscaling that folds a long sequence into a batch of shorter sequences and allows one to generate multiple samples at once. The Subscale WaveRNN produces 16 samples per step without loss of quality and offers an orthogonal method for increasing sampling efficiency.</div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>Style MelGAN</td>
        <td><a href="https://paperswithcode.com/paper/stylemelgan-an-efficient-high-fidelity">StyleMelGAN: An Efficient High-Fidelity Adversarial Vocoder with Temporal Adaptive Normalization</a></td>
        <td><details><summary>Abstract</summary><div>In recent years, neural vocoders have surpassed classical speech generation approaches in naturalness and perceptual quality of the synthesized speech. Computationally heavy models like WaveNet and WaveGlow achieve best results, while lightweight GAN models, e.g. MelGAN and Parallel WaveGAN, remain inferior in terms of perceptual quality. We therefore propose StyleMelGAN, a lightweight neural vocoder allowing synthesis of high-fidelity speech with low computational complexity. StyleMelGAN employs temporal adaptive normalization to style a low-dimensional noise vector with the acoustic features of the target speech. For efficient training, multiple random-window discriminators adversarially evaluate the speech signal analyzed by a filter bank, with regularization provided by a multi-scale spectral reconstruction loss. The highly parallelizable speech generation is several times faster than real-time on CPUs and GPUs. MUSHRA and P.800 listening tests show that StyleMelGAN outperforms prior neural vocoders in copy-synthesis and Text-to-Speech scenarios.</div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>hifigan</td>
        <td><a href="https://paperswithcode.com/paper/hifi-gan-generative-adversarial-networks-for">HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis</a></td>
        <td><details><summary>Abstract</summary><div>Several recent work on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms. Although such methods improve the sampling efficiency and memory usage, their sample quality has not yet reached that of autoregressive and flow-based generative models. In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis. As speech audio consists of sinusoidal signals with various periods, we demonstrate that modeling periodic patterns of an audio is crucial for enhancing sample quality. A subjective human evaluation (mean opinion score, MOS) of a single speaker dataset indicates that our proposed method demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen speakers and end-to-end speech synthesis. Finally, a small footprint version of HiFi-GAN generates samples 13.4 times faster than real-time on CPU with comparable quality to an autoregressive counterpart.</div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>ecapa-tdnn</td>
        <td><a href="https://paperswithcode.com/paper/ecapa-tdnn-emphasized-channel-attention">ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification</a></td>
        <td><details><summary>Abstract</summary><div>Current speaker verification techniques rely on a neural network to extract speaker representations. The successful x-vector architecture is a Time Delay Neural Network (TDNN) that applies statistics pooling to project variable-length utterances into fixed-length speaker characterizing embeddings. In this paper, we propose multiple enhancements to this architecture based on recent trends in the related fields of face verification and computer vision. Firstly, the initial frame layers can be restructured into 1-dimensional Res2Net modules with impactful skip connections. Similarly to SE-ResNet, we introduce Squeeze-and-Excitation blocks in these modules to explicitly model channel interdependencies. The SE block expands the temporal context of the frame layer by rescaling the channels according to global properties of the recording. Secondly, neural networks are known to learn hierarchical features, with each layer operating on a different level of complexity. To leverage this complementary information, we aggregate and propagate features of different hierarchical levels. Finally, we improve the statistics pooling module with channel-dependent frame attention. This enables the network to focus on different subsets of frames during each of the channel's statistics estimation. The proposed ECAPA-TDNN architecture significantly outperforms state-of-the-art TDNN based systems on the VoxCeleb test sets and the 2019 VoxCeleb Speaker Recognition Challenge.</div></details></td>
        <td>VoxCeleb12</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>MDTC</td>
        <td><a href="https://paperswithcode.com/paper/the-npu-system-for-the-2020-personalized">The NPU System for the 2020 Personalized Voice Trigger Challenge</a></td>
        <td><details><summary>Abstract</summary><div>This paper describes the system developed by the NPU team for the 2020 personalized voice trigger challenge. Our submitted system consists of two independently trained subsystems: a small footprint keyword spotting (KWS) system and a speaker verification (SV) system. For the KWS system, a multi-scale dilated temporal convolutional (MDTC) network is proposed to detect wake-up word (WuW). For SV system, Write something here. The KWS predicts posterior probabilities of whether an audio utterance contains WuW and estimates the location of WuW at the same time. When the posterior probability ofWuW reaches a predefined threshold, the identity information of triggered segment is determined by the SV system. On evaluation dataset, our submitted system obtains detection costs of 0.081and 0.091 in close talking and far-field tasks, respectively.</div></details></td>
        <td>hey_snips</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>GE2E</td>
        <td><a href="https://paperswithcode.com/paper/generalized-end-to-end-loss-for-speaker">Generalized End-to-End Loss for Speaker Verification</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we propose a new loss function called generalized end-to-end (GE2E) loss, which makes the training of speaker verification models more efficient than our previous tuple-based end-to-end (TE2E) loss function. Unlike TE2E, the GE2E loss function updates the network in a way that emphasizes examples that are difficult to verify at each step of the training process. Additionally, the GE2E loss does not require an initial stage of example selection. With these properties, our model with the new loss function decreases speaker verification EER by more than 10%, while reducing the training time by 60% at the same time. We also introduce the MultiReader technique, which allows us to do domain adaptation - training a more accurate model that supports multiple keywords (i.e. "OK Google" and "Hey Google") as well as multiple dialects. </div></details></td>
        <td>Librispeech-other-500</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>VoiceCloning</td>
        <td><a href="https://paperswithcode.com/paper/transfer-learning-from-speaker-verification">Transfer Learning from Speaker Verification toMultispeaker Text-To-Speech Synthesis</a></td>
        <td><details><summary>Abstract</summary><div>We describe a neural network-based system for text-to-speech (TTS) synthesis thatis able to generate speech audio in the voice of different speakers, including thoseunseen during training. Our system consists of three independently trained components: (1) a speaker encoder network, trained on a speaker verification task using anindependent dataset of noisy speech without transcripts from thousands of speakers,to generate a fixed-dimensional embedding vector from only seconds of referencespeech from a target speaker; (2) a sequence-to-sequence synthesis network basedon Tacotron 2 that generates a mel spectrogram from text, conditioned on thespeaker embedding; (3) an auto-regressive WaveNet-based vocoder network thatconverts the mel spectrogram into time domain waveform samples. We demonstratethat the proposed model is able to transfer the knowledge of speaker variabilitylearned by the discriminatively-trained speaker encoder to the multispeaker TTStask, and is able to synthesize natural speech from speakers unseen during training.We quantify the importance of training the speaker encoder on a large and diversespeaker set in order to obtain the best generalization performance. Finally, we showthat randomly sampled speaker embeddings can be used to synthesize speech inthe voice of novel speakers dissimilar from those used in training, indicating thatthe model has learned a high quality speaker representation.</div></details></td>
        <td>AISHELL-3</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>tacotron2</td>
        <td><a href="https://paperswithcode.com/paper/neural-speech-synthesis-with-transformer">Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions</a></td>
        <td><details><summary>Abstract</summary><div>This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize timedomain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the input to WaveNet instead of linguistic, duration, and F0 features. We further demonstrate that using a compact acoustic intermediate representation enables significant simplification of the WaveNet architecture.</div></details></td>
        <td>LJSpeech    </td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>hifigan</td>
        <td><a href="https://paperswithcode.com/paper/hifi-gan-generative-adversarial-networks-for">HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis</a></td>
        <td><details><summary>Abstract</summary><div>Several recent work on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms. Although such methods improve the sampling efficiency and memory usage, their sample quality has not yet reached that of autoregressive and flow-based generative models. In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis. As speech audio consists of sinusoidal signals with various periods, we demonstrate that modeling periodic patterns of an audio is crucial for enhancing sample quality. A subjective human evaluation (mean opinion score, MOS) of a single speaker dataset indicates that our proposed method demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen speakers and end-to-end speech synthesis. Finally, a small footprint version of HiFi-GAN generates samples 13.4 times faster than real-time on CPU with comparable quality to an autoregressive counterpart.</div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>VITS</td>
        <td><a href="https://cs.paperswithcode.com/paper/conditional-variational-autoencoder-with">VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech</a></td>
        <td><details><summary>Abstract</summary><div>Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.</div></details></td>
        <td>CSMSC</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/csmsc/vits">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>21</td>
        <td>ERNIE-SAT</td>
        <td><a href="https://paperswithcode.com/paper/ernie-sat-speech-and-text-joint-pretraining-1">ERNIE-SAT: Speech and Text Joint Pretraining for Cross-Lingual Multi-Speaker Text-to-Speech</a></td>
        <td><details><summary>Abstract</summary><div>Speech representation learning has improved both speech understanding and speech synthesis tasks for single language. However, its ability in cross-lingual scenarios has not been explored. In this paper, we extend the pretraining method for cross-lingual multi-speaker speech synthesis tasks, including cross-lingual multi-speaker voice cloning and cross-lingual multi-speaker speech editing. We propose a speech-text joint pretraining framework, where we randomly mask the spectrogram and the phonemes given a speech example and its transcription. By learning to reconstruct the masked parts of the input in different languages, our model shows great improvements over speaker-embedding-based multi-speaker TTS methods. Moreover, our framework is end-to-end for both the training and the inference without any finetuning effort. In cross-lingual multi-speaker voice cloning and cross-lingual multi-speaker speech editing tasks, our experiments show that our model outperforms speaker-embedding-based multi-speaker TTS methods. The code and model are publicly available at PaddleSpeech.</div></details></td>
        <td>AISHELL-3 VCTK</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/aishell3_vctk/ernie_sat">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>22</td>
        <td>Whisper</td>
        <td><a href="https://cdn.openai.com/papers/whisper.pdf">Robust Speech Recognition via Large-Scale Weak Supervision</a></td>
        <td><details><summary>Abstract</summary><div>We study the capabilities of speech processingsystems trained simply to predict large amounts oftranscripts of audio on the internet. When scaledto 680,000 hours of multilingual and multitasksupervision, the resulting models generalize wellto standard benchmarks and are often competitivewith prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the modelsapproach their accuracy and robustness. We arereleasing models and inference code to serve asa foundation for further work on robust speechprocessing.</div></details></td>
        <td>LibriSpeech test-clean WER: 2.7%;目前不支持训练</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>23</td>
        <td>wav2vec2</td>
        <td><a href="https://paperswithcode.com/paper/wav2vec-2-0-a-framework-for-self-supervised">wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations</a></td>
        <td><details><summary>Abstract</summary><div>We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.</div></details></td>
        <td>训练数据:LibriSpeech train,测试:LibriSpeech test-clean,WER 1.89%</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleSpeech/blob/develop/examples/librispeech/asr3/README.md">快速开始</a></td>
        </td>
    </tr>
</table>
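
上表中的语音识别(ASR)与语音合成(TTS)模型,在 PaddleSpeech 中大多提供了开箱即用的 Python 推理接口。下面是一个基于 `paddlespeech.cli` 的最小示意(假设已安装 paddlespeech;`zh.wav` 为示例音频文件名,接口与可选参数以 PaddleSpeech 文档为准):

```python
from paddlespeech.cli.asr.infer import ASRExecutor
from paddlespeech.cli.tts.infer import TTSExecutor

# 语音识别:将一段 16kHz 中文语音转写为文本(zh.wav 仅为示例文件名)
asr = ASRExecutor()
text = asr(audio_file="zh.wav")
print(text)

# 语音合成:将文本合成为波形并写入 output.wav
tts = TTSExecutor()
tts(text="欢迎使用飞桨语音模型库", output="output.wav")
```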

### PaddleRec
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>DSSM</td>
        <td><a href="https://paperswithcode.com/paper/learning-deep-structured-semantic-models-for">Learning Deep Structured Semantic Models for Web Search using Clickthrough Data</a></td>
        <td><details><summary>Abstract</summary><div>Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper</div></details></td>
        <td>BQ</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>Match-Pyramid</td>
        <td><a href="https://paperswithcode.com/paper/text-matching-as-image-recognition">Text Matching as Image Recognition</a></td>
        <td><details><summary>Abstract</summary><div>Matching two texts is a fundamental problem in many natural language processing tasks. An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score. Inspired by the success of convolutional neural network in image recognition, where neurons can capture many complicated patterns based on the extracted elementary visual patterns such as oriented edges and corners, we propose to model text matching as the problem of image recognition. Firstly, a matching matrix whose entries represent the similarities between words is constructed and viewed as an image. Then a convolutional neural network is utilized to capture rich matching patterns in a layer-by-layer way. We show that by resembling the compositional hierarchies of patterns in image recognition, our model can successfully identify salient signals such as n-gram and n-term matchings. Experimental results demonstrate its superiority against the baselines.</div></details></td>
        <td>Letor07</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>MultiView-Simnet</td>
        <td><a href="https://paperswithcode.com/paper/a-multi-view-deep-learning-approach-for-cross">A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems</a></td>
        <td><details><summary>Abstract</summary><div>Recent online services rely heavily on automatic personalization to recommend relevant content to a large number of users. This requires systems to scale promptly to accommodate the stream of new users visiting the online services for the first time. In this work, we propose a content-based recommendation system to address both the recommendation quality and the system scalability. We propose to use a rich feature set to represent users, according to their web browsing history and search queries. We use a Deep Learning approach to map users and items to a latent space where the similarity between users and their preferred items is maximized. We extend the model to jointly learn from features of items from different domains and user features by introducing a multi-view Deep Learning model. We show how to make this rich-feature based user representation scalable by reducing the dimension of the inputs and the amount of training data. The rich user feature representation allows the model to learn relevant user behavior patterns and give useful recommendations for users who do not have any interaction with the service, given that they have adequate search and browsing history. The combination of different domains into a single model for learning helps improve the recommendation quality across all the domains, as well as having a more compact and a semantically richer user latent feature vector. We experiment with our approach on three real-world recommendation systems acquired from different sources of Microsoft products: Windows Apps recommendation, News recommendation, and Movie/TV recommendation. Results indicate that our approach is significantly better than the state-of-the-art algorithms (up to 49% enhancement on existing users and 115% enhancement on new users). In addition, experiments on a publicly open data set also indicate the superiority of our method in comparison with transitional generative topic models, for modeling cross-domain recommender systems. Scalability analysis show that our multi-view DNN model can easily scale to encompass millions of users and billions of item entries. Experimental results also confirm that combining features from all domains produces much better performance than building separate models for each domain.</div></details></td>
        <td>BQ</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>DeepWalk</td>
        <td><a href="https://paperswithcode.com/paper/deepwalk-online-learning-of-social">DeepWalk: Online Learning of Social Representations</a></td>
        <td><details><summary>Abstract</summary><div>We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk’s latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk’s representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk’s representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection</div></details></td>
        <td>BlogCatalog</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>Mind</td>
        <td><a href="https://paperswithcode.com/paper/multi-interest-network-with-dynamic-routing">Multi-Interest Network with Dynamic Routing for Recommendation at Tmall</a></td>
        <td><details><summary>Abstract</summary><div>Industrial recommender systems usually consist of the matching stage and the ranking stage, in order to handle the billion-scale of users and items. The matching stage retrieves candidate items relevant to user interests, while the ranking stage sorts candidate items by user interests. Thus, the most critical ability is to model and represent user interests for either stage. Most of the existing deep learning-based models represent one user as a single vector which is insufficient to capture the varying nature of user’s interests. In this paper, we approach this problem from a different view, to represent one user with multiple vectors encoding the different aspects of the user’s interests. We propose the Multi-Interest Network with Dynamic routing (MIND) for dealing with user’s diverse interests in the matching stage. Specifically, we design a multi-interest extractor layer based on capsule routing mechanism, which is applicable for clustering historical behaviors and extracting diverse interests. Furthermore, we develop a technique named label-aware attention to help learn a user representation with multiple vectors. Through extensive experiments on several public benchmarks and one largescale industrial dataset from Tmall, we demonstrate that MIND can achieve superior performance than state-of-the-art methods for recommendation. Currently, MIND has been deployed for handling major online traffic at the homepage on Mobile Tmall App.</div></details></td>
        <td>AmazonBook</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>NCF</td>
        <td><a href="https://paperswithcode.com/paper/neural-collaborative-filtering">Neural Collaborative Filtering</a></td>
        <td><details><summary>Abstract</summary><div>In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation — collaborative filtering — on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, they primarily used it to model auxiliary information, such as textual descriptions of items and acoustic features of musics. When it comes to model the key factor in collaborative filtering — the interaction between user and item features, they still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural networkbased Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user–item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance</div></details></td>
        <td>movielens</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>Word2vec</td>
        <td><a href="https://paperswithcode.com/paper/distributed-representations-of-words-and-1">Distributed Representations of Words and Phrases and their Compositionality</a></td>
        <td><details><summary>Abstract</summary><div>The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of “Canada” and “Air” cannot be easily combined to obtain “Air Canada”. Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.</div></details></td>
        <td>one_billion</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>Fasttext</td>
        <td><a href="https://paperswithcode.com/paper/bag-of-tricks-for-efficient-text">Bag of Tricks for Efficient Text Classification</a></td>
        <td><details><summary>Abstract</summary><div>This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.</div></details></td>
        <td>AG News</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>GraphNeuralNetwork</td>
        <td><a href="https://paperswithcode.com/paper/session-based-recommendation-with-graph">Session-based Recommendation with Graph Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>The problem of session-based recommendation aims to predict user actions based on anonymous sessions. Previous methods model a session as a sequence and estimate user representations besides item representations to make recommendations. Though achieved promising results, they are insufficient to obtain accurate user vectors in sessions and neglect complex transitions of items. To obtain accurate item embedding and take complex transitions of items into account, we propose a novel method, i.e. Session-based Recommendation with Graph Neural Networks, SR-GNN for brevity. In the proposed method, session sequences are modeled as graph-structured data. Based on the session graph, GNN can capture complex transitions of items, which are difficult to be revealed by previous conventional sequential methods. Each session is then represented as the composition of the global preference and the current interest of that session using an attention network. Extensive experiments conducted on two real datasets show that SR-GNN evidently outperforms the state-of-the-art session-based recommendation methods consistently.</div></details></td>
        <td>DIGINETICA and Yoochoose</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>GRU4Rec</td>
        <td><a href="https://paperswithcode.com/paper/session-based-recommendations-with-recurrent">Session-based Recommendations with Recurrent Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.</div></details></td>
        <td>RSC15</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>RALM</td>
        <td><a href="https://paperswithcode.com/paper/real-time-attention-based-look-alike-model">Real-time Attention Based Look-alike Model for Recommender System</a></td>
        <td><details><summary>Abstract</summary><div>Recently, deep learning models play more and more important roles in contents recommender systems. However, although the performance of recommendations is greatly improved, the "Matthew effect" becomes increasingly evident. While the head contents get more and more popular, many competitive long-tail contents are difficult to achieve timely exposure because of lacking behavior features. This issue has badly impacted the quality and diversity of recommendations. To solve this problem, look-alike algorithm is a good choice to extend audience for high quality long-tail contents. But the traditional look-alike models which widely used in online advertising are not suitable for recommender systems because of the strict requirement of both real-time and effectiveness. This paper introduces a real-time attention based look-alike model (RALM) for recommender systems, which tackles the challenge of conflict between real-time and effectiveness. RALM realizes real-time lookalike audience extension benefiting from seeds-to-user similarity prediction and improves the effectiveness through optimizing user representation learning and look-alike learning modeling. For user representation learning, we propose a novel neural network structure named attention merge layer to replace the concatenation layer, which significantly improves the expressive ability of multifields feature learning. On the other hand, considering the various members of seeds, we design global attention unit and local attention unit to learn robust and adaptive seeds representation with respect to a certain target user. At last, we introduce seeds clustering mechanism which not only reduces the time complexity of attention units prediction but also minimizes the loss of seeds information at the same time. According to our experiments, RALM shows superior effectiveness and performance than popular lookalike models. RALM has been successfully deployed in "Top Stories" Recommender System of WeChat, leading to great improvement on diversity and quality of recommendations. As far as we know this is the first real-time look-alike model applied in recommender systems</div></details></td>
        <td>/</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>SSR</td>
        <td><a href="https://dl.acm.org/doi/10.1145/2911451.2914726">Multi-Rate Deep Learning for Temporal Recommendation</a></td>
        <td><details><summary>Abstract</summary><div>Modeling temporal behavior in recommendation systems is an important and challenging problem. Its challenges come from the fact that temporal modeling increases the cost of parameter estimation and inference, while requiring large amount of data to reliably learn the model with the additional time dimensions. Therefore, it is often difficult to model temporal behavior in large-scale real-world recommendation systems. In this work, we propose a novel deep neural network based architecture that models the combination of long-term static and short-term temporal user preferences to improve the recommendation performance. To train the model efficiently for large-scale applications, we propose a novel pre-train method to reduce the number of free parameters significantly. The resulted model is applied to a real-world data set from a commercial News recommendation system. We compare to a set of established baselines and the experimental results show that our method outperforms the state-of-the-art significantly.</div></details></td>
        <td>/</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>Youtube_dnn</td>
        <td><a href="https://paperswithcode.com/paper/deep-neural-networks-for-youtube">Deep Neural Networks for YouTube Recommendations</a></td>
        <td><details><summary>Abstract</summary><div>YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous userfacing impact.</div></details></td>
        <td>/</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>BST</td>
        <td><a href="https://paperswithcode.com/paper/behavior-sequence-transformer-for-e-commerce">Behavior Sequence Transformer for E-commerce Recommendation in Alibaba</a></td>
        <td><details><summary>Abstract</summary><div>Deep learning based methods have been widely used in industrial recommendation systems (RSs). Previous works adopt an Embedding&MLP paradigm: raw features are embedded into lowdimensional vectors, which are then fed on to MLP for final recommendations. However, most of these works just concatenate different features, ignoring the sequential nature of users’ behaviors. In this paper, we propose to use the powerful Transformer model to capture the sequential signals underlying users’ behavior sequences for recommendation in Alibaba. Experimental results demonstrate the superiority of the proposed model, which is then deployed online at Taobao and obtain significant improvements in online Click-Through-Rate (CTR) comparing to two baselines.</div></details></td>
        <td>Amazon</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>DCN</td>
        <td><a href="https://paperswithcode.com/paper/deep-cross-network-for-ad-click-predictions">Deep & Cross Network for Ad Click Predictions</a></td>
        <td><details><summary>Abstract</summary><div>Feature engineering has been the key to the success of many prediction models. However, the process is nontrivial and often requires manual feature engineering or exhaustive searching. DNNs are able to automatically learn feature interactions; however, they generate all the interactions implicitly, and are not necessarily efficient in learning all types of cross features. In this paper, we propose the Deep & Cross Network (DCN) which keeps the benefits of a DNN model, and beyond that, it introduces a novel cross network that is more efficient in learning certain bounded-degree feature interactions. In particular, DCN explicitly applies feature crossing at each layer, requires no manual feature engineering, and adds negligible extra complexity to the DNN model. Our experimental results have demonstrated its superiority over the state-of-art algorithms on the CTR prediction dataset and dense classification dataset, in terms of both model accuracy and memory usage.</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>DeepFM</td>
        <td><a href="https://paperswithcode.com/paper/deepfm-a-factorization-machine-based-neural">DeepFM: A Factorization-Machine based Neural Network for CTR Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and highorder feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its “wide” and “deep” parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>DMR</td>
        <td><a href="https://paperswithcode.com/paper/deep-match-to-rank-model-for-personalized">Deep Match to Rank Model for Personalized Click-Through Rate Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Deep Match to Rank Model for Personalized Click-Through Rate Prediction</div></details></td>
        <td>Ali_Display_Ad_Click</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>FFM</td>
        <td><a href="https://paperswithcode.com/paper/field-aware-factorization-machines-for-ctr">Field-aware Factorization Machines for CTR Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Click-through rate (CTR) prediction plays an important role in computational advertising. Models based on degree-2 polynomial mappings and factorization machines (FMs) are widely used for this task. Recently, a variant of FMs, field-aware factorization machines (FFMs), outperforms existing models in some world-wide CTR-prediction competitions. Based on our experiences in winning two of them, in this paper we establish FFMs as an effective method for classifying large sparse data including those from CTR prediction. First, we propose efficient implementations for training FFMs. Then we comprehensively analyze FFMs and compare this approach with competing models. Experiments show that FFMs are very useful for certain classification problems. Finally, we have released a package of FFMs for public use.</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>FM</td>
        <td><a href="https://www.csie.ntu.edu.tw/~b97053/paper/Rendle2010FM.pdf">Factorization machines</a></td>
        <td><details><summary>Abstract</summary><div>In this paper, we introduce Factorization Machines (FM) which are a new model class that combines the advantages of Support Vector Machines (SVM) with factorization models. Like SVMs, FMs are a general predictor working with any real valued feature vector. In contrast to SVMs, FMs model all interactions between variables using factorized parameters. Thus they are able to estimate interactions even in problems with huge sparsity (like recommender systems) where SVMs fail. We show that the model equation of FMs can be calculated in linear time and thus FMs can be optimized directly. So unlike nonlinear SVMs, a transformation in the dual form is not necessary and the model parameters can be estimated directly without the need of any support vector in the solution. We show the relationship to SVMs and the advantages of FMs for parameter estimation in sparse settings. On the other hand there are many different factorization models like matrix factorization, parallel factor analysis or specialized models like SVD++, PITF or FPMC. The drawback of these models is that they are not applicable for general prediction tasks but work only with special input data. Furthermore their model equations and optimization algorithms are derived individually for each task. We show that FMs can mimic these models just by specifying the input data (i.e. the feature vectors). This makes FMs easily applicable even for users without expert knowledge in factorization models. Index Terms—factorization machine; sparse data; tensor factorization; support vector machine</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>GateNet</td>
        <td><a href="https://paperswithcode.com/paper/gatenet-gating-enhanced-deep-network-for">GateNet: Gating-Enhanced Deep Network for Click-Through Rate Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Advertising and feed ranking are essential to many Internet companies such as Facebook. Among many real-world advertising and feed ranking systems, click through rate (CTR) prediction plays a central role. In recent years, many neural network based CTR models have been proposed and achieved success such as Factorization-Machine Supported Neural Networks, DeepFM and xDeepFM. Many of them contain two commonly used components: embedding layer and MLP hidden layers. On the other side, gating mechanism is also widely applied in many research fields such as computer vision(CV) and natural language processing(NLP). Some research has proved that gating mechanism improves the trainability of non-convex deep neural networks. Inspired by these observations, we propose a novel model named GateNet which introduces either the feature embedding gate or the hidden gate to the embedding layer or hidden layers of DNN CTR models, respectively. The feature embedding gate provides a learnable feature gating module to select salient latent information from the feature-level. The hidden gate helps the model to implicitly capture the high-order interaction more effectively. Extensive experiments conducted on three real-world datasets demonstrate its effectiveness to boost the performance of various state-of-the-art models such as FM, DeepFM and xDeepFM on all datasets.</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>21</td>
        <td>Naml</td>
        <td><a href="https://paperswithcode.com/paper/neural-news-recommendation-with-attentive">Neural News Recommendation with Attentive Multi-View Learning</a></td>
        <td><details><summary>Abstract</summary><div>Neural News Recommendation with Attentive Multi-View Learning</div></details></td>
        <td>Microsoft News Dataset</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>22</td>
        <td>Wide&Deep</td>
        <td><a href="https://paperswithcode.com/paper/wide-deep-learning-for-recommender-systems">Wide & Deep Learning for Recommender Systems</a></td>
        <td><details><summary>Abstract</summary><div>Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning—jointly trained wide linear models and deep neural networks—to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>23</td>
        <td>XDeepFM</td>
        <td><a href="https://paperswithcode.com/paper/xdeepfm-combining-explicit-and-implicit">xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems</a></td>
        <td><details><summary>Abstract</summary><div>Combinatorial features are essential for the success of many commercial models. Manually crafting these features usually comes with high cost due to the variety, volume and velocity of raw data in web-scale systems. Factorization based models, which measure interactions in terms of vector product, can learn patterns of combinatorial features automatically and generalize to unseen features as well. With the great success of deep neural networks (DNNs) in various fields, recently researchers have proposed several DNN-based factorization model to learn both low- and high-order feature interactions. Despite the powerful ability of learning an arbitrary function from data, plain DNNs generate feature interactions implicitly and at the bit-wise level. In this paper, we propose a novel Compressed Interaction Network (CIN), which aims to generate feature interactions in an explicit fashion and at the vector-wise level. We show that the CIN share some functionalities with convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We further combine a CIN and a classical DNN into one unified model, and named this new model eXtreme Deep Factorization Machine (xDeepFM). On one hand, the xDeepFM is able to learn certain bounded-degree feature interactions explicitly; on the other hand, it can learn arbitrary low- and high-order feature interactions implicitly. We conduct comprehensive experiments on three real-world datasets. Our results demonstrate that xDeepFM outperforms state-of-the-art models. We have released the source code of xDeepFM at https://github.com/Leavingseason/xDeepFM.</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>24</td>
        <td>AutoInt</td>
        <td><a href="https://paperswithcode.com/paper/autoint-automatic-feature-interaction">AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Click-through rate (CTR) prediction, which aims to predict the probability of a user clicking on an ad or an item, is critical to many online applications such as online advertising and recommender systems. The problem is very challenging since (1) the input features (e.g., the user id, user age, item id, item category) are usually sparse and high-dimensional, and (2) an effective prediction relies on highorder combinatorial features (a.k.a. cross features), which are very time-consuming to hand-craft by domain experts and are impossible to be enumerated. Therefore, there have been efforts in finding lowdimensional representations of the sparse and high-dimensional raw features and their meaningful combinations. In this paper, we propose an effective and efficient method called the AutoInt to automatically learn the high-order feature interactions of input features. Our proposed algorithm is very general, which can be applied to both numerical and categorical input features. Specifically, we map both the numerical and categorical features into the same low-dimensional space. Afterwards, a multihead self-attentive neural network with residual connections is proposed to explicitly model the feature interactions in the lowdimensional space. With different layers of the multi-head selfattentive neural networks, different orders of feature combinations of input features can be modeled. The whole model can be efficiently fit on large-scale raw data in an end-to-end fashion. Experimental results on four real-world datasets show that our proposed approach not only outperforms existing state-of-the-art approaches for prediction but also offers good explainability. Code is available at: https://github.com/DeepGraphLearning/RecommenderSystems.</div></details></td>
        <td>MovieLens</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>25</td>
        <td>AFM</td>
        <td><a href="https://paperswithcode.com/paper/attentional-factorization-machines-learning">Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks</a></td>
        <td><details><summary>Abstract</summary><div>Factorization Machines (FMs) are a supervised learning approach that enhances the linear regression model by incorporating the second-order feature interactions. Despite effectiveness, FM can be hindered by its modelling of all feature interactions with the same weight, as not all feature interactions are equally useful and predictive. For example, the interactions with useless features may even introduce noises and adversely degrade the performance. In this work, we improve FM by discriminating the importance of different feature interactions. We propose a novel model named Attentional Factorization Machine (AFM), which learns the importance of each feature interaction from data via a neural attention network. Extensive experiments on two real-world datasets demonstrate the effectiveness of AFM. Empirically, it is shown on regression task AFM betters FM with a 8.6% relative improvement, and consistently outperforms the state-of-the-art deep learning methods Wide&Deep [Cheng et al., 2016] and DeepCross[Shan et al., 2016] with a much simpler structure and fewer model parameters. Our implementation of AFM is publicly available at: https://github. com/hexiangnan/attentional factorization machine</div></details></td>
        <td>MovieLens</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>26</td>
        <td>DeepCross</td>
        <td><a href="https://paperswithcode.com/paper/deep-crossing-web-scale-modeling-without">Deep Crossing: Web-Scale Modeling without Manually Crafted Combinatorial Features</a></td>
        <td><details><summary>Abstract</summary><div>Manually crafted combinatorial features have been the “secret sauce” behind many successful models. For web-scale applications, however, the variety and volume of features make these manually crafted features expensive to create, maintain, and deploy. This paper proposes the Deep Crossing model which is a deep neural network that automatically combines features to produce superior models. The input of Deep Crossing is a set of individual features that can be either dense or sparse. The important crossing features are discovered implicitly by the networks, which are comprised of an embedding and stacking layer, as well as a cascade of Residual Units. Deep Crossing is implemented with a modeling tool called the Computational Network Tool Kit (CNTK), powered by a multi-GPU platform. It was able to build, from scratch, two web-scale models for a major paid search engine, and achieve superior results with only a sub-set of the features used in the production models. This demonstrates the potential of using Deep Crossing as a general modeling paradigm to improve existing products, as well as to speed up the development of new models with a fraction of the investment in feature engineering and acquisition of deep domain knowledge.</div></details></td>
        <td>/</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>27</td>
        <td>DIEN</td>
        <td><a href="https://paperswithcode.com/paper/deep-interest-evolution-network-for-click">Deep Interest Evolution Network for Click-Through Rate Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Click-through rate (CTR) prediction, whose goal is to estimate the probability of a user clicking on the item, has become one of the core tasks in the advertising system. For CTR prediction model, it is necessary to capture the latent user interest behind the user behavior data. Besides, considering the changing of the external environment and the internal cognition, user interest evolves over time dynamically. There are several CTR prediction methods for interest modeling, while most of them regard the representation of behavior as the interest directly, and lack specially modeling for latent interest behind the concrete behavior. Moreover, little work considers the changing trend of the interest. In this paper, we propose a novel model, named Deep Interest Evolution Network (DIEN), for CTR prediction. Specifically, we design interest extractor layer to capture temporal interests from history behavior sequence. At this layer, we introduce an auxiliary loss to supervise interest extracting at each step. As user interests are diverse, especially in the e-commerce system, we propose interest evolving layer to capture interest evolving process that is relative to the target item. At interest evolving layer, attention mechanism is embedded into the sequential structure novelly, and the effects of relative interests are strengthened during interest evolution. In the experiments on both public and industrial datasets, DIEN significantly outperforms the state-of-the-art solutions. Notably, DIEN has been deployed in the display advertisement system of Taobao, and obtained 20.7% improvement on CTR.</div></details></td>
        <td>Amazon Electronics</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>28</td>
        <td>DIN</td>
        <td><a href="https://paperswithcode.com/paper/deep-interest-network-for-click-through-rate">Deep Interest Network for Click-Through Rate Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.</div></details></td>
        <td>Amazon Electronics</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>29</td>
        <td>FGCNN</td>
        <td><a href="https://paperswithcode.com/paper/feature-generation-by-convolutional-neural">Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Click-Through Rate prediction is an important task in recommender systems, which aims to estimate the probability of a user to click on a given item. Recently, many deep models have been proposed to learn low-order and high-order feature interactions from original features. However, since useful interactions are always sparse, it is difficult for DNN to learn them effectively under a large number of parameters. In real scenarios, artificial features are able to improve the performance of deep models (such as Wide & Deep Learning), but feature engineering is expensive and requires domain knowledge, making it impractical in different scenarios. Therefore, it is necessary to augment feature space automatically.In this paper, We propose a novel Feature Generation by Convolutional Neural Network (FGCNN) model with two components: Feature Generation and Deep Classifier. Feature Generation leverages the strength of CNN to generate local patterns and recombine them to generate new features. Deep Classifier adopts the structure of IPNN to learn interactions from the augmented feature space. Experimental results on three large-scale datasets show that FGCNN significantly outperforms nine state-of-the-art models. Moreover, when applying some state-of-the-art models as Deep Classifier, better performance is always achieved, showing the great compatibility of our FGCNN model. This work explores a novel direction for CTR predictions: it is quite useful to reduce the learning difficulties of DNN by automatically identifying important features.</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>30</td>
        <td>Fibinet</td>
        <td><a href="https://paperswithcode.com/paper/fibinet-combining-feature-importance-and">FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Advertising and feed ranking are essential to many Internet companies such as Facebook and Sina Weibo. Among many real-world advertising and feed ranking systems, click through rate (CTR) prediction plays a central role. There are many proposed models in this field such as logistic regression, tree based models, factorization machine based models and deep learning based CTR models. However, many current works calculate the feature interactions in a simple way such as Hadamard product and inner product and they care less about the importance of features. In this paper, a new model named FiBiNET as an abbreviation for Feature Importance and Bilinear feature Interaction NETwork is proposed to dynamically learn the feature importance and fine-grained feature interactions. On the one hand, the FiBiNET can dynamically learn the importance of features via the Squeeze-Excitation network (SENET) mechanism; on the other hand, it is able to effectively learn the feature interactions via bilinear function. We conduct extensive experiments on two realworld datasets and show that our shallow model outperforms other shallow models such as factorization machine(FM) and field-aware factorization machine(FFM). In order to improve performance further, we combine a classical deep neural network(DNN) component with the shallow model to be a deep model. The deep FiBiNET consistently outperforms the other state-of-the-art deep models such as DeepFM and extreme deep factorization machine(XdeepFM)</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>31</td>
        <td>FLEN</td>
        <td><a href="https://paperswithcode.com/paper/flen-leveraging-field-for-scalable-ctr">FLEN: Leveraging Field for Scalable CTR Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Click-Through Rate (CTR) prediction systems are usually based on multi-field categorical features, i.e., every feature is categorical and belongs to one and only one field. Modeling feature conjunctions is crucial for CTR prediction accuracy. However, it usually requires a massive number of parameters to explicitly model all feature conjunctions, which is not scalable for real-world production systems. In this paper, we describe a novel Field-Leveraged Embedding Network (FLEN) which has been deployed in the commercial recommender systems in Meitu and serves the main traffic. FLEN devises a field-wise bi-interaction pooling technique. By suitably exploiting field information, the field-wise bi-interaction pooling layer captures both inter-field and intra-field feature conjunctions with a small number of model parameters and an acceptable time complexity for industrial applications. We show that some classic shallow CTR models can be regarded as special cases of this technique, i.e., MF, FM and FwFM. We identify a unique challenge in this technique, i.e., the FM module in our model may suffer from the coupled gradient issue, which will damage the performance of the model. To solve this challenge, we develop Dicefactor: a novel dropout method to prevent independent latent features from co-adapting. Extensive experiments, including offline evaluations and online A/B testing on real production systems, demonstrate the effectiveness and efficiency of FLEN against the state-of-the-art models. In particular, compared to the previous version deployed on the system (i.e. NFM), FLEN has obtained 5.19% improvement on CTR with 1/6 of memory usage and computation time.</div></details></td>
        <td>Avazu</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>32</td>
        <td>FNN</td>
        <td><a href="https://paperswithcode.com/paper/deep-learning-over-multi-field-categorical">Deep Learning over Multi-field Categorical Data</a></td>
        <td><details><summary>Abstract</summary><div>Predicting user responses, such as click-through rate and conversion rate, are critical in many web applications including web search, personalised recommendation, and online advertising. Different from continuous raw features that we usually found in the image and audio domains, the input features in web space are always of multi-field and are mostly discrete and categorical while their dependencies are little known. Major user response prediction models have to either limit themselves to linear models or require manually building up high-order combination features. The former loses the ability of exploring feature interactions, while the latter results in a heavy computation in the large feature space. To tackle the issue, we propose two novel models using deep neural networks (DNNs) to automatically learn effective patterns from categorical feature interactions and make predictions of users’ ad clicks. To get our DNNs efficiently work, we propose to leverage three feature transformation methods, i.e., factorisation machines (FMs), restricted Boltzmann machines (RBMs) and denoising auto-encoders (DAEs). This paper presents the structure of our models and their efficient training algorithms. The large-scale experiments with real-world data demonstrate that our methods work better than major state-of-the-art models.</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>33</td>
        <td>NFM</td>
        <td><a href="https://paperswithcode.com/paper/neural-factorization-machines-for-sparse">Neural Factorization Machines for Sparse Predictive Analytics</a></td>
        <td><details><summary>Abstract</summary><div>Many predictive tasks of web applications need to model categorical variables, such as user IDs and demographics like genders and occupations. To apply standard machine learning techniques, these categorical predictors are always converted to a set of binary features via one-hot encoding, making the resultant feature vector highly sparse. To learn from such sparse data effectively, it is crucial to account for the interactions between features.</div></details></td>
        <td>Yelp</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>34</td>
        <td>PNN</td>
        <td><a href="https://paperswithcode.com/paper/product-based-neural-networks-for-user">Product-based Neural Networks for User Response Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Predicting user responses, such as clicks and conversions, is of great importance and has found its usage in many Web applications including recommender systems, web search and online advertising. The data in those applications is mostly categorical and contains multiple fields; a typical representation is to transform it into a high-dimensional sparse binary feature representation via one-hot encoding. Facing with the extreme sparsity, traditional models may limit their capacity of mining shallow patterns from the data, i.e. low-order feature combinations. Deep models like deep neural networks, on the other hand, cannot be directly applied for the high-dimensional input because of the huge feature space. In this paper, we propose a Product-based Neural Networks (PNN) with an embedding layer to learn a distributed representation of the categorical data, a product layer to capture interactive patterns between interfield categories, and further fully connected layers to explore high-order feature interactions. Our experimental results on two large-scale real-world ad click datasets demonstrate that PNNs consistently outperform the state-of-the-art models on various metrics.</div></details></td>
        <td>Criteo</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>35</td>
        <td>ESMM</td>
        <td><a href="https://paperswithcode.com/paper/entire-space-multi-task-model-an-effective">Entire Space Multi-Task Model: An Effective Approach for Estimating Post-Click Conversion Rate</a></td>
        <td><details><summary>Abstract</summary><div>Estimating post-click conversion rate (CVR) accurately is crucial for ranking systems in industrial applications such as recommendation and advertising. Conventional CVR modeling applies popular deep learning methods and achieves state-of-the-art performance. However it encounters several task-specific problems in practice, making CVR modeling challenging. For example, conventional CVR models are trained with samples of clicked impressions while utilized to make inference on the entire space with samples of all impressions. This causes a sample selection bias problem. Besides, there exists an extreme data sparsity problem, making the model fitting rather difficult. In this paper, we model CVR in a brand-new perspective by making good use of sequential pattern of user actions, i.e., impression -> click -> conversion. The proposed Entire Space Multi-task Model (ESMM) can eliminate the two problems simultaneously by i) modeling CVR directly over the entire space, ii) employing a feature representation transfer learning strategy. Experiments on dataset gathered from Taobao's recommender system demonstrate that ESMM significantly outperforms competitive methods. We also release a sampling version of this dataset to enable future research. To the best of our knowledge, this is the first public dataset which contains samples with sequential dependence of click and conversion labels for CVR modeling.</div></details></td>
        <td>Alibaba Click and Conversion Prediction</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>36</td>
        <td>MMOE</td>
        <td><a href="https://paperswithcode.com/paper/modeling-task-relationships-in-multi-task">Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts</a></td>
        <td><details><summary>Abstract</summary><div>Neural-based multi-task learning has been successfully used in many real-world large-scale applications such as recommendation systems. For example, in movie recommendations, beyond providing users movies which they tend to purchase and watch, the system might also optimize for users liking the movies afterwards. With multi-task learning, we aim to build a single model that learns these multiple goals and tasks simultaneously. However, the prediction quality of commonly used multi-task models is often sensitive to the relationships between tasks. It is therefore important to study the modeling tradeoffs between task-specific objectives and inter-task relationships. In this work, we propose a novel multi-task learning approach, Multi-gate Mixture-of-Experts (MMoE), which explicitly learns to model task relationships from data. We adapt the Mixture-of-Experts (MoE) structure to multi-task learning by sharing the expert submodels across all tasks, while also having a gating network trained to optimize each task. To validate our approach on data with different levels of task relatedness, we first apply it to a synthetic dataset where we control the task relatedness. We show that the proposed approach performs better than baseline methods when the tasks are less related. We also show that the MMoE structure results in an additional trainability benefit, depending on different levels of randomness in the training data and model initialization. Furthermore, we demonstrate the performance improvements by MMoE on real tasks including a binary classification benchmark, and a large-scale content recommendation system at Google.</div></details></td>
        <td>Census-income</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>37</td>
        <td>PLE</td>
        <td><a href="https://paperswithcode.com/paper/progressive-layered-extraction-ple-a-novel">Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations</a></td>
        <td><details><summary>Abstract</summary><div>Multi-task learning (MTL) has been successfully applied to many recommendation applications. However, MTL models often suffer from performance degeneration with negative transfer due to the complex and competing task correlation in real-world recommender systems. Moreover, through extensive experiments across SOTA MTL models, we have observed an interesting seesaw phenomenon that performance of one task is often improved by hurting the performance of some other tasks. To address these issues, we propose a Progressive Layered Extraction (PLE) model with a novel sharing structure design. PLE separates shared components and task-specific components explicitly and adopts a progressive routing mechanism to extract and separate deeper semantic knowledge gradually, improving efficiency of joint representation learning and information routing across tasks in a general setup. We apply PLE to both complicatedly correlated and normally correlated tasks, ranging from two-task cases to multi-task cases on a real-world Tencent video recommendation dataset with 1 billion samples, and results show that PLE outperforms state-of-the-art MTL models significantly under different task correlations and task-group size. Furthermore, online evaluation of PLE on a large-scale content recommendation platform at Tencent manifests 2.23% increase in view-count and 1.84% increase in watch time compared to SOTA MTL models, which is a significant improvement and demonstrates the effectiveness of PLE. Finally, extensive offline experiments on public benchmark datasets demonstrate that PLE can be applied to a variety of scenarios besides recommendations to eliminate the seesaw phenomenon. PLE now has been deployed to the online video recommender system in Tencent successfully.</div></details></td>
        <td>Census-income</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>38</td>
        <td>ShareBottom</td>
        <td><a href="https://paperswithcode.com/paper/multitask-learning-a-knowledge-based-source">Multitask learning</a></td>
        <td><details><summary>Abstract</summary><div>Multitask Learning is an approach to inductive transfer that improves learning for one task by using the information contained in the training signals of other related tasks. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. In this thesis we demonstrate multitask learning for a dozen problems. We explain how multitask learning works and show that there are many opportunities for multitask learning in real domains. We show that in some cases features that would normally be used as inputs work better if used as multitask outputs instead. We present suggestions for how to get the most out of multitask learning in artificial neural nets, present an algorithm for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Multitask learning improves generalization performance, can be applied in many different kinds of domains, and can be used with different learning algorithms. We conjecture there will be many opportunities for its use on real-world problems.</div></details></td>
        <td>Census-income</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>39</td>
        <td>Maml</td>
        <td><a href="https://paperswithcode.com/paper/model-agnostic-meta-learning-for-fast">Model-agnostic meta-learning for fast adaptation of deep networks</a></td>
        <td><details><summary>Abstract</summary><div>We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.</div></details></td>
        <td>Omniglot</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>40</td>
        <td>Listwise</td>
        <td><a href="https://paperswithcode.com/paper/sequential-evaluation-and-generation">Sequential Evaluation and Generation Framework for Combinatorial Recommender System</a></td>
        <td><details><summary>Abstract</summary><div>In combinatorial recommender systems, multiple items are exposed to the user at one time in the result page, where the correlations among the items have an impact on the user behavior. In this work, we model the combinatorial recommendation as the problem of generating a sequence (ordered list) of items from a candidate set, with the target of maximizing the expected overall utility (e.g. total clicks) of the sequence. Toward solving this problem, we propose the Evaluation-Generation framework. On the one hand of this framework, an evaluation model is trained to evaluate the expected overall utility, by fully considering the user, item information and the correlations among the co-exposed items. On the other hand, generation policies based on heuristic searching or reinforcement learning are devised to generate potential high-quality sequences, from which the evaluation model selects one to expose. We propose effective model architectures and learning metrics under this framework. We also offer a series of offline tests to thoroughly investigate the performance of the proposed framework, as supplements to the online experiments. Our results show an obvious increase in performance compared with the previous solutions.</div></details></td>
        <td>/</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>41</td>
        <td>TDM</td>
        <td><a href="https://arxiv.org/pdf/1801.02294.pdf">Learning Tree-based Deep Model for Recommender Systems</a></td>
        <td><details><summary>Abstract</summary><div>Model-based methods for recommender systems have been studied extensively in recent years. In systems with large corpus, however, the calculation cost for the learnt model to predict all useritem preferences is tremendous, which makes full corpus retrieval extremely difficult. To overcome the calculation barriers, models such as matrix factorization resort to inner product form (i.e., model user-item preference as the inner product of user, item latent factors) and indexes to facilitate efficient approximate k-nearest neighbor searches. However, it still remains challenging to incorporate more expressive interaction forms between user and item features, e.g., interactions through deep neural networks, because of the calculation cost. In this paper, we focus on the problem of introducing arbitrary advanced models to recommender systems with large corpus. We propose a novel tree-based method which can provide logarithmic complexity w.r.t. corpus size even with more expressive models such as deep neural networks. Our main idea is to predict user interests from coarse to fine by traversing tree nodes in a top-down fashion and making decisions for each user-node pair. We also show that the tree structure can be jointly learnt towards better compatibility with users’ interest distribution and hence facilitate both training and prediction. Experimental evaluations with two large-scale real-world datasets show that the proposed method significantly outperforms traditional methods. Online A/B test results in Taobao display advertising platform also demonstrate the effectiveness of the proposed method in production environments.</div></details></td>
        <td>/</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>42</td>
        <td>Tagspace</td>
        <td><a href="https://paperswithcode.com/paper/tagspace-semantic-embeddings-from-hashtags">TagSpace: Semantic Embeddings from Hashtags</a></td>
        <td><details><summary>Abstract</summary><div>We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.</div></details></td>
        <td>ag_news</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>43</td>
        <td>Textcnn</td>
        <td><a href="https://aclanthology.org/D14-1181.pdf">Convolutional neural networks for sentence classication</a></td>
        <td><details><summary>Abstract</summary><div>We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.</div></details></td>
        <td>Senta</td>
        <td><a href="https://github.com/PaddlePaddle/PaddleRec/blob/master/README_EN.md">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>44</td>
        <td>DIFM</td>
        <td><a href="https://paperswithcode.com/paper/a-dual-input-aware-factorization-machine-for">A Dual Input-aware Factorization Machine for CTR Prediction</a></td>
        <td><details><summary>Abstract</summary><div>Factorization Machines (FMs) refer to a class of general predictors working with real valued feature vectors, which are well-known for their ability to estimate model parameters under significant sparsity and have found successful applications in many areas such as the click-through rate (CTR) prediction. However, standard FMs only produce a single fixed representation for each feature across different input instances, which may limit the CTR model’s expressive and predictive power. Inspired by the success of Input-aware Factorization Machines (IFMs), which aim to learn more flexible and informative representations of a given feature according to different input instances, we propose a novel model named Dual Input-aware Factorization Machines (DIFMs) that can adaptively reweight the original feature representations at the bit-wise and vector-wise levels simultaneously. Furthermore, DIFMs strategically integrate various components including Multi-Head Self-Attention, Residual Networks and DNNs into a unified end-to-end model. Comprehensive experiments on two real-world CTR prediction datasets show that the DIFM model can outperform several state-of-the-art models consistently.</div></details></td>
        <td>criteo</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>45</td>
        <td>BERT4Rec</td>
        <td><a href="https://paperswithcode.com/paper/bert4rec-sequential-recommendation-with">BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer</a></td>
        <td><details><summary>Abstract</summary><div>Modeling users' dynamic and evolving preferences from their historical behaviors is challenging and crucial for recommendation systems. Previous methods employ sequential neural networks (e.g., Recurrent Neural Network) to encode users' historical interactions from left to right into hidden representations for making recommendations. Although these methods achieve satisfactory results, they often assume a rigidly ordered sequence which is not always practical. We argue that such left-to-right unidirectional architectures restrict the power of the historical sequence representations. For this purpose, we introduce a Bidirectional Encoder Representations from Transformers for sequential Recommendation (BERT4Rec). However, jointly conditioning on both left and right context in deep bidirectional model would make the training become trivial since each item can indirectly "see the target item". To address this problem, we train the bidirectional model using the Cloze task, predicting the masked items in the sequence by jointly conditioning on their left and right context. Comparing with predicting the next item at each position in a sequence, the Cloze task can produce more samples to train a more powerful bidirectional model. Extensive experiments on four benchmark datasets show that our model outperforms various state-of-the-art sequential models consistently.</div></details></td>
        <td>Beauty</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>46</td>
        <td>FAT_DeepFFM</td>
        <td><a href="https://paperswithcode.com/paper/fat-deepffm-field-attentive-deep-field-aware">FAT-DeepFFM: Field Attentive Deep Field-aware Factorization Machine</a></td>
        <td><details><summary>Abstract</summary><div>Click through rate (CTR) estimation is a fundamental task in personalized advertising and recommender systems. Recent years have witnessed the success of both the deep learning based model and attention mechanism in various tasks in computer vision (CV) and natural language processing (NLP). How to combine the attention mechanism with deep CTR model is a promising direction because it may ensemble the advantages of both sides. Although some CTR model such as Attentional Factorization Machine (AFM) has been proposed to model the weight of second order interaction features, we posit the evaluation of feature importance before explicit feature interaction procedure is also important for CTR prediction tasks because the model can learn to selectively highlight the informative features and suppress less useful ones if the task has many input features. In this paper, we propose a new neural CTR model named Field Attentive Deep Field-aware Factorization Machine (FAT-DeepFFM) by combining the Deep Field-aware Factorization Machine (DeepFFM) with Compose-Excitation network (CENet) field attention mechanism which is proposed by us as an enhanced version of Squeeze-Excitation network (SENet) to highlight the feature importance. We conduct extensive experiments on two real-world datasets and the experiment results show that FAT-DeepFFM achieves the best performance and obtains different improvements over the state-of-the-art methods. We also compare two kinds of attention mechanisms (attention before explicit feature interaction vs. attention after explicit feature interaction) and demonstrate that the former one outperforms the latter one significantly.</div></details></td>
        <td>criteo</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>47</td>
        <td>DeepRec</td>
        <td><a href="https://paperswithcode.com/paper/deeprec-an-open-source-toolkit-for-deep">DeepRec: An Open-source Toolkit for Deep Learning based Recommendation</a></td>
        <td><details><summary>Abstract</summary><div>Deep learning based recommender systems have been extensively explored in recent years. However, the large number of models proposed each year poses a big challenge for both researchers and practitioners in reproducing the results for further comparisons. Although a portion of papers provides source code, they adopted different programming languages or different deep learning packages, which also raises the bar in grasping the ideas. To alleviate this problem, we released the open source project: \textbf{DeepRec}. In this toolkit, we have implemented a number of deep learning based recommendation algorithms using Python and the widely used deep learning package - Tensorflow. Three major recommendation scenarios: rating prediction, top-N recommendation (item ranking) and sequential recommendation, were considered. Meanwhile, DeepRec maintains good modularity and extensibility to easily incorporate new models into the framework. It is distributed under the terms of the GNU General Public License. The source code is available at github: \url{https://github.com/cheungdaven/DeepRec}</div></details></td>
        <td>Netflix</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>48</td>
        <td>ENSFM</td>
        <td><a href="https://paperswithcode.com/paper/eicient-non-sampling-factorization-machines">Eicient Non-Sampling Factorization Machines for Optimal Context-Aware Recommendation</a></td>
        <td><details><summary>Abstract</summary><div>o provide more accurate recommendation, it is a trending topic to go beyond modeling user-item interactions and take context features into account. Factorization Machines (FM) with negative sampling is a popular solution for context-aware recommendation. However, it is not robust as sampling may lost important information and usually leads to non-optimal performances in practical. Several recent e orts have enhanced FM with deep learning architectures for modelling high-order feature interactions. While they either focus on rating prediction task only, or typically adopt the negative sampling strategy for optimizing the ranking performance. Due to the dramatic uctuation of sampling, it is reasonable to argue that these sampling-based FM methods are still suboptimal for context-aware recommendation. In this paper, we propose to learn FM without sampling for ranking tasks that helps context-aware recommendation particularly. Despite e ectiveness, such a non-sampling strategy presents strong challenge in learning e ciency of the model. Accordingly, we further design a new ideal framework named E cient Non-Sampling Factorization Machines (ENSFM). ENSFM not only seamlessly connects the relationship between FM and Matrix Factorization (MF), but also resolves the challenging e ciency issue via novel memorization strategies. Through extensive experiments on three realworld public datasets, we show that 1) the proposed ENSFM consistently and signi cantly outperforms the state-of-the-art methods on context-aware Top-K recommendation, and 2) ENSFM achieves signi cant advantages in training e ciency, which makes it more applicable to real-world large-scale systems. Moreover, the empirical results indicate that a proper learning method is even more important than advanced neural network structures for Top-K recommendation task. Our implementation has been released 1 to facilitate further developments on e cient non-sampling methods</div></details></td>
        <td>ml-1m</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>49</td>
        <td>TiSAS</td>
        <td><a href="https://paperswithcode.com/paper/tisasrec-time-interval-aware-self-attention">Time Interval Aware Self-Attention for Sequential Recommendation</a></td>
        <td><details><summary>Abstract</summary><div>Sequential recommender systems seek to exploit the order of users' interactions, in order to predict their next action based on the context of what they have done recently. Traditionally, Markov Chains(MCs), and more recently Recurrent Neural Networks (RNNs) and Self Attention (SA) have proliferated due to their ability to capture the dynamics of sequential patterns. However a simplifying assumption made by most of these models is to regard interaction histories as ordered sequences, without regard for the time intervals between each interaction (i.e., they model the time-order but not the actual timestamp). In this paper, we seek to explicitly model the timestamps of interactions within a sequential modeling framework to explore the influence of different time intervals on next item prediction. We propose TiSASRec (Time Interval aware Self-attention based sequential recommendation), which models both the absolute positions of items as well as the time intervals between them in a sequence. Extensive empirical studies show the features of TiSASRec under different settings and compare the performance of self-attention with different positional encodings. Furthermore, experimental results show that our method outperforms various state-of-the-art sequential models on both sparse and dense datasets and different evaluation metrics.</div></details></td>
        <td>ml-1m</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>50</td>
        <td>AutoFIS</td>
        <td><a href="https://paperswithcode.com/paper/autofis-automatic-feature-interaction">Automatic Feature Interaction Selection in Factorization Models</a></td>
        <td><details><summary>Abstract</summary><div>Learning feature interactions is crucial for click-through rate (CTR) prediction in recommender systems. In most existing deep learning models, feature interactions are either manually designed or simply enumerated. However, enumerating all feature interactions brings large memory and computation cost. Even worse, useless interactions may introduce noise and complicate the training process. In this work, we propose a two-stage algorithm called Automatic Feature Interaction Selection (AutoFIS). AutoFIS can automatically identify important feature interactions for factorization models with computational cost just equivalent to training the target model to convergence. In the \emph{search stage}, instead of searching over a discrete set of candidate feature interactions, we relax the choices to be continuous by introducing the architecture parameters. By implementing a regularized optimizer over the architecture parameters, the model can automatically identify and remove the redundant feature interactions during the training process of the model. In the \emph{re-train stage}, we keep the architecture parameters serving as an attention unit to further boost the performance. Offline experiments on three large-scale datasets (two public benchmarks, one private) demonstrate that AutoFIS can significantly improve various FM based models. AutoFIS has been deployed onto the training platform of Huawei App Store recommendation service, where a 10-day online A/B test demonstrated that AutoFIS improved the DeepFM model by 20.3\% and 20.1\% in terms of CTR and CVR respectively.</div></details></td>
        <td>criteo</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>51</td>
        <td>Dselect_K</td>
        <td><a href="https://paperswithcode.com/paper/dselect-k-differentiable-selection-in-the">DSelect-k: a continuously differentiable and sparse gate for MoE</a></td>
        <td><details><summary>Abstract</summary><div>The Mixture-of-Experts (MoE) architecture is showing promising results in improving parameter sharing in multi-task learning (MTL) and in scaling high-capacity neural networks. State-of-the-art MoE models use a trainable sparse gate to select a subset of the experts for each input example. While conceptually appealing, existing sparse gates, such as Top-k, are not smooth. The lack of smoothness can lead to convergence and statistical performance issues when training with gradient-based methods. In this paper, we develop DSelect-k: a continuously differentiable and sparse gate for MoE, based on a novel binary encoding formulation. The gate can be trained using first-order methods, such as stochastic gradient descent, and offers explicit control over the number of experts to select. We demonstrate the effectiveness of DSelect-k on both synthetic and real MTL datasets with up to  tasks. Our experiments indicate that DSelect-k can achieve statistically significant improvements in prediction and expert selection over popular MoE gates. Notably, on a real-world, large-scale recommender system, DSelect-k achieves over  improvement in predictive performance compared to Top-k. We provide an open-source implementation of DSelect-k.</div></details></td>
        <td>Multi_MNIST</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>52</td>
        <td>MHCN</td>
        <td><a href="https://paperswithcode.com/paper/self-supervised-multi-channel-hypergraph">Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation</a></td>
        <td><details><summary>Abstract</summary><div>Social relations are often used to improve recommendation quality when user-item interaction data is sparse in recommender systems. Most existing social recommendation models exploit pairwise relations to mine potential user preferences. However, real-life interactions among users are very complicated and user relations can be high-order. Hypergraph provides a natural way to model complex high-order relations, while its potentials for improving social recommendation are under-explored. In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations. Technically, each channel in the network encodes a hypergraph that depicts a common high-order user relation pattern via hypergraph convolution. By aggregating the embeddings learned through multiple channels, we obtain comprehensive user representations to generate recommendation results. However, the aggregation operation might also obscure the inherent characteristics of different types of high-order connectivity information. To compensate for the aggregating loss, we innovatively integrate self-supervised learning into the training of the hypergraph convolutional network to regain the connectivity information with hierarchical mutual information maximization. The experimental results on multiple real-world datasets show that the proposed model outperforms the SOTA methods, and the ablation study verifies the effectiveness of the multi-channel setting and the self-supervised task. The implementation of our model is available via https://github.com/Coder-Yu/RecQ.</div></details></td>
        <td>LastFM</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>53</td>
        <td>DCN_V2</td>
        <td><a href="https://paperswithcode.com/paper/dcn-m-improved-deep-cross-network-for-feature">DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems</a></td>
        <td><details><summary>Abstract</summary><div>Learning effective feature crosses is the key behind building recommender systems. However, the sparse and large feature space requires exhaustive search to identify effective crosses. Deep & Cross Network (DCN) was proposed to automatically and efficiently learn bounded-degree predictive feature interactions. Unfortunately, in models that serve web-scale traffic with billions of training examples, DCN showed limited expressiveness in its cross network at learning more predictive feature interactions. Despite significant research progress made, many deep learning models in production still rely on traditional feed-forward neural networks to learn feature crosses inefficiently. In light of the pros/cons of DCN and existing feature interaction learning approaches, we propose an improved framework DCN-V2 to make DCN more practical in large-scale industrial settings. In a comprehensive experimental study with extensive hyper-parameter search and model tuning, we observed that DCN-V2 approaches outperform all the state-of-the-art algorithms on popular benchmark datasets. The improved DCN-V2 is more expressive yet remains cost efficient at feature interaction learning, especially when coupled with a mixture of low-rank architecture. DCN-V2 is simple, can be easily adopted as building blocks, and has delivered significant offline accuracy and online business metrics gains across many web-scale learning to rank systems at Google.</div></details></td>
        <td>criteo</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>54</td>
        <td>DeepFEFM</td>
        <td><a href="https://paperswithcode.com/paper/field-embedded-factorization-machines-for">Field-Embedded Factorization Machines for Click-through rate prediction</a></td>
        <td><details><summary>Abstract</summary><div>Click-through rate (CTR) prediction models are common in many online applications such as digital advertising and recommender systems. Field-Aware Factorization Machine (FFM) and Field-weighted Factorization Machine (FwFM) are state-of-the-art among the shallow models for CTR prediction. Recently, many deep learning-based models have also been proposed. Among deeper models, DeepFM, xDeepFM, AutoInt+, and FiBiNet are state-of-the-art models. The deeper models combine a core architectural component, which learns explicit feature interactions, with a deep neural network (DNN) component. We propose a novel shallow Field-Embedded Factorization Machine (FEFM) and its deep counterpart Deep Field-Embedded Factorization Machine (DeepFEFM). FEFM learns symmetric matrix embeddings for each field pair along with the usual single vector embeddings for each feature. FEFM has significantly lower model complexity than FFM and roughly the same complexity as FwFM. FEFM also has insightful mathematical properties about important fields and field interactions. DeepFEFM combines the FEFM interaction vectors learned by the FEFM component with a DNN and is thus able to learn higher order interactions. We conducted comprehensive experiments over a wide range of hyperparameters on two large publicly available real-world datasets. When comparing test AUC and log loss, the results show that FEFM and DeepFEFM outperform the existing state-of-the-art shallow and deep models for CTR prediction tasks. We have made the code of FEFM and DeepFEFM available in the DeepCTR library (https://github.com/shenweichen/DeepCTR).</div></details></td>
        <td>criteo</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>55</td>
        <td>DLRM</td>
        <td><a href="https://paperswithcode.com/paper/190600091">Deep Learning Recommendation Model for Personalization and Recommendation Systems</a></td>
        <td><details><summary>Abstract</summary><div>With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for tackling personalization and recommendation tasks. These networks differ significantly from other deep learning networks due to their need to handle categorical features and are not well studied or understood. In this paper, we develop a state-of-the-art deep learning recommendation model (DLRM) and provide its implementation in both PyTorch and Caffe2 frameworks. In addition, we design a specialized parallelization scheme utilizing model parallelism on the embedding tables to mitigate memory constraints while exploiting data parallelism to scale-out compute from the fully-connected layers. We compare DLRM against existing recommendation models and characterize its performance on the Big Basin AI platform, demonstrating its usefulness as a benchmark for future algorithmic experimentation and system co-design.</div></details></td>
        <td>criteo</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>56</td>
        <td>SIGN</td>
        <td><a href="https://paperswithcode.com/paper/detecting-relevant-feature-interactions-for">Detecting Beneficial Feature Interactions for Recommender Systems</a></td>
        <td><details><summary>Abstract</summary><div>Feature interactions are essential for achieving high accuracy in recommender systems. Many studies take into account the interaction between every pair of features. However, this is suboptimal because some feature interactions may not be that relevant to the recommendation result, and taking them into account may introduce noise and decrease recommendation accuracy. To make the best out of feature interactions, we propose a graph neural network approach to effectively model them, together with a novel technique to automatically detect those feature interactions that are beneficial in terms of recommendation accuracy. The automatic feature interaction detection is achieved via edge prediction with an L0 activation regularization. Our proposed model is proved to be effective through the information bottleneck principle and statistical interaction theory. Experimental results show that our model (i) outperforms existing baselines in terms of accuracy, and (ii) automatically identifies beneficial feature interactions.</div></details></td>
        <td>ml-tag/auc=0.94</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>57</td>
        <td>MetaHeac</td>
        <td><a href="https://paperswithcode.com/paper/learning-to-expand-audience-via-meta-hybrid">Learning to Expand Audience via Meta Hybrid Experts and Critics for Recommendation and Advertising</a></td>
        <td><details><summary>Abstract</summary><div>In recommender systems and advertising platforms, marketers always want to deliver products, contents, or advertisements to potential audiences over media channels such as display, video, or social. Given a set of audiences or customers (seed users), the audience expansion technique (look-alike modeling) is a promising solution to identify more potential audiences, who are similar to the seed users and likely to finish the business goal of the target campaign. However, look-alike modeling faces two challenges: (1) In practice, a company could run hundreds of marketing campaigns to promote various contents within completely different categories every day, e.g., sports, politics, society. Thus, it is difficult to utilize a common method to expand audiences for all campaigns. (2) The seed set of a certain campaign could only cover limited users. Therefore, a customized approach based on such a seed set is likely to be overfitting. In this paper, to address these challenges, we propose a novel two-stage framework named Meta Hybrid Experts and Critics (MetaHeac) which has been deployed in WeChat Look-alike System. In the offline stage, a general model which can capture the relationships among various tasks is trained from a meta-learning perspective on all existing campaign tasks. In the online stage, for a new campaign, a customized model is learned with the given seed set based on the general model. According to both offline and online experiments, the proposed MetaHeac shows superior effectiveness for both content marketing campaigns in recommender systems and advertising campaigns in advertising platforms. Besides, MetaHeac has been successfully deployed in WeChat for the promotion of both contents and advertisements, leading to great improvement in the quality of marketing. The code has been available at \url{https://github.com/easezyc/MetaHeac}.</div></details></td>
        <td>Lookalike/auc=0.71</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>58</td>
        <td>DSIN</td>
        <td><a href="https://paperswithcode.com/paper/deep-session-interest-network-for-click">Deep Session Interest Network for Click-Through Rate Prediction</a></td>
        <td><details><summary>Abstract</summary><div>暂无</div></details></td>
        <td>Ali_Display_Ad_Click/auc=0.635</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>59</td>
        <td>AITM</td>
        <td><a href="https://paperswithcode.com/paper/modeling-the-sequential-dependence-among">Modeling the Sequential Dependence among Audience Multi-step Conversions with Multi-task Learning in Targeted Display Advertising</a></td>
        <td><details><summary>Abstract</summary><div>In most real-world large-scale online applications (e.g., e-commerce or finance), customer acquisition is usually a multi-step conversion process of audiences. For example, an impression->click->purchase process is usually performed of audiences for e-commerce platforms. However, it is more difficult to acquire customers in financial advertising (e.g., credit card advertising) than in traditional advertising. On the one hand, the audience multi-step conversion path is longer. On the other hand, the positive feedback is sparser (class imbalance) step by step, and it is difficult to obtain the final positive feedback due to the delayed feedback of activation. Multi-task learning is a typical solution in this direction. While considerable multi-task efforts have been made in this direction, a long-standing challenge is how to explicitly model the long-path sequential dependence among audience multi-step conversions for improving the end-to-end conversion. In this paper, we propose an Adaptive Information Transfer Multi-task (AITM) framework, which models the sequential dependence among audience multi-step conversions via the Adaptive Information Transfer (AIT) module. The AIT module can adaptively learn what and how much information to transfer for different conversion stages. Besides, by combining the Behavioral Expectation Calibrator in the loss function, the AITM framework can yield more accurate end-to-end conversion identification. The proposed framework is deployed in Meituan app, which utilizes it to real-timely show a banner to the audience with a high end-to-end conversion rate for Meituan Co-Branded Credit Cards. Offline experimental results on both industrial and public real-world datasets clearly demonstrate that the proposed framework achieves significantly better performance compared with state-of-the-art baselines.</div></details></td>
        <td>Ali-CCP/click auc=0.613, purchase auc=0.616</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>60</td>
        <td>IPRec</td>
        <td><a href="https://paperswithcode.com/paper/package-recommendation-with-intra-and-inter">Package Recommendation with Intra- and Inter-Package Attention Networks </a></td>
        <td><details><summary>Abstract</summary><div>With the booming of online social networks in the mobile internet, an emerging recommendation scenario has played a vital role in information acquisition for user, where users are no longer recommended with a single item or item list, but a combination of heterogeneous and diverse objects (called a package, e.g., a package including news, publisher, and friends viewing the news). Different from the conventional recommendation where users are recommended with the item itself, in package recommendation, users would show great interests on the explicitly displayed objects that could have a significant influence on the user behaviors. However, to the best of our knowledge, few effort has been made for package recommendation and existing approaches can hardly model the complex interactions of diverse objects in a package. Thus, in this paper, we make a first study on package recommendation and propose an Intra- and inter-package attention network for Package Recommendation (IPRec). Specifically, for package modeling, an intra-package attention network is put forward to capture the object-level intention of user interacting with the package, while an inter-package attention network acts as a package-level information encoder that captures collaborative features of neighboring packages. In addition, to capture users preference representation, we present a user preference learner equipped with a fine-grained feature aggregation network and coarse-grained package aggregation network. Extensive experiments on three real-world datasets demonstrate that IPRec significantly outperforms the state of the arts. Moreover, the model analysis demonstrates the interpretability of our IPRec and the characteristics of user behaviors. Codes and datasets can be obtained at https://github.com/LeeChenChen/IPRec.</div></details></td>
        <td>wechat/auc=0.693</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>61</td>
        <td>KIM</td>
        <td><a href="https://paperswithcode.com/paper/personalized-news-recommendation-with">Personalized News Recommendation with Knowledge-aware Interactive Matching</a></td>
        <td><details><summary>Abstract</summary><div>The most important task in personalized news recommendation is accurate matching between candidate news and user interest. Most of existing news recommendation methods model candidate news from its textual content and user interest from their clicked news in an independent way. However, a news article may cover multiple aspects and entities, and a user usually has different kinds of interest. Independent modeling of candidate news and user interest may lead to inferior matching between news and users. In this paper, we propose a knowledge-aware interactive matching method for news recommendation. Our method interactively models candidate news and user interest to facilitate their accurate matching. We design a knowledge-aware news co-encoder to interactively learn representations for both clicked news and candidate news by capturing their relatedness in both semantic and entities with the help of knowledge graphs. We also design a user-news co-encoder to learn candidate news-aware user interest representation and user-aware candidate news representation for better interest matching. Experiments on two real-world datasets validate that our method can effectively improve the performance of news recommendation.</div></details></td>
        <td>kim/AUC MRR nDCG5 nDCG10</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>62</td>
        <td>AutoInt</td>
        <td><a href="https://paperswithcode.com/paper/autoint-automatic-feature-interaction">AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Click-through rate (CTR) prediction, which aims to predict the probability of a user clicking on an ad or an item, is critical to many online applications such as online advertising and recommender systems. The problem is very challenging since (1) the input features (e.g., the user id, user age, item id, item category) are usually sparse and high-dimensional, and (2) an effective prediction relies on high-order combinatorial features (\textit{a.k.a.} cross features), which are very time-consuming to hand-craft by domain experts and are impossible to be enumerated. Therefore, there have been efforts in finding low-dimensional representations of the sparse and high-dimensional raw features and their meaningful combinations. In this paper, we propose an effective and efficient method called the \emph{AutoInt} to automatically learn the high-order feature interactions of input features. Our proposed algorithm is very general, which can be applied to both numerical and categorical input features. Specifically, we map both the numerical and categorical features into the same low-dimensional space. Afterwards, a multi-head self-attentive neural network with residual connections is proposed to explicitly model the feature interactions in the low-dimensional space. With different layers of the multi-head self-attentive neural networks, different orders of feature combinations of input features can be modeled. The whole model can be efficiently fit on large-scale raw data in an end-to-end fashion. Experimental results on four real-world datasets show that our proposed approach not only outperforms existing state-of-the-art approaches for prediction but also offers good explainability. Code is available at: \url{https://github.com/DeepGraphLearning/RecommenderSystems}.</div></details></td>
        <td>criteo/auc=0.8</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>63</td>
        <td>DPIN</td>
        <td><a href="https://paperswithcode.com/paper/deep-position-wise-interaction-network-for">Deep Position-wise Interaction Network for CTR Prediction</a></td>
        <td><details><summary>Abstract</summary><div>暂无</div></details></td>
        <td>Track2/auc pauc</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
</table>
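
上表中 MMoE、PLE、ShareBottom 等模型同属"共享专家/底座 + 任务特定输出"的多任务推荐范式。下面给出一个仅作示意的 MMoE 风格前向计算最小草图(基于 paddle.nn;专家数、层宽等超参数均为演示用假设值,并非 PaddleRec 官方实现),用于说明"多个专家共享输入、每个任务用独立门控加权"的基本结构;可运行的完整实现请以各行"快速开始"链接中的 PaddleRec 代码为准。

```python
# 示意性质的最小 MMoE 草图(假设 paddle>=2.0):多个专家共享同一输入,
# 每个任务通过独立的门控网络对专家输出加权求和,再接各自的 tower 输出。
import paddle
import paddle.nn as nn
import paddle.nn.functional as F


class MMoELayer(nn.Layer):
    def __init__(self, input_dim=64, expert_dim=32, num_experts=4, num_tasks=2):
        super().__init__()
        self.experts = nn.LayerList(
            [nn.Sequential(nn.Linear(input_dim, expert_dim), nn.ReLU())
             for _ in range(num_experts)])
        self.gates = nn.LayerList(
            [nn.Linear(input_dim, num_experts) for _ in range(num_tasks)])
        self.towers = nn.LayerList(
            [nn.Linear(expert_dim, 1) for _ in range(num_tasks)])

    def forward(self, x):
        # expert_outs: [batch, num_experts, expert_dim]
        expert_outs = paddle.stack([e(x) for e in self.experts], axis=1)
        outputs = []
        for gate, tower in zip(self.gates, self.towers):
            # 每个任务独立的门控权重: [batch, num_experts, 1]
            w = F.softmax(gate(x), axis=-1).unsqueeze(-1)
            task_input = (expert_outs * w).sum(axis=1)  # [batch, expert_dim]
            outputs.append(F.sigmoid(tower(task_input)))
        return outputs


if __name__ == "__main__":
    model = MMoELayer()
    y1, y2 = model(paddle.randn([8, 64]))  # 两个任务各自的预测,形状均为 [8, 1]
    print(y1.shape, y2.shape)
```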

### PARL
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>Prioritized_DQN</td>
        <td><a href="https://paperswithcode.com/paper/prioritized-experience-replay">Prioritized Experience Replay</a></td>
        <td><details><summary>Abstract</summary><div>Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>PPO</td>
        <td><a href="https://paperswithcode.com/paper/proximal-policy-optimization-algorithms">Proximal Policy Optimization Algorithms</a></td>
        <td><details><summary>Abstract</summary><div>We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>GA3C</td>
        <td><a href="无">GA3C: GPU-based A3C for Deep Reinforcement Learning</a></td>
        <td><details><summary>Abstract</summary><div>无</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>SAC</td>
        <td><a href="https://paperswithcode.com/paper/soft-actor-critic-off-policy-maximum-entropy">Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor</a></td>
        <td><details><summary>Abstract</summary><div>Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.</div></details></td>
        <td>reward</td>
        <td><a href="https://github.com/PaddlePaddle/PARL/tree/develop/examples/SAC">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>IMPALA</td>
        <td><a href="https://paperswithcode.com/paper/impala-scalable-distributed-deep-rl-with">Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures</a></td>
        <td><details><summary>Abstract</summary><div>In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach.</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>DDPG</td>
        <td><a href="https://paperswithcode.com/paper/continuous-control-with-deep-reinforcement">Continuous control with deep reinforcement learning</a></td>
        <td><details><summary>Abstract</summary><div>We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>PolicyGradient</td>
        <td><a href="https://paperswithcode.com/method/reinforce">REINFORCE</a></td>
        <td><details><summary>Abstract</summary><div>REINFORCE is a Monte Carlo variant of a policy gradient algorithm in reinforcement learning. </div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>NeurIPS2019-Learn-to-Move-Challenge</td>
        <td><a href="无">同 NeurIPS2018-AI-for-Prosthetics-Challenge</a></td>
        <td><details><summary>Abstract</summary><div>无</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>TD3</td>
        <td><a href="https://paperswithcode.com/paper/addressing-function-approximation-error-in">Addressing Function Approximation Error in Actor-Critic Methods</a></td>
        <td><details><summary>Abstract</summary><div>In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>DQN</td>
        <td><a href="https://paperswithcode.com/paper/human-level-control-through-deep">Human-level Control Through Deep Reinforcement Learning</a></td>
        <td><details><summary>Abstract</summary><div>The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>ES</td>
        <td><a href="https://paperswithcode.com/paper/evolution-strategies-as-a-scalable">Evolution Strategies as a Scalable Alternative to Reinforcement Learning</a></td>
        <td><details><summary>Abstract</summary><div>We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation. </div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>DQN_variant</td>
        <td><a href="https://paperswithcode.com/paper/deep-reinforcement-learning-with-double-q">Deep Reinforcement Learning with Double Q-learning</a></td>
        <td><details><summary>Abstract</summary><div>The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>A2C</td>
        <td><a href="https://paperswithcode.com/paper/asynchronous-methods-for-deep-reinforcement">A2C is a synchronous, deterministic variant of Asynchronous Advantage Actor Critic (A3C) </a></td>
        <td><details><summary>Abstract</summary><div>We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.</div></details></td>
        <td>reward</td>
        <td><a href="https://github.com/PaddlePaddle/PARL/tree/develop/examples/A2C">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>NeurIPS2018-AI-for-Pr</br>osthetics-Challenge</td>
        <td><a href="无">Efficient and Robust Learning on Elaborated Gaits with Curriculum Learning</a></td>
        <td><details><summary>Abstract</summary><div>无</div></details></td>
        <td>reward</td>
        <td><a href="https://github.com/PaddlePaddle/PARL/tree/develop/examples/NeurIPS2018-AI-for-Prosthetics-Challenge">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>MADDPG</td>
        <td><a href="https://paperswithcode.com/paper/multi-agent-actor-critic-for-mixed">Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments</a></td>
        <td><details><summary>Abstract</summary><div>We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.</div></details></td>
        <td>reward</td>
        <td><a href="https://github.com/PaddlePaddle/PARL/tree/develop/examples/MADDPG">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>AlphaZero</td>
        <td><a href="无">Learning to Play Othello Without Human Knowledge</a></td>
        <td><details><summary>Abstract</summary><div>Game playing is a popular area within the field of artificial intelligence. Most agents in literature have hand-crafted features and are often trained on datasets obtained from expert human play. We implement a self- play based algorithm using neural networks for policy estimation and Monte Carlo Tree Search for policy im- provement, with no input human knowledge that learns to play Othello. We evaluate our learning algorithm for 6x6 and 8x8 versions of the game of Othello. Our work is compared with random and greedy baselines, as well as a minimax agent that uses a hand-crafted scoring function, and achieves impressive results. Further, our agent for the 6x6 version of Othello easily outperforms humans when tested against it.</div></details></td>
        <td>reward</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>CARLA_SAC</td>
        <td><a href="同 SAC">Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor</a></td>
        <td><details><summary>Abstract</summary><div>同 SAC</div></details></td>
        <td>reward</td>
        <td><a href="https://github.com/PaddlePaddle/PARL/tree/develop/examples/CARLA_SAC">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>NeurIPS2020 L2RPN Cha</br>llenge</td>
        <td><a href="https://paperswithcode.com/paper/action-set-based-policy-optimization-for-safe">Action Set Based Policy Optimization for Safe Power Grid Management</a></td>
        <td><details><summary>Abstract</summary><div>Maintaining the stability of the modern power grid is becoming increasingly difficult due to fluctuating power consumption, unstable power supply coming from renewable energies, and unpredictable accidents such as man-made and natural disasters. As the operation on the power grid must consider its impact on future stability, reinforcement learning (RL) has been employed to provide sequential decision-making in power grid management. However, existing methods have not considered the environmental constraints. As a result, the learned policy has risk of selecting actions that violate the constraints in emergencies, which will escalate the issue of overloaded power lines and lead to large-scale blackouts. In this work, we propose a novel method for this problem, which builds on top of the search-based planning algorithm. At the planning stage, the search space is limited to the action set produced by the policy. The selected action strictly follows the constraints by testing its outcome with the simulation function provided by the system. At the learning stage, to address the problem that gradients cannot be propagated to the policy, we introduce Evolutionary Strategies (ES) with black-box policy optimization to improve the policy directly, maximizing the returns of the long run. In NeurIPS 2020 Learning to Run Power Network (L2RPN) competition, our solution safely managed the power grid and ranked first in both tracks.</div></details></td>
        <td>reward</td>
        <td><a href="https://github.com/PaddlePaddle/PARL/tree/develop/examples/NeurIPS2020-Learning-to-Run-a-Power-Network-Challenge">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>OAC</td>
        <td><a href="https://paperswithcode.com/paper/better-exploration-with-optimistic-actor-1">Better Exploration with Optimistic Actor-Critic</a></td>
        <td><details><summary>Abstract</summary><div>Actor-critic methods, a type of model-free Reinforcement Learning, have been successfully applied to challenging tasks in continuous control, often achieving state-of-the art performance. However, wide-scale adoption of these methods in real-world domains is made difficult by their poor sample efficiency. We address this problem both theoretically and empirically. On the theoretical side, we identify two phenomena preventing efficient exploration in existing state-of-the-art algorithms such as Soft Actor Critic. First, combining a greedy actor update with a pessimistic estimate of the critic leads to the avoidance of actions that the agent does not know about, a phenomenon we call pessimistic underexploration. Second, current algorithms are directionally uninformed, sampling actions with equal probability in opposite directions from the current mean. This is wasteful, since we typically need actions taken along certain directions much more than others. To address both of these phenomena, we introduce a new algorithm, Optimistic Actor Critic, which approximates a lower and upper confidence bound on the state-action value function. This allows us to apply the principle of optimism in the face of uncertainty to perform directed exploration using the upper bound while still using the lower bound to avoid overestimation. We evaluate OAC in several challenging continuous control tasks, achieving state-of the art sample efficiency.</div></details></td>
        <td>reward</td>
        <td><a href="https://github.com/PaddlePaddle/PARL/tree/develop/examples/OAC">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>QMIX</td>
        <td><a href="https://paperswithcode.com/paper/the-starcraft-multi-agent-challenge">The StarCraft Multi-Agent Challenge</a></td>
        <td><details><summary>Abstract</summary><div>In the last few years, deep multi-agent reinforcement learning (RL) has become a highly active area of research. A particularly challenging class of problems in this area is partially observable, cooperative, multi-agent learning, in which teams of agents must learn to coordinate their behaviour while conditioning only on their private observations. This is an attractive research area since such problems are relevant to a large number of real-world systems and are also more amenable to evaluation than general-sum problems.Standardised environments such as the ALE and MuJoCo have allowed singleagent RL to move beyond toy domains, such as grid worlds. However, there is no comparable benchmark for cooperative multi-agent RL. As a result, most papers in this field use one-off toy problems, making it difficult to measure real progress. In this paper, we propose the StarCraft Multi-Agent Challenge (SMAC) as a benchmark problem to fill this gap.1 SMAC is based on the popular real-time strategy game StarCraft II and focuses on micromanagement challenges where each unit is controlled by an independent agent that must act based on local observations. We offer a diverse set of challenge scenarios and recommendations for best practices in benchmarking and evaluations. We also open-source a deep multi-agent RL learning framework including state-of-the-art algorithms.2 We believe that SMAC can provide a standard benchmark environment for years to come. Videos of our best agents for several SMAC scenarios are available at: https://youtu.be/VZ7zmQ_obZ0.</div></details></td>
        <td>reward</td>
        <td><a href="https://github.com/PaddlePaddle/PARL/tree/develop/examples/QMIX">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>21</td>
        <td>Prioritized_DQN</td>
        <td><a href="https://paperswithcode.com/paper/prioritized-experience-replay">Prioritized Experience Replay</a></td>
        <td><details><summary>Abstract</summary><div>Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.</div></details></td>
        <td>reward</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
</table>
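
上表中 DQN、Double DQN(DQN_variant)与 TD3 等算法的共同出发点是抑制 Q 值高估:Double DQN 用一套网络选动作、另一套网络估值,TD3 进一步取一对 critic 估值的较小者。下面给出一个仅作示意的 numpy 草图(函数名 `clipped_double_q_target` 为本文假设的命名,并非 PARL 的官方实现),演示 TD3 式 clipped double-Q 目标值的计算思路;完整训练流程请以各行「快速开始」链接指向的 PARL 示例为准。

```python
import numpy as np

def clipped_double_q_target(reward, done, q1_next, q2_next, gamma=0.99):
    """TD3 式 clipped double-Q 目标:取两个目标 critic 估值的较小者以抑制高估。
    reward/done: 形状 (batch,) 的奖励与终止标志;q1_next/q2_next: 两个 critic 对下一状态-动作的估值。"""
    min_q = np.minimum(q1_next, q2_next)          # 取较小估值,缓解过估计
    return reward + gamma * (1.0 - done) * min_q  # 终止状态不再 bootstrap

# 用随机数据演示一次批量目标值计算
rng = np.random.default_rng(0)
reward = rng.normal(size=4)
done = np.array([0.0, 0.0, 1.0, 0.0])
print(clipped_double_q_target(reward, done, rng.normal(size=4), rng.normal(size=4)))
```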

### PGL
<table>
    <tr>
        <th>序号</th>
        <th>模型简称</th>
        <th>论文名称(链接)</th>
        <th>摘要</th>
        <th>数据集</th>
        <th width='10%'>快速开始</th>
        </th>
    </tr>
    <tr>
        <td>1</td>
        <td>GaAN</td>
        <td><a href="https://paperswithcode.com/paper/gaan-gated-attention-networks-for-learning-on#code">GaAN: Gated Attention Networks for Learning on Large and Spatiotemporal Graphs</a></td>
        <td><details><summary>Abstract</summary><div>We propose a new network architecture, Gated Attention Networks (GaAN), for learning on graphs. Unlike the traditional multi-head attention mechanism, which equally consumes all attention heads, GaAN uses a convolutional sub-network to control each attention head's importance. We demonstrate the effectiveness of GaAN on the inductive node classification problem. Moreover, with GaAN as a building block, we construct the Graph Gated Recurrent Unit (GGRU) to address the traffic speed forecasting problem. Extensive experiments on three real-world datasets show that our GaAN framework achieves state-of-the-art results on both tasks.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>2</td>
        <td>stgcn</td>
        <td><a href="https://paperswithcode.com/paper/spatio-temporal-graph-convolutional-networks">Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting</a></td>
        <td><details><summary>Abstract</summary><div>Timely accurate traffic forecast is crucial for urban traffic control and guidance. Due to the high nonlinearity and complexity of traffic flow, traditional methods cannot satisfy the requirements of mid-and-long term prediction tasks and often neglect spatial and temporal dependencies. In this paper, we propose a novel deep learning framework, Spatio-Temporal Graph Convolutional Networks (STGCN), to tackle the time series prediction problem in traffic domain. Instead of applying regular convolutional and recurrent units, we formulate the problem on graphs and build the model with complete convolutional structures, which enable much faster training speed with fewer parameters. Experiments show that our model STGCN effectively captures comprehensive spatio-temporal correlations through modeling multi-scale traffic networks and consistently outperforms state-of-the-art baselines on various real-world traffic datasets</div></details></td>
        <td>暂无</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>3</td>
        <td>graphsage</td>
        <td><a href="https://paperswithcode.com/paper/inductive-representation-learning-on-large">Inductive Representation Learning on Large Graphs</a></td>
        <td><details><summary>Abstract</summary><div>Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>4</td>
        <td>metapath2vec</td>
        <td><a href="https://paperswithcode.com/paper/metapath2vec-scalable-representation-learning">metapath2vec: Scalable Representation Learning for Heterogeneous Networks</a></td>
        <td><details><summary>Abstract</summary><div>We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>5</td>
        <td>SAGPool</td>
        <td><a href="https://paperswithcode.com/paper/self-attention-graph-pooling">Self-Attention Graph Pooling</a></td>
        <td><details><summary>Abstract</summary><div>Advanced methods of applying deep learning to structured data such as graphs have been proposed in recent years. In particular, studies have focused on generalizing convolutional neural networks to graph data, which includes redefining the convolution and the downsampling (pooling) operations for graphs. The method of generalizing the convolution operation to graphs has been proven to improve performance and is widely used. However, the method of applying downsampling to graphs is still difficult to perform and has room for improvement. In this paper, we propose a graph pooling method based on self-attention. Self-attention using graph convolution allows our pooling method to consider both node features and graph topology. To ensure a fair comparison, the same training procedures and model architectures were used for the existing pooling methods and our method. The experimental results demonstrate that our method achieves superior graph classification performance on the benchmark datasets using a reasonable number of parameters.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>6</td>
        <td>line</td>
        <td><a href="https://paperswithcode.com/paper/line-large-scale-information-network">LINE: Large-scale Information Network Embedding</a></td>
        <td><details><summary>Abstract</summary><div>This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the "LINE," which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>7</td>
        <td>dgi</td>
        <td><a href="https://paperswithcode.com/paper/deep-graph-infomax">Deep Graph Infomax</a></td>
        <td><details><summary>Abstract</summary><div>We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs---both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>8</td>
        <td>sgc</td>
        <td><a href="https://paperswithcode.com/paper/simplifying-graph-convolutional-networks">Simplifying Graph Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>9</td>
        <td>gcn</td>
        <td><a href="https://paperswithcode.com/method/gcn">Semi-Supervised Classification with Graph Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>10</td>
        <td>gin</td>
        <td><a href="https://paperswithcode.com/paper/how-powerful-are-graph-neural-networks">How Powerful are Graph Neural Networks?</a></td>
        <td><details><summary>Abstract</summary><div>Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>11</td>
        <td>strucvec</td>
        <td><a href="https://paperswithcode.com/paper/struc2vec-learning-node-representations-from">struc2vec: Learning Node Representations from Structural Identity</a></td>
        <td><details><summary>Abstract</summary><div>Structural identity is a concept of symmetry in which network nodes are identified according to the network structure and their relationship to other nodes. Structural identity has been studied in theory and practice over the past decades, but only recently has it been addressed with representational learning techniques. This work presents struc2vec, a novel and flexible framework for learning latent representations for the structural identity of nodes. struc2vec uses a hierarchy to measure node similarity at different scales, and constructs a multilayer graph to encode structural similarities and generate structural context for nodes. Numerical experiments indicate that state-of-the-art techniques for learning node representations fail in capturing stronger notions of structural identity, while struc2vec exhibits much superior performance in this task, as it overcomes limitations of prior approaches. As a consequence, numerical experiments indicate that struc2vec improves performance on classification tasks that depend more on structural identity.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>12</td>
        <td>node2vec</td>
        <td><a href="https://paperswithcode.com/paper/node2vec-scalable-feature-learning-for">node2vec: Scalable Feature Learning for Networks</a></td>
        <td><details><summary>Abstract</summary><div>Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.</div></details></td>
        <td>MacroF1</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>13</td>
        <td>GATNE</td>
        <td><a href="https://paperswithcode.com/paper/190501669">Representation Learning for Attributed Multiplex Heterogeneous Network</a></td>
        <td><details><summary>Abstract</summary><div>Network embedding (or graph embedding) has been widely used in many real-world applications. However, existing methods mainly focus on networks with single-typed nodes/edges and cannot scale well to handle large networks. Many real-world networks consist of billions of nodes and edges of multiple types, and each node is associated with different attributes. In this paper, we formalize the problem of embedding learning for the Attributed Multiplex Heterogeneous Network and propose a unified framework to address this problem. The framework supports both transductive and inductive learning. We also give the theoretical analysis of the proposed framework, showing its connection with previous works and proving its better expressiveness. We conduct systematical evaluations for the proposed framework on four different genres of challenging datasets: Amazon, YouTube, Twitter, and Alibaba. Experimental results demonstrate that with the learned embeddings from the proposed framework, we can achieve statistically significant improvements (e.g., 5.99-28.23% lift by F1 scores; p<<0.01, t-test) over previous state-of-the-art methods for link prediction. The framework has also been successfully deployed on the recommendation system of a worldwide leading e-commerce company, Alibaba Group. Results of the offline A/B tests on product recommendation further confirm the effectiveness and efficiency of the framework in practice.</div></details></td>
        <td>AUC</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>14</td>
        <td>deeper_gcn</td>
        <td><a href="无">DeeperGCN: All You Need to Train Deeper GCNs</a></td>
        <td><details><summary>Abstract</summary><div>Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs. Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper. These challenges limit the representation power of GCNs on large-scale graphs. This paper proposes DeeperGCN that is capable of successfully and reliably training very deep GCNs. We define differentiable generalized aggregation functions to unify different message aggregation operations (e.g. mean, max). We also propose a novel normalization layer namely MsgNorm and a pre-activation version of residual connections for GCNs. Extensive experiments on Open Graph Benchmark (OGB) show DeeperGCN significantly boosts performance over the state-of-the-art on the large scale graph learning tasks of node property prediction and graph property prediction. Please visit this https URL for more information.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>15</td>
        <td>ges</td>
        <td><a href="https://paperswithcode.com/paper/billion-scale-commodity-embedding-for-e">Billion-scale Commodity Embedding for E-commerce Recommendation in Alibaba</a></td>
        <td><details><summary>Abstract</summary><div>Recommender systems (RSs) have been the most important technology for increasing the business in Taobao, the largest online consumer-to-consumer (C2C) platform in China. The billion-scale data in Taobao creates three major challenges to Taobao's RS: scalability, sparsity and cold start. In this paper, we present our technical solutions to address these three challenges. The methods are based on the graph embedding framework. We first construct an item graph from users' behavior history. Each item is then represented as a vector using graph embedding. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold start problems, side information is incorporated into the embedding framework. We propose two aggregation methods to integrate the embeddings of items and the corresponding side information. Experimental results from offline experiments show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow to process the billion-scale data in Taobao. Using online A/B test, we show that the online Click-Through-Rate (CTRs) are improved comparing to the previous recommendation methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment.</div></details></td>
        <td>暂无</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>16</td>
        <td>gat</td>
        <td><a href="https://paperswithcode.com/paper/graph-attention-networks">Graph Attention Networks</a></td>
        <td><details><summary>Abstract</summary><div>We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>17</td>
        <td>deepwalk</td>
        <td><a href="https://paperswithcode.com/paper/deepwalk-online-learning-of-social">DeepWalk: Online Learning of Social Representations</a></td>
        <td><details><summary>Abstract</summary><div>We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.</div></details></td>
        <td>MacroF1</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>18</td>
        <td>MAG240M</td>
        <td><a href="https://paperswithcode.com/paper/masked-label-prediction-unified-massage">Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification</a></td>
        <td><details><summary>Abstract</summary><div>Graph neural network (GNN) and label propagation algorithm (LPA) are both message passing algorithms, which have achieved superior performance in semi-supervised classification. GNN performs feature propagation by a neural network to make predictions, while LPA uses label propagation across graph adjacency matrix to get results. However, there is still no effective way to directly combine these two kinds of algorithms. To address this issue, we propose a novel Unified Message Passaging Model (UniMP) that can incorporate feature and label propagation at both training and inference time. First, UniMP adopts a Graph Transformer network, taking feature embedding and label embedding as input information for propagation. Second, to train the network without overfitting in self-loop input label information, UniMP introduces a masked label prediction strategy, in which some percentage of input label information are masked at random, and then predicted. UniMP conceptually unifies feature propagation and label propagation and is empirically powerful. It obtains new state-of-the-art semi-supervised classification results in Open Graph Benchmark (OGB).</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>19</td>
        <td>lightgcn</td>
        <td><a href="https://paperswithcode.com/paper/lightgcn-simplifying-and-powering-graph">LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation</a></td>
        <td><details><summary>Abstract</summary><div>Graph Convolution Network (GCN) has become new state-of-the-art for collaborative filtering. Nevertheless, the reasons of its effectiveness for recommendation are not well understood. Existing work that adapts GCN to recommendation lacks thorough ablation analyses on GCN, which is originally designed for graph classification tasks and equipped with many neural network operations. However, we empirically find that the two most common designs in GCNs -- feature transformation and nonlinear activation -- contribute little to the performance of collaborative filtering. Even worse, including them adds to the difficulty of training and degrades recommendation performance.In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation. We propose a new model named LightGCN, including only the most essential component in GCN -- neighborhood aggregation -- for collaborative filtering. Specifically, LightGCN learns user and item embeddings by linearly propagating them on the user-item interaction graph, and uses the weighted sum of the embeddings learned at all layers as the final embedding. Such simple, linear, and neat model is much easier to implement and train, exhibiting substantial improvements (about 16.0\% relative improvement on average) over Neural Graph Collaborative Filtering (NGCF) -- a state-of-the-art GCN-based recommender model -- under exactly the same experimental setting. Further analyses are provided towards the rationality of the simple LightGCN from both analytical and empirical perspectives</div></details></td>
        <td>暂无</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>20</td>
        <td>ngcf</td>
        <td><a href="https://paperswithcode.com/paper/neural-graph-collaborative-filtering">Neural Graph Collaborative Filtering</a></td>
        <td><details><summary>Abstract</summary><div>Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect.In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Codes are available at this https URL.</div></details></td>
        <td>暂无</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>21</td>
        <td>rgcn</td>
        <td><a href="https://paperswithcode.com/paper/modeling-relational-data-with-graph">Modeling Relational Data with Graph Convolutional Networks</a></td>
        <td><details><summary>Abstract</summary><div>Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: Link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to deal with the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved by enriching them with an encoder model to accumulate evidence over multiple inference steps in the relational graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>22</td>
        <td>ssgc</td>
        <td><a href="https://paperswithcode.com/paper/simple-spectral-graph-convolution">Simple Spectral Graph Convolution</a></td>
        <td><details><summary>Abstract</summary><div>Graph Convolutional Networks (GCNs) are leading methods for learning graph representations. However, without specially designed architectures, the performance of GCNs degrades quickly with increased depth. As the aggregated neighborhood size and neural network depth are two completely orthogonal aspects of graph representation, several methods focus on summarizing the neighborhood by aggregating K-hop neighborhoods of nodes while using shallow neural networks. However, these methods still encounter oversmoothing, and suffer from high computation and storage costs. In this paper, we use a modified Markov Diffusion Kernel to derive a variant of GCN called Simple Spectral Graph Convolution (SSGC). Our spectral analysis shows that our simple spectral graph convolution used in SSGC is a trade-off of low- and high-pass filter bands which capture the global and local contexts of each node. We provide two theoretical claims which demonstrate that we can aggregate over a sequence of increasingly larger neighborhoods compared to competitors while limiting severe oversmoothing.  Our experimental evaluations show that SSGC with a linear learner is competitive in text and node classification tasks. Moreover, SSGC is comparable to other state-of-the-art methods for node clustering and community prediction tasks.</div></details></td>
        <td>Acc</td>
        <td><a href="">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>23</td>
        <td>GMT</td>
        <td><a href="https://paperswithcode.com/paper/accurate-learning-of-graph-representations-1">Accurate Learning of Graph Representations with Graph Multiset Pooling</a></td>
        <td><details><summary>Abstract</summary><div>Graph neural networks have been widely used on modeling graph data, achieving impressive results on node classification and link prediction tasks. Yet, obtaining an accurate representation for a graph further requires a pooling function that maps a set of node representations into a compact form. A simple sum or average over all node representations considers all node features equally without consideration of their task relevance, and any structural dependencies among them. Recently proposed hierarchical graph pooling methods, on the other hand, may yield the same representation for two different graphs that are distinguished by the Weisfeiler-Lehman test, as they suboptimally preserve information from the node features. To tackle these limitations of existing graph pooling methods, we first formulate the graph pooling problem as a multiset encoding problem with auxiliary information about the graph structure, and propose a Graph Multiset Transformer (GMT) which is a multi-head attention based global pooling layer that captures the interaction between nodes according to their structural dependencies. We show that GMT satisfies both injectiveness and permutation invariance, such that it is at most as powerful as the Weisfeiler-Lehman graph isomorphism test. Moreover, our methods can be easily extended to the previous node clustering approaches for hierarchical graph pooling. Our experimental results show that GMT significantly outperforms state-of-the-art graph pooling methods on graph classification benchmarks with high memory and time efficiency, and obtains even larger performance gain on graph reconstruction and generation tasks.</div></details></td>
        <td>Acc</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>24</td>
        <td>Set2Set</td>
        <td><a href="https://paperswithcode.com/paper/order-matters-sequence-to-sequence-for-sets">Order Matters: Sequence to sequence for sets</a></td>
        <td><details><summary>Abstract</summary><div>Sequences have become first class citizens in supervised learning thanks to the resurgence of recurrent neural networks. Many complex tasks that require mapping from or to a sequence of observations can now be formulated with the sequence-to-sequence (seq2seq) framework which employs the chain rule to efficiently represent the joint probability of sequences. In many cases, however, variable sized inputs and/or outputs might not be naturally expressed as sequences. For instance, it is not clear how to input a set of numbers into a model where the task is to sort them; similarly, we do not know how to organize outputs when they correspond to random variables and the task is to model their unknown joint probability. In this paper, we first show using various examples that the order in which we organize input and/or output data matters significantly when learning an underlying model. We then discuss an extension of the seq2seq framework that goes beyond sequences and handles input sets in a principled way. In addition, we propose a loss which, by searching over possible orders during training, deals with the lack of structure of output sets. We show empirical evidence of our claims regarding ordering, and on the modifications to the seq2seq framework on benchmark language modeling and parsing tasks, as well as two artificial tasks -- sorting numbers and estimating the joint probability of unknown graphical models.</div></details></td>
        <td>Acc</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>25</td>
        <td>gPool</td>
        <td><a href="https://paperswithcode.com/paper/gated-graph-sequence-neural-networks">Gated Graph Sequence Neural Networks</a></td>
        <td><details><summary>Abstract</summary><div>Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.</div></details></td>
        <td>Acc</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
    <tr>
        <td>26</td>
        <td>GPR</td>
        <td><a href="https://paperswithcode.com/paper/joint-adaptive-feature-smoothing-and-topology">Adaptive Universal Generalized PageRank Graph Neural Network</a></td>
        <td><details><summary>Abstract</summary><div>In many important graph data processing applications the acquired information includes both node features and observations of the graph topology. Graph neural networks (GNNs) are designed to exploit both sources of evidence but they do not optimally trade-off their utility and integrate them in a manner that is also universal. Here, universality refers to independence on homophily or heterophily graph assumptions. We address these issues by introducing a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic. Learned GPR weights automatically adjust to the node label pattern, irrelevant on the type of initialization, and thereby guarantee excellent learning performance for label patterns that are usually hard to handle. Furthermore, they allow one to avoid feature over-smoothing, a process which renders feature information nondiscriminative, without requiring the network to be shallow. Our accompanying theoretical analysis of the GPR-GNN method is facilitated by novel synthetic benchmark datasets generated by the so-called contextual stochastic block model. We also compare the performance of our GNN architecture with that of several state-of-the-art GNNs on the problem of node-classification, using well-known benchmark homophilic and heterophilic datasets. The results demonstrate that GPR-GNN offers significant performance improvement compared to existing techniques on both synthetic and benchmark data.</div></details></td>
        <td>Acc</td>
        <td><a href="暂无">快速开始</a></td>
        </td>
    </tr>
</table>
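
上表中 gcn、sgc、ssgc 等模型都建立在同一个对称归一化的消息传递公式之上:H' = σ(D^-1/2 (A+I) D^-1/2 H W)。下面是一个仅作示意的 numpy 草图(函数名 `gcn_layer` 为本文假设的命名,并非 PGL 的 API),用于帮助理解这一传播规则;完整实现请参考 PGL 仓库中的对应示例。

```python
import numpy as np

def gcn_layer(adj, feat, weight):
    """单层 GCN 传播:H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)。
    adj: (N, N) 邻接矩阵;feat: (N, F) 节点特征;weight: (F, F') 参数矩阵。"""
    a_hat = adj + np.eye(adj.shape[0])             # 加自环
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # 节点度的 -1/2 次方
    norm_adj = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # 对称归一化
    return np.maximum(norm_adj @ feat @ weight, 0)  # 邻居聚合 + 线性变换 + ReLU

# 4 个节点的环状图示例,输出形状为 (4, 2)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feat = np.random.default_rng(0).normal(size=(4, 3))
weight = np.random.default_rng(1).normal(size=(3, 2))
print(gcn_layer(adj, feat, weight).shape)
```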