# Datasets

**Classic datasets are added regularly; if you find them useful, please give this repo a star.**

## Classification Datasets

### Cats vs. Dogs Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150816.png)

Link: https://pan.baidu.com/s/1hESO4OI_i0FjmHnkbqt-Hw
Extraction code: czla

### Imagenette Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150841.png)

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150859.png)

Link: https://pan.baidu.com/s/1D8ECW0G-C4mXpC7h6kYsMA
Extraction code: a45w

### Natural Scene Classification Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150915.png)

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150933.png)

Link: https://pan.baidu.com/s/1YudPaWyzd0NUloePDpW8oA
Extraction code: ipjp

### Food-101 Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150950.png)

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151006.png)

Link: https://pan.baidu.com/s/1iouChBMnfSg63qC2u7PU_g
Extraction code: 396f

### Fashion MNIST Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151031.png)

Link: https://pan.baidu.com/s/1rOXZ9IANyaIopsQr9gDpmQ
Extraction code: rnf3

### MINC-2500 Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151048.png)

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151106.png)

Link: https://pan.baidu.com/s/1Tjhb3hEClFAPWUz-0gfZRQ
Extraction code: qtsa

### CIFAR-10 Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151121.png)

Link: https://pan.baidu.com/s/15LQPvcW0EkEEjN_2Lu2T3g
Extraction code: 956t
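
The archives above are plain image collections. A minimal PyTorch loading sketch, assuming the archive you downloaded extracts into one subfolder per class (the `data/cats_vs_dogs/train` path below is hypothetical; check the actual layout inside each archive):

```python
import torch
from torchvision import datasets, transforms

# Unify image sizes so samples can be batched, then convert to CHW float tensors.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder infers class labels from the subfolder names (e.g. cat/, dog/).
train_set = datasets.ImageFolder("data/cats_vs_dogs/train", transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape, train_set.classes)  # e.g. torch.Size([32, 3, 224, 224]) ['cat', 'dog']
```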

# Papers

## 3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding

The ability to understand how to interact with objects from visual cues, a.k.a. visual affordance, is essential to vision-guided robotic research, and involves categorizing, segmenting, and reasoning about visual affordance. Relevant studies have previously been made in the 2D and 2.5D image domains; however, a truly functional understanding of object affordance requires learning and prediction in the 3D physical domain, which is still absent in the community. In this work, we present the 3D AffordanceNet dataset, a benchmark of 23k shapes from 23 semantic object categories, annotated with 18 visual affordance categories. Based on this dataset, we provide three benchmark tasks for evaluating visual affordance understanding: full-shape, partial-view, and rotation-invariant affordance estimation. Three state-of-the-art point cloud deep learning networks are evaluated on all tasks. In addition, we investigate a semi-supervised learning setup to explore the possibility of benefiting from unlabeled data. Comprehensive results on our contributed dataset show the promise of visual affordance understanding as a valuable yet challenging benchmark.
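
Benchmarks like this are commonly framed as per-point, multi-label scoring: each point of a shape may support several of the 18 affordances at once. A hypothetical sketch of such a prediction head and loss, assuming a point-cloud backbone that already produces 128-dim per-point features (this illustrates the task framing only, not the paper's code):

```python
import torch
import torch.nn as nn

NUM_AFFORDANCES = 18  # affordance categories in the benchmark

# One independent logit per point per affordance, trained as multi-label
# prediction with binary cross-entropy over soft per-point scores.
head = nn.Linear(128, NUM_AFFORDANCES)
criterion = nn.BCEWithLogitsLoss()

features = torch.randn(4, 2048, 128)            # 4 shapes, 2048 points each (assumed sizes)
targets = torch.rand(4, 2048, NUM_AFFORDANCES)  # toy ground-truth scores in [0, 1]

logits = head(features)                         # (4, 2048, 18)
loss = criterion(logits, targets)
print(logits.shape, float(loss))
```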

## ACTION-Net: Multipath Excitation for Action Recognition

Spatio-temporal, channel-wise, and motion patterns are three complementary and crucial types of information for video action recognition. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNNs can achieve good performance but are computationally intensive. In this work, we tackle this dilemma by designing a generic and effective module that can be embedded into 2D CNNs. To this end, we propose a spAtiotemporal, Channel and moTion excitatION (ACTION) module consisting of three paths: a Spatio-Temporal Excitation (STE) path, a Channel Excitation (CE) path, and a Motion Excitation (ME) path. The STE path employs a single-channel 3D convolution to characterize the spatio-temporal representation. The CE path adaptively recalibrates channel-wise feature responses by explicitly modeling interdependencies between channels along the temporal dimension. The ME path calculates feature-level temporal differences, which are then used to excite motion-sensitive channels. We equip 2D CNNs with the proposed ACTION module to form a simple yet effective ACTION-Net with very limited extra computational cost. ACTION-Net consistently outperforms its 2D CNN counterparts on three backbones (i.e., ResNet-50, MobileNet V2, and BNInception) across three datasets (i.e., Something-Something V2, Jester, and EgoGesture). Code is available at [https://github.com/V-Sense/ACTION-Net](https://github.com/V-Sense/ACTION-Net).

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210608112105.png)
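
As a concrete reading of the STE path described above: average the C input channels into a single-channel spatio-temporal map, run one single-channel 3D convolution over it, and gate the input with the sigmoid of the result. The sketch below is an interpretation of the abstract, not the authors' code (the kernel size and the residual combination are assumptions; see the linked repository for the reference implementation):

```python
import torch
import torch.nn as nn

class STE(nn.Module):
    """Spatio-Temporal Excitation path (illustrative sketch)."""

    def __init__(self):
        super().__init__()
        # A single-channel 3D convolution (3x3x3 kernel size is an assumption).
        self.conv3d = nn.Conv3d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        # x: (N, C, T, H, W) -- a clip of T frames with C feature channels.
        attn = x.mean(dim=1, keepdim=True)       # (N, 1, T, H, W) channel-averaged map
        attn = torch.sigmoid(self.conv3d(attn))  # spatio-temporal excitation weights
        return x + x * attn                      # residual excitation (assumed combination)

clip = torch.randn(2, 64, 8, 56, 56)  # toy input: N=2, C=64, T=8, 56x56 frames
print(STE()(clip).shape)              # torch.Size([2, 64, 8, 56, 56])
```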