# Datasets

**Classic datasets are added to this repo regularly. If you find it useful, please give it a star.**

## Classification Datasets

### Cat and Dog Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150816.png)

Link: https://pan.baidu.com/s/1hESO4OI_i0FjmHnkbqt-Hw Extraction code: czla

### Imagenette Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150841.png)

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150859.png)

Link: https://pan.baidu.com/s/1D8ECW0G-C4mXpC7h6kYsMA Extraction code: a45w

### Natural Scene Classification Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150915.png)

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150933.png)

Link: https://pan.baidu.com/s/1YudPaWyzd0NUloePDpW8oA Extraction code: ipjp

### Food-101 Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603150950.png)

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151006.png)

Link: https://pan.baidu.com/s/1iouChBMnfSg63qC2u7PU_g Extraction code: 396f

### Fashion MNIST Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151031.png)

Link: https://pan.baidu.com/s/1rOXZ9IANyaIopsQr9gDpmQ Extraction code: rnf3

### MINC-2500 Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151048.png)

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151106.png)

Link: https://pan.baidu.com/s/1Tjhb3hEClFAPWUz-0gfZRQ Extraction code: qtsa

### CIFAR-10 Dataset

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210603151121.png)

Link: https://pan.baidu.com/s/15LQPvcW0EkEEjN_2Lu2T3g Extraction code: 956t

# Papers

## 3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding

The ability to understand how to interact with objects from visual cues, a.k.a. visual affordance, is essential to vision-guided robotic research. This involves categorizing, segmenting, and reasoning about visual affordance. Previous studies have addressed the 2D and 2.5D image domains; however, a truly functional understanding of object affordance requires learning and prediction in the 3D physical domain, which is still absent in the community. In this work, we present the 3D AffordanceNet dataset, a benchmark of 23k shapes from 23 semantic object categories, annotated with 18 visual affordance categories. Based on this dataset, we provide three benchmarking tasks for evaluating visual affordance understanding: full-shape, partial-view, and rotation-invariant affordance estimation. Three state-of-the-art point cloud deep learning networks are evaluated on all tasks. In addition, we investigate a semi-supervised learning setup to explore the possibility of benefiting from unlabeled data. Comprehensive results on our contributed dataset show the promise of visual affordance understanding as a valuable yet challenging benchmark.
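To make the per-point prediction format concrete, here is a minimal PyTorch sketch of a multi-label affordance head sitting on top of an arbitrary point cloud backbone. This is not the paper's code: `AffordanceHead`, the 128-dimensional feature size, and the toy input are illustrative assumptions; only the 18 affordance categories come from the dataset description above.

```python
# Minimal sketch (an assumption, not the paper's released code): per-point
# multi-label affordance scoring on top of an arbitrary point cloud backbone.
import torch
import torch.nn as nn


class AffordanceHead(nn.Module):
    """Maps per-point features to independent scores for 18 affordance categories."""

    def __init__(self, feat_dim: int = 128, num_affordances: int = 18):
        super().__init__()
        # Shared point-wise MLP implemented with 1x1 convolutions.
        self.mlp = nn.Sequential(
            nn.Conv1d(feat_dim, 64, kernel_size=1),
            nn.BatchNorm1d(64),
            nn.ReLU(inplace=True),
            nn.Conv1d(64, num_affordances, kernel_size=1),
        )

    def forward(self, point_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (batch, feat_dim, num_points) from any point cloud
        # backbone, e.g. one of the evaluated point cloud networks.
        logits = self.mlp(point_feats)   # (batch, 18, num_points)
        return torch.sigmoid(logits)     # independent per-point scores


if __name__ == "__main__":
    # Toy usage: random features standing in for a backbone's per-point output.
    feats = torch.randn(2, 128, 2048)    # 2 shapes, 2048 points each
    scores = AffordanceHead()(feats)
    print(scores.shape)                  # torch.Size([2, 18, 2048])
```

Sigmoid rather than softmax is used here on the assumption that a single point may support more than one affordance at once.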
## ACTION-Net: Multipath Excitation for Action Recognition

Spatio-temporal, channel-wise, and motion patterns are three complementary and crucial types of information for video action recognition. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNNs achieve good performance but are computationally intensive. In this work, we tackle this dilemma by designing a generic and effective module that can be embedded into 2D CNNs. To this end, we propose a spAtiotemporal, Channel and moTion excitatION (ACTION) module consisting of three paths: a Spatio-Temporal Excitation (STE) path, a Channel Excitation (CE) path, and a Motion Excitation (ME) path. The STE path employs a single-channel 3D convolution to characterize the spatio-temporal representation. The CE path adaptively recalibrates channel-wise feature responses by explicitly modeling interdependencies between channels along the temporal dimension. The ME path calculates feature-level temporal differences, which are then used to excite motion-sensitive channels. We equip 2D CNNs with the proposed ACTION module to form a simple yet effective ACTION-Net with very limited extra computational cost. ACTION-Net is shown to consistently outperform its 2D CNN counterparts on three backbones (ResNet-50, MobileNet V2, and BNInception) across three datasets (Something-Something V2, Jester, and EgoGesture). Code is available at [https://github.com/V-Sense/ACTION-Net](https://github.com/V-Sense/ACTION-Net).

![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210608112105.png)
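As an illustration of the Motion Excitation idea described above (feature-level temporal differences used to excite motion-sensitive channels), below is a simplified, self-contained sketch. It is not the released implementation; the class name `MotionExcitation`, the squeeze ratio, and the zero-padding of the last time step are my own assumptions, so refer to the official repository for the actual module.

```python
# Simplified sketch of a motion-excitation style gate. Written for illustration
# only; the official ACTION-Net code differs in its details.
import torch
import torch.nn as nn


class MotionExcitation(nn.Module):
    """Excites motion-sensitive channels using feature-level temporal differences."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.squeeze = nn.Conv2d(channels, mid, kernel_size=1)           # channel reduction
        self.transform = nn.Conv2d(mid, mid, 3, padding=1, groups=mid)   # depthwise 3x3
        self.expand = nn.Conv2d(mid, channels, kernel_size=1)            # back to full width
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor, num_segments: int) -> torch.Tensor:
        # x: (batch * num_segments, channels, h, w), the layout used by 2D CNNs
        # that process a video clip as a stack of frames.
        nt, c, h, w = x.shape
        n = nt // num_segments
        feat = self.squeeze(x).view(n, num_segments, -1, h, w)
        cur = feat[:, :-1].reshape(-1, feat.size(2), h, w)   # frames t
        nxt = feat[:, 1:].reshape(-1, feat.size(2), h, w)    # frames t + 1
        diff = self.transform(nxt) - cur                     # feature-level temporal difference
        diff = diff.view(n, num_segments - 1, -1, h, w)
        # Pad the last time step with zeros so the temporal length matches the input.
        diff = torch.cat([diff, diff.new_zeros(n, 1, diff.size(2), h, w)], dim=1)
        gate = torch.sigmoid(self.expand(self.pool(diff.reshape(nt, -1, h, w))))
        return x + x * gate  # residual excitation of motion-sensitive channels


if __name__ == "__main__":
    # Toy usage: 2 clips, 8 segments each, 64-channel 56x56 feature maps.
    me = MotionExcitation(channels=64)
    x = torch.randn(2 * 8, 64, 56, 56)
    print(me(x, num_segments=8).shape)  # torch.Size([16, 64, 56, 56])
```

The gate is applied residually (`x + x * gate`) so the excitation can only re-weight the backbone features, never erase them, which keeps the extra computational cost of plugging it into a 2D CNN small.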