
    Swin Transformer


    By Ze Liu*, Yutong Lin*, Yue Cao*, Han Hu*, Yixuan Wei, Zheng Zhang, Stephen Lin and Baining Guo.

    This repo is the official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". It currently includes code and models for the following tasks:

    Image Classification: Included in this repo. See get_started.md for a quick start.

    Object Detection and Instance Segmentation: See Swin Transformer for Object Detection.

    Semantic Segmentation: See Swin Transformer for Semantic Segmentation.

    Video Action Recognition: See Video Swin Transformer.

    Semi-Supervised Object Detection: See Soft Teacher.

SSL: Contrastive Learning: See Transformer-SSL.

    🔥 SSL: Masked Image Modeling: See SimMIM.

    Updates

    03/02/2022

News: Swin Transformer V2 and SimMIM were accepted by CVPR 2022. SimMIM is a self-supervised pre-training approach based on masked image modeling; it is a key technique that enables training the 3-billion-parameter Swin V2 model with 40x less labelled data than previous billion-scale models based on JFT-3B.

    02/09/2022

News: Swin Transformer has been integrated into Hugging Face Spaces 🤗 using Gradio. Try out the Web Demo.

    10/12/2021

News: Swin Transformer received the ICCV 2021 Best Paper Award (Marr Prize).

    08/09/2021

1. Soft Teacher will appear at ICCV 2021. The code will be released in the GitHub Repo. Soft Teacher is an end-to-end semi-supervised object detection method, achieving a new record on COCO test-dev: 61.3 box AP and 53.0 mask AP.

    07/03/2021

1. Added Swin MLP, an adaptation of Swin Transformer that replaces all multi-head self-attention (MHSA) blocks with MLP layers (more precisely, group linear layers); a sketch of the idea follows below. The shifted window configuration also significantly improves the performance of vanilla MLP architectures.
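As a rough illustration (not the repository's exact code), the grouped spatial MLP can be written as a grouped 1x1 Conv1d that mixes the token positions of each local window separately per channel group, standing in for window self-attention. All sizes below are hypothetical Swin-T-like values.

```python
import torch
import torch.nn as nn

num_heads, tokens, C = 3, 49, 96      # 7x7 window tokens; illustrative channel count
# One group linear layer: each of the num_heads channel groups gets its own
# learned mixing of the 49 token positions within a window.
spatial_mlp = nn.Conv1d(num_heads * tokens, num_heads * tokens, kernel_size=1, groups=num_heads)

x = torch.randn(8, tokens, C)                                   # (num_windows, tokens, channels)
x = x.view(8, tokens, num_heads, C // num_heads)
x = x.transpose(1, 2).reshape(8, num_heads * tokens, C // num_heads)
x = spatial_mlp(x)                                              # mix tokens within each group
x = x.view(8, num_heads, tokens, C // num_heads).transpose(1, 2).reshape(8, tokens, C)
```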

    06/25/2021

    1. Video Swin Transformer is released at Video-Swin-Transformer. Video Swin Transformer achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including action recognition (84.9 top-1 accuracy on Kinetics-400 and 86.1 top-1 accuracy on Kinetics-600 with ~20x less pre-training data and ~3x smaller model size) and temporal modeling (69.6 top-1 accuracy on Something-Something v2).

    05/12/2021

    1. Used as a backbone for Self-Supervised Learning: Transformer-SSL

Using Swin Transformer as the backbone for self-supervised learning lets us evaluate the transfer performance of the learnt representations on downstream tasks; this was missing in previous works because they used ViT/DeiT, which had not been well adapted to downstream tasks.

    04/12/2021

    Initial commits:

    1. Pretrained models on ImageNet-1K (Swin-T-IN1K, Swin-S-IN1K, Swin-B-IN1K) and ImageNet-22K (Swin-B-IN22K, Swin-L-IN22K) are provided.
    2. The supported code and models for ImageNet-1K image classification, COCO object detection and ADE20K semantic segmentation are provided.
3. The CUDA kernel implementation for the local relation layer is provided in the LR-Net branch.

    Introduction

Swin Transformer (the name Swin stands for Shifted windows) was initially described in the arXiv paper and capably serves as a general-purpose backbone for computer vision. It is basically a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections.
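The core of the shifted windowing scheme can be sketched in a few lines of PyTorch: partition the feature map into non-overlapping windows, and in alternating blocks cyclically shift the map by half a window before partitioning. This is a minimal illustration of the idea, not the repository's full implementation; the tensor sizes are hypothetical Swin-T-like values.

```python
import torch

def window_partition(x, window_size):
    """Split a (B, H, W, C) feature map into non-overlapping window_size x window_size windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    # -> (num_windows * B, window_size, window_size, C); self-attention runs per window
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

x = torch.randn(1, 56, 56, 96)                          # e.g. a stage-1 Swin-T feature map
windows = window_partition(x, window_size=7)            # regular (non-shifted) windows

# In the next block, cyclically shift the map by half a window before partitioning,
# so the new windows straddle the previous window boundaries (cross-window connection).
shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))   # shift size = window_size // 2
shifted_windows = window_partition(shifted, window_size=7)
```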

    Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.


    Main Results on ImageNet with Pretrained Models

    ImageNet-1K and ImageNet-22K Pretrained Models

| name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 22K model | 1K model |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| Swin-T | ImageNet-1K | 224x224 | 81.2 | 95.5 | 28M | 4.5G | 755 | - | github/baidu/config/log |
| Swin-S | ImageNet-1K | 224x224 | 83.2 | 96.2 | 50M | 8.7G | 437 | - | github/baidu/config/log |
| Swin-B | ImageNet-1K | 224x224 | 83.5 | 96.5 | 88M | 15.4G | 278 | - | github/baidu/config/log |
| Swin-B | ImageNet-1K | 384x384 | 84.5 | 97.0 | 88M | 47.1G | 85 | - | github/baidu/config |
| Swin-B | ImageNet-22K | 224x224 | 85.2 | 97.5 | 88M | 15.4G | 278 | github/baidu | github/baidu/config |
| Swin-B | ImageNet-22K | 384x384 | 86.4 | 98.0 | 88M | 47.1G | 85 | github/baidu | github/baidu/config |
| Swin-L | ImageNet-22K | 224x224 | 86.3 | 97.9 | 197M | 34.5G | 141 | github/baidu | github/baidu/config |
| Swin-L | ImageNet-22K | 384x384 | 87.3 | 98.2 | 197M | 103.9G | 42 | github/baidu | github/baidu/config |

    ImageNet-1K Pretrained Swin MLP Models

| name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 1K model |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| Mixer-B/16 | ImageNet-1K | 224x224 | 76.4 | - | 59M | 12.7G | - | official repo |
| ResMLP-S24 | ImageNet-1K | 224x224 | 79.4 | - | 30M | 6.0G | 715 | timm |
| ResMLP-B24 | ImageNet-1K | 224x224 | 81.0 | - | 116M | 23.0G | 231 | timm |
| Swin-T/C24 | ImageNet-1K | 256x256 | 81.6 | 95.7 | 28M | 5.9G | 563 | github/baidu/config |
| SwinMLP-T/C24 | ImageNet-1K | 256x256 | 79.4 | 94.6 | 20M | 4.0G | 807 | github/baidu/config |
| SwinMLP-T/C12 | ImageNet-1K | 256x256 | 79.6 | 94.7 | 21M | 4.0G | 792 | github/baidu/config |
| SwinMLP-T/C6 | ImageNet-1K | 256x256 | 79.7 | 94.9 | 23M | 4.0G | 766 | github/baidu/config |
| SwinMLP-B | ImageNet-1K | 224x224 | 81.3 | 95.3 | 61M | 10.4G | 409 | github/baidu/config |

Note: the Baidu access code is "swin". C24 means each head has 24 channels.

    Main Results on Downstream Tasks

    COCO Object Detection (2017 val)

| Backbone | Method | pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs |
|:---|:---|:---|:---|:---|:---|:---|:---|
| Swin-T | Mask R-CNN | ImageNet-1K | 3x | 46.0 | 41.6 | 48M | 267G |
| Swin-S | Mask R-CNN | ImageNet-1K | 3x | 48.5 | 43.3 | 69M | 359G |
| Swin-T | Cascade Mask R-CNN | ImageNet-1K | 3x | 50.4 | 43.7 | 86M | 745G |
| Swin-S | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 107M | 838G |
| Swin-B | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 145M | 982G |
| Swin-T | RepPoints V2 | ImageNet-1K | 3x | 50.0 | - | 45M | 283G |
| Swin-T | Mask RepPoints V2 | ImageNet-1K | 3x | 50.3 | 43.6 | 47M | 292G |
| Swin-B | HTC++ | ImageNet-22K | 6x | 56.4 | 49.1 | 160M | 1043G |
| Swin-L | HTC++ | ImageNet-22K | 3x | 57.1 | 49.5 | 284M | 1470G |
| Swin-L | HTC++* | ImageNet-22K | 3x | 58.0 | 50.4 | 284M | - |

    Note: * indicates multi-scale testing.

    ADE20K Semantic Segmentation (val)

| Backbone | Method | pretrain | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| Swin-T | UPerNet | ImageNet-1K | 512x512 | 160K | 44.51 | 45.81 | 60M | 945G |
| Swin-S | UPerNet | ImageNet-1K | 512x512 | 160K | 47.64 | 49.47 | 81M | 1038G |
| Swin-B | UPerNet | ImageNet-1K | 512x512 | 160K | 48.13 | 49.72 | 121M | 1188G |
| Swin-B | UPerNet | ImageNet-22K | 640x640 | 160K | 50.04 | 51.66 | 121M | 1841G |
| Swin-L | UPerNet | ImageNet-22K | 640x640 | 160K | 52.05 | 53.53 | 234M | 3230G |

    Citing Swin Transformer

    @inproceedings{liu2021Swin,
      title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
      author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
      booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
      year={2021}
    }
    @inproceedings{liu2021swinv2,
      title={Swin Transformer V2: Scaling Up Capacity and Resolution}, 
      author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      year={2022}
    }
    @inproceedings{xie2021simmim,
      title={SimMIM: A Simple Framework for Masked Image Modeling},
      author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Bao, Jianmin and Yao, Zhuliang and Dai, Qi and Hu, Han},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      year={2022}
    }
    @inproceedings{hu2019local,
      title={Local Relation Networks for Image Recognition},
      author={Hu, Han and Zhang, Zheng and Xie, Zhenda and Lin, Stephen},
      booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
      pages={3464--3473},
      year={2019}
    }

    Getting Started

    Third-party Usage and Experiments

In this section, we cross-link third-party repositories that use Swin and report results. You can let us know by raising an issue.

(Note: please report accuracy numbers and provide trained models in your new repository so that others can get a sense of correctness and model behavior.)

    [04/06/2022] Swin Transformer for Audio Classification: Hierarchical Token Semantic Audio Transformer.

    [12/21/2021] Swin Transformer for StyleGAN: StyleSwin

    [12/13/2021] Swin Transformer for Face Recognition: FaceX-Zoo

    [08/29/2021] Swin Transformer for Image Restoration: SwinIR

    [08/12/2021] Swin Transformer for person reID: https://github.com/layumi/Person_reID_baseline_pytorch

    [06/29/2021] Swin-Transformer in PaddleClas and inference based on whl package: https://github.com/PaddlePaddle/PaddleClas

[04/14/2021] Swin for RetinaNet in Detectron2: https://github.com/xiaohu2015/SwinT_detectron2.

[04/16/2021] Included in a famous model zoo: https://github.com/rwightman/pytorch-image-models (a usage sketch follows this list).

    [04/20/2021] Swin-Transformer classifier inference using TorchServe: https://github.com/kamalkraj/Swin-Transformer-Serve
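For reference, a minimal sketch of running a pretrained Swin model through timm (https://github.com/rwightman/pytorch-image-models); the model name below is an assumption and must be registered in your installed timm version.

```python
import timm
import torch

# Assumes a timm version that registers the Swin model names contributed in 2021.
model = timm.create_model('swin_tiny_patch4_window7_224', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)        # dummy 224x224 RGB image batch
with torch.no_grad():
    logits = model(x)                  # (1, 1000) ImageNet-1K class logits
print(logits.argmax(dim=1))
```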

    Contributing

    This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

    When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

    This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

    Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
