5. From Dense to Sparse: Contrastive Pruning for Better Pre-Trained Language Model Compression
6. Prune and Tune Ensembles: Low-Cost Ensemble Learning with Sparse Independent Subnetworks
### Others
1. BATUDE: Budget-Aware Neural Network Compression Based on Tucker Decomposition
2. Convolutional Neural Network Compression Through Generalized Kronecker Product Decomposition
## ICLR
### Knowledge Distillation
1. Churn Reduction via Distillation. Heinrich Jiang, Harikrishna Narasimhan, Dara Bahri, Andrew Cotter, Afshin Rostamizadeh. ICLR 2022 Spotlight.
2. Progressive Distillation for Fast Sampling of Diffusion Models. Tim Salimans, Jonathan Ho. ICLR 2022 Spotlight.
3. Online Hyperparameter Meta-Learning with Hypergradient Distillation
4. Improving Non-Autoregressive Translation Models Without Distillation. Xiao Shi Huang, Felipe Perez, Maksims Volkovs. ICLR 2022 Poster.
5. Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations. Fangyu Liu, Yunlong Jiao, Jordan Massiah, Emine Yilmaz, Serhii Havrylov. ICLR 2022 Poster.
6. Feature Kernel Distillation. Bobby He, Mete Ozay. ICLR 2022 Poster.
7. Graph-less Neural Networks: Teaching Old MLPs New Tricks Via Distillation. Shichang Zhang, Yozen Liu, Yizhou Sun, Neil Shah. ICLR 2022 Poster.
8. Reliable Adversarial Distillation with Unreliable Teachers. Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang. ICLR 2022 Poster.
9. Towards Model Agnostic Federated Learning Using Knowledge Distillation. Andrei Afonin, Sai Praneeth Karimireddy. ICLR 2022 Poster.
10. Data Efficient Language-Supervised Zero-Shot Recognition with Optimal Transport Distillation. Bichen Wu, Ruizhe Cheng, Peizhao Zhang, Tianren Gao, Joseph E. Gonzalez, Peter Vajda. ICLR 2022 Poster.
11. Object Dynamics Distillation for Scene Decomposition and Representation. Qu Tang, Xiangyu Zhu, Zhen Lei, Zhaoxiang Zhang. ICLR 2022 Poster.
12. Open-vocabulary Object Detection via Vision and Language Knowledge Distillation. Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, Yin Cui. ICLR 2022 Poster.
13. Better Supervisory Signals by Observing Learning Paths. Yi Ren, Shangmin Guo, Danica J. Sutherland. ICLR 2022 Poster.
14. Distilling GANs with Style-Mixed Triplets for X2I Translation with Limited Data. Yaxing Wang, Joost van de Weijer, Lu Yu, Shangling Jui. ICLR 2022 Poster.
15. Image BERT Pre-training with Online Tokenizer. Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, Tao Kong. ICLR 2022 Poster.
16. Spatial Graph Attention and Curiosity-driven Policy for Antiviral Drug Discovery. Yulun Wu, Nicholas Choma, Andrew Deru Chen, Mikaela Cashman, Erica Teixeira Prates, Veronica G Melesse Vergara, Manesh B Shah, Austin Clyde, Thomas Brettin, Wibe Albert de Jong, Neeraj Kumar, Martha S Head, Rick L. Stevens, Peter Nugent, Daniel A Jacobson, James B Brown. ICLR 2022 Poster.
17. Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods. Wenqing Zheng, Edward W Huang, Nikhil Rao, Sumeet Katariya, Zhangyang Wang, Karthik Subbian. ICLR 2022 Poster.
18. Reinforcement Learning in Presence of Discrete Markovian Context Evolution. Hang Ren, Aivar Sootla, Taher Jafferjee, Junxiao Shen, Jun Wang, Haitham Bou Ammar. ICLR 2022 Poster.
19. Out-of-distribution Generalization in the Presence of Nuisance-Induced Spurious Correlations. Aahlad Manas Puli, Lily H Zhang, Eric Karl Oermann, Rajesh Ranganath. ICLR 2022 Poster.
20. BiBERT: Accurate Fully Binarized BERT
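The common primitive behind most of the distillation papers above is the temperature-scaled soft-target loss of Hinton et al. A minimal NumPy sketch for reference (function names are illustrative, not taken from any paper listed):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T gives softer distributions.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitude stays comparable across T.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T**2 * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()
```

In practice this term is mixed with the ordinary cross-entropy on hard labels; the individual papers above each modify some part of this recipe (the teacher, the targets, or the matching objective).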
### Quantization
1. F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
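As background for the quantization entries, a minimal sketch of symmetric per-tensor 8-bit quantization (names are illustrative; F8Net's fixed-point-only scheme is considerably more involved):

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: one float scale maps x into int8.
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original floats.
    return q.astype(np.float32) * scale
```

The round-trip error per element is bounded by half the scale, which is the basic accuracy/bit-width trade-off these papers work around.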
### Pruning
1. On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning. Marc Vischer, Robert Tjarko Lange, Henning Sprekeler. ICLR 2022 Spotlight.
2. SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning. Manuel Nonnenmacher, Thomas Pfeil, Ingo Steinwart, David Reeb. ICLR 2022 Spotlight.
3. Possibility Before Utility: Learning And Using Hierarchical Affordances. Robby Costales, Shariq Iqbal, Fei Sha. ICLR 2022 Spotlight.
4. Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, And No Retraining. Lu Miao, Xiaolong Luo, Tianlong Chen, Wuyang Chen, Dong Liu, Zhangyang Wang.
5. Effective Model Sparsification by Scheduled Grow-and-Prune Methods. Xiaolong Ma, Minghai Qin, Fei Sun, Zejiang Hou, Kun Yuan, Yi Xu, Yanzhi Wang, Yen-Kuang Chen, Rong Jin, Yuan Xie. ICLR 2022 Poster.
6. The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy. ICLR 2022 Poster.
7. Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning. Yulun Zhang, Huan Wang, Can Qin, Yun Fu. ICLR 2022 Poster.
8. Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression. Bae Seong Park, Se Jung Kwon, Daehwan Oh, Byeongwook Kim, Dongsoo Lee. ICLR 2022 Poster.
9. No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models. Chen Liang, Haoming Jiang, Simiao Zuo, Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, Tuo Zhao. ICLR 2022 Poster.
10. Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients. Milad Alizadeh, Shyam A. Tailor, Luisa M Zintgraf, Joost van Amersfoort, Sebastian Farquhar, Nicholas Donald Lane, Yarin Gal. ICLR 2022 Poster.
11. An Operator Theoretic View On Pruning Deep Neural Networks. William T Redman, Maria Fonoberova, Ryan Mohr, Yannis Kevrekidis, Igor Mezic. ICLR 2022 Poster.
12. Signing the Supermask: Keep, Hide, Invert. Nils Koster, Oliver Grothe, Achim Rettinger. ICLR 2022 Poster.
13. Plant 'n' Seek: Can You Find the Winning Ticket? Jonas Fischer, Rebekka Burkholz. ICLR 2022 Poster.
14. Proving the Lottery Ticket Hypothesis for Convolutional Neural Networks. Arthur da Cunha, Emanuele Natale, Laurent Viennot. ICLR 2022 Poster.
15. On the Existence of Universal Lottery Tickets. Rebekka Burkholz, Nilanjana Laha, Rajarshi Mukherjee, Alkis Gotovos. ICLR 2022 Poster.
16. Training Structured Neural Networks Through Manifold Identification and Variance Reduction. Zih-Syuan Huang, Ching-pei Lee. ICLR 2022 Poster.
17. PF-GNN: Differentiable particle filtering based approximation of universal graph representations. Mohammed Haroon Dupty, Yanfei Dong, Wee Sun Lee. ICLR 2022 Poster.
18. How many degrees of freedom do we need to train deep networks: a loss landscape perspective. Brett W Larsen, Stanislav Fort, Nic Becker, Surya Ganguli. ICLR 2022 Poster.
19. Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently. Xiaohan Chen, Jason Zhang, Zhangyang Wang.
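Many of the pruning papers above either build on or benchmark against global magnitude pruning. A minimal sketch of that baseline (the function name is illustrative):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    # Zero out the `sparsity` fraction of entries with the smallest |w|,
    # using a single global threshold across the whole tensor.
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

Note that ties at the threshold magnitude are all pruned, so the achieved sparsity can slightly exceed the requested fraction; structured and initialization-time methods in the list above replace this criterion rather than the masking mechanics.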
### Others
1. DKM: Differentiable k-Means Clustering Layer for Neural Network Compression. Minsik Cho, Keivan Alizadeh-Vahid, Saurabh Adya, Mohammad Rastegari. ICLR 2022 Poster.
2. Lossy Compression with Distribution Shift as Entropy Constrained Optimal Transport. Huan Liu, George Zhang, Jun Chen, Ashish J Khisti. ICLR 2022 Poster.
3. Memory Replay with Data Compression for Continual Learning. Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu. ICLR 2022 Poster.
4. Entroformer: A Transformer-based Entropy Model for Learned Image Compression. Yichen Qian, Xiuyu Sun, Ming Lin, Zhiyu Tan, Rong Jin. ICLR 2022 Poster.
5. Language model compression with weighted low-rank factorization. Yen-Chang Hsu, Ting Hua, Sungen Chang, Qian Lou, Yilin Shen, Hongxia Jin. ICLR 2022 Poster.
6. On Distributed Adaptive Optimization with Gradient Compression. Xiaoyun Li, Belhal Karimi, Ping Li. ICLR 2022 Poster.
7. Distribution Compression in Near-Linear Time. Abhishek Shetty, Raaz Dwivedi, Lester Mackey. ICLR 2022 Poster.
8. Towards Empirical Sandwich Bounds on the Rate-Distortion Function. Yibo Yang, Stephan Mandt. ICLR 2022 Poster.
9. Information Bottleneck: Exact Analysis of (Quantized) Neural Networks. Stephan Sloth Lorenzen, Christian Igel, Mads Nielsen. ICLR 2022 Poster.
10. Neural Network Approximation based on Hausdorff distance of Tropical Zonotopes. Panagiotis Misiakos, Georgios Smyrnis, George Retsinas, Petros Maragos. ICLR 2022 Poster.
11. Autoregressive Diffusion Models. Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans. ICLR 2022 Poster.
12. Fast Generic Interaction Detection for Model Interpretability and Compression. Tianjian Zhang, Feng Yin, Zhi-Quan Luo. ICLR 2022 Poster.
13. EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression. Zirui Liu, Kaixiong Zhou, Fan Yang, Li Li, Rui Chen, Xia Hu. ICLR 2022 Poster.
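Several entries here, like the Tucker and Kronecker decomposition papers at the top of this section, compress layers by factorization. The unweighted baseline that the weighted low-rank factorization paper refines is a truncated SVD; a minimal sketch (names are illustrative):

```python
import numpy as np

def lowrank_factorize(W, rank):
    # Factor W (m x n) into A (m x rank) @ B (rank x n) via truncated SVD,
    # replacing m*n parameters with rank*(m + n).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # absorb singular values into A
    B = Vt[:rank, :]
    return A, B
```

When `rank` is well below `min(m, n)`, a dense layer `x @ W` becomes two cheaper layers `(x @ A) @ B`; the error is the sum of the discarded singular values' contributions.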