Linfeng Zhang (张林峰)
Other names: L. Zhang, 张林峰, 林峰 张
Be your own teacher: Improve the performance of convolutional neural networks via self distillation
L Zhang, J Song, A Gao, J Chen, C Bao, K Ma
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019
Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors
L Zhang, K Ma
The Ninth International Conference on Learning Representations (ICLR 2021), 2021
SCAN: A scalable neural networks framework towards compact and efficient models
L Zhang, Z Tan, J Song, J Chen, C Bao, K Ma
Advances in Neural Information Processing Systems (NeurIPS 2019), 2019
Self-distillation: Towards efficient and compact neural networks
L Zhang, C Bao, K Ma
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (8), 4388-4403, 2021
Non-structured DNN weight pruning—Is it beneficial in any platform?
X Ma, S Lin, S Ye, Z He, L Zhang, G Yuan, SH Tan, Z Li, D Fan, X Qian, ...
IEEE Transactions on Neural Networks and Learning Systems 33 (9), 4930-4944, 2021
Fine-grained emotion classification of Chinese microblogs based on graph convolution networks
Y Lai, L Zhang, D Han, R Zhou, G Wang
World Wide Web 23, 2771-2787, 2020
StructADMM: A systematic, high-efficiency framework of structured weight pruning for DNNs
T Zhang, S Ye, K Zhang, X Ma, N Liu, L Zhang, J Tang, K Ma, X Lin, ...
arXiv preprint arXiv:1807.11091, 2018
Auxiliary training: Towards accurate and robust models
L Zhang, M Yu, T Chen, Z Shi, C Bao, K Ma
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Task-oriented feature distillation
L Zhang, Y Shi, Z Shi, K Ma, C Bao
Advances in Neural Information Processing Systems (NeurIPS 2020) 33, 14759-14771, 2020
Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation
L Zhang, X Chen, X Tu, P Wan, N Xu, K Ma
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Non-structured DNN weight pruning considered harmful
Y Wang, S Ye, Z He, X Ma, L Zhang, S Lin, G Yuan, SH Tan, Z Li, D Fan, ...
arXiv preprint arXiv:1907.02124, 2019
PointDistiller: Structured knowledge distillation towards efficient and compact 3D detection
L Zhang, R Dong, HS Tai, K Ma
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023), 2023
Contrastive Deep Supervision
L Zhang, X Chen, J Zhang, R Dong, K Ma
European Conference on Computer Vision (ECCV 2022), 2022
Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?
R Dong, Z Qi, L Zhang, J Zhang, J Sun, Z Ge, L Yi, K Ma
arXiv preprint arXiv:2212.08320, 2022
Region-aware knowledge distillation for efficient image-to-image translation
L Zhang, X Chen, R Dong, K Ma
arXiv preprint arXiv:2205.12451, 2022
Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks
R Dong, Z Tan, M Wu, L Zhang, K Ma
International Conference on Machine Learning (ICML 2022), 2022
SMART: screen-based gesture recognition on commodity mobile devices
Z Liao, Z Luo, Q Huang, L Zhang, F Wu, Q Zhang, Y Wang
Proceedings of the 27th Annual International Conference on Mobile Computing …, 2021
Structured Knowledge Distillation Towards Efficient and Compact Multi-View 3D Detection
L Zhang, Y Shi, HS Tai, Z Zhang, Y He, K Wang, K Ma
arXiv preprint arXiv:2211.08398, 2022
A Good Data Augmentation Policy Is Not All You Need: A Multi-Task Learning Perspective
L Zhang, K Ma
IEEE Transactions on Circuits and Systems for Video Technology, 2022
Wavelet J-Net: A Frequency Perspective on Convolutional Neural Networks
L Zhang, X Zhang, C Bao, K Ma
2021 International Joint Conference on Neural Networks (IJCNN), 1-8, 2021