Zhihang Yuan
FPGA-based accelerator for long short-term memory recurrent neural networks
Y Guan, Z Yuan, G Sun, J Cong
2017 22nd Asia and South Pacific Design Automation Conference (ASP-DAC), 629-634, 2017
PTQ4ViT: Post-training quantization for vision transformers with twin uniform quantization
Z Yuan, C Xue, Y Chen, Q Wu, G Sun
European conference on computer vision, 191-207, 2022
Post-training quantization on diffusion models
Y Shang, Z Yuan, B Xie, B Wu, Y Yan
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
RPTQ: Reorder-based post-training quantization for large language models
Z Yuan, L Niu, J Liu, W Liu, X Wang, Y Shang, G Sun, Q Wu, J Wu, B Wu
arXiv preprint arXiv:2304.01089, 2023
S2DNAS: Transforming static CNN model for dynamic inference via neural architecture search
Z Yuan, B Wu, G Sun, Z Liang, S Zhao, W Bi
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
PD-Quant: Post-training quantization based on prediction difference metric
J Liu, L Niu, Z Yuan, D Yang, X Wang, W Liu
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Reducing overfitting in deep convolutional neural networks using redundancy regularizer
B Wu, Z Liu, Z Yuan, G Sun, C Wu
Artificial Neural Networks and Machine Learning–ICANN 2017: 26th …, 2017
PTQ4ViT: Post-training quantization framework for vision transformers
Z Yuan, C Xue, Y Chen, Q Wu, G Sun
arXiv preprint arXiv:2111.12293, 2021
NAS4RRAM: neural network architecture search for inference on RRAM-based accelerators
Z Yuan, J Liu, X Li, L Yan, H Chen, B Wu, Y Yang, G Sun
Science China Information Sciences 64 (6), 160407, 2021
Latency-aware spatial-wise dynamic networks
Y Han, Z Yuan, Y Pu, C Xue, S Song, G Sun, G Huang
Advances in Neural Information Processing Systems 35, 36845-36857, 2022
PB-LLM: Partially binarized large language models
Y Shang, Z Yuan, Q Wu, Z Dong
arXiv preprint arXiv:2310.00034, 2023
Using data compression for optimizing FPGA-based convolutional neural network accelerators
Y Guan, N Xu, C Zhang, Z Yuan, J Cong
International workshop on advanced parallel processing technologies, 14-26, 2017
ASVD: Activation-aware singular value decomposition for compressing large language models
Z Yuan, Y Shang, Y Song, Q Wu, Y Yan, G Sun
arXiv preprint arXiv:2312.05821, 2023
Latency-aware unified dynamic networks for efficient image recognition
Y Han, Z Liu, Z Yuan, Y Pu, C Wang, S Song, G Huang
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
LLM inference unveiled: Survey and roofline model insights
Z Yuan, Y Shang, Y Zhou, Z Dong, C Xue, B Wu, Z Li, Q Gu, YJ Lee, ...
arXiv preprint arXiv:2402.16363, 2024
ENAS4D: Efficient multi-stage CNN architecture search for dynamic inference
Z Yuan, X Liu, B Wu, G Sun
arXiv preprint arXiv:2009.09182, 2020
Crane: Mitigating accelerator under-utilization caused by sparsity irregularities in CNNs
Y Guan, G Sun, Z Yuan, X Li, N Xu, S Chen, J Cong, Y Xie
IEEE Transactions on Computers 69 (7), 931-943, 2020
WKVQuant: Quantizing weight and key/value cache for large language models gains more
Y Yue, Z Yuan, H Duanmu, S Zhou, J Wu, L Nie
arXiv preprint arXiv:2402.12065, 2024
Reconfigurable ASIC implementation of asynchronous recurrent neural networks
S Nelson, SY Kim, J Di, Z Zhou, Z Yuan, G Sun
2021 27th IEEE International Symposium on Asynchronous Circuits and Systems …, 2021
A survey on efficient inference for large language models
Z Zhou, X Ning, K Hong, T Fu, J Xu, S Li, Y Lou, L Wang, Z Yuan, X Li, ...
arXiv preprint arXiv:2404.14294, 2024