Liqiang Lu
Evaluating fast algorithms for convolutional neural networks on FPGAs
L Lu, Y Liang, Q Xiao, S Yan
2017 IEEE 25th annual international symposium on field-programmable custom …, 2017
Cited by 282, 2017
Exploring heterogeneous algorithms for accelerating deep convolutional neural networks on FPGAs
Q Xiao, Y Liang, L Lu, S Yan, YW Tai
Proceedings of the 54th Annual Design Automation Conference 2017, 1-6, 2017
Cited by 223, 2017
An efficient hardware accelerator for sparse convolutional neural networks on FPGAs
L Lu, J Xie, R Huang, J Zhang, W Lin, Y Liang
2019 IEEE 27th Annual International Symposium on Field-Programmable Custom …, 2019
Cited by 157, 2019
Evaluating fast algorithms for convolutional neural networks on FPGAs
Y Liang, L Lu, Q Xiao, S Yan
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2019
Cited by 147, 2019
SpWA: An efficient sparse Winograd convolutional neural networks accelerator on FPGAs
L Lu, Y Liang
Proceedings of the 55th Annual Design Automation Conference, 1-6, 2018
Cited by 118, 2018
Sanger: A co-design framework for enabling sparse attention using reconfigurable architecture
L Lu, Y Jin, H Bi, Z Luo, P Li, T Wang, Y Liang
MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture …, 2021
Cited by 51, 2021
TENET: A framework for modeling tensor dataflow based on relation-centric notation
L Lu, N Guan, Y Wang, L Jia, Z Luo, J Yin, J Cong, Y Liang
2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture …, 2021
Cited by 50, 2021
AMOS: enabling automatic mapping for tensor computations on spatial accelerators with hardware abstraction
S Zheng, R Chen, A Wei, Y Jin, Q Han, L Lu, B Wu, X Li, S Yan, Y Liang
Proceedings of the 49th Annual International Symposium on Computer …, 2022
Cited by 34, 2022
OMNI: A framework for integrating hardware and software optimizations for sparse CNNs
Y Liang, L Lu, J Xie
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2020
Cited by 30, 2020
Sanger: A co-design framework for enabling sparse attention using reconfigurable architecture
L Lu, Y Jin, H Bi, Z Luo, P Li, T Wang, Y Liang
MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture …, 2021
Cited by 27, 2021
An efficient hardware design for accelerating sparse CNNs with NAS-based models
Y Liang, L Lu, Y Jin, J Xie, R Huang, J Zhang, W Lin
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2021
Cited by 26, 2021
Exploiting sparsity to accelerate fully connected layers of CNN-based applications on mobile SoCs
X Xie, D Du, Q Li, Y Liang, WT Tang, ZL Ong, M Lu, HP Huynh, RSM Goh
ACM Transactions on Embedded Computing Systems (TECS) 17 (2), 1-25, 2017
Cited by 25, 2017
TensorLib: A spatial accelerator generation framework for tensor algebra
L Jia, Z Luo, L Lu, Y Liang
2021 58th ACM/IEEE Design Automation Conference (DAC), 865-870, 2021
Cited by 23, 2021
Generating systolic array accelerators with reusable blocks
L Jia, L Lu, X Wei, Y Liang
IEEE Micro 40 (4), 85-92, 2020
Cited by 20, 2020
Enabling efficient fast convolution algorithms on GPUs via MegaKernels
L Jia, Y Liang, X Li, L Lu, S Yan
IEEE Transactions on Computers 69 (7), 986-997, 2020
Cited by 18, 2020
FCNNLib: An efficient and flexible convolution algorithm library on FPGAs
Q Xiao, L Lu, J Xie, Y Liang
2020 57th ACM/IEEE Design Automation Conference (DAC), 1-6, 2020
Cited by 10, 2020
Convolution acceleration and computing processing method and apparatus, electronic device, and storage medium
L Lu, Y Liang, Q Xiao, S Yan
US Patent 11,429,852, 2022
Cited by 7, 2022
Accelerating convolutional neural networks on FPGAs
LQ Lu, S Zheng, QC Xiao, DM Chen, Y Liang
Scientia Sinica Informationis 49 (3), 277-294, 2019
Cited by 7, 2019
FCNNLib: A flexible convolution algorithm library for deep learning on FPGAs
Y Liang, Q Xiao, L Lu, J Xie
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2021
Cited by 5, 2021
Speedy: An accelerator for sparse convolutional neural networks on FPGAs
L Lu, Y Liang, R Huang, W Lin, X Cui, J Zhang
Proceedings of the 2019 ACM/SIGDA International Symposium on Field …, 2019
Cited by 5, 2019