Monolithically integrated RRAM- and CMOS-based in-memory computing optimizations for efficient deep learning S Yin, Y Kim, X Han, H Barnaby, S Yu, Y Luo, W He, X Sun, JJ Kim, J Seo IEEE Micro 39 (6), 54-63, 2019 | 82 | 2019 |
Area-efficient and variation-tolerant in-memory BNN computing using 6T SRAM array J Kim, J Koo, T Kim, Y Kim, H Kim, S Yoo, JJ Kim 2019 Symposium on VLSI Circuits, C118-C119, 2019 | 80 | 2019 |
2-bit-per-cell RRAM-based in-memory computing for area-/energy-efficient deep learning W He, S Yin, Y Kim, X Sun, JJ Kim, S Yu, JS Seo IEEE Solid-State Circuits Letters 3, 194-197, 2020 | 48 | 2020 |
Input-splitting of large neural networks for power-efficient accelerator with resistive crossbar memory array Y Kim, H Kim, D Ahn, JJ Kim Proceedings of the International Symposium on Low Power Electronics and …, 2018 | 36 | 2018 |
Bitblade: Energy-efficient variable bit-precision hardware accelerator for quantized neural networks S Ryu, H Kim, W Yi, E Kim, Y Kim, T Kim, JJ Kim IEEE Journal of Solid-State Circuits 57 (6), 1924-1935, 2022 | 25 | 2022 |
Neural network-hardware co-design for scalable RRAM-based BNN accelerators Y Kim, H Kim, JJ Kim arXiv preprint arXiv:1811.02187, 2018 | 21 | 2018 |
In-memory batch-normalization for resistive memory based binary neural network hardware H Kim, Y Kim, JJ Kim Proceedings of the 24th Asia and South Pacific Design Automation Conference …, 2019 | 20 | 2019 |
Time-delayed convolutions for neural network device and method S Kim, J Kim, Y Kim, J Kim, D Park, H Kim US Patent 11,521,046, 2022 | 16 | 2022 |
Algorithm/hardware co-design for in-memory neural network computing with minimal peripheral circuit overhead H Kim, Y Kim, S Ryu, JJ Kim 2020 57th ACM/IEEE Design Automation Conference (DAC), 1-6, 2020 | 12 | 2020 |
A 44.1 TOPS/W precision-scalable accelerator for quantized neural networks in 28nm CMOS S Ryu, H Kim, W Yi, J Koo, E Kim, Y Kim, T Kim, JJ Kim 2020 IEEE Custom Integrated Circuits Conference (CICC), 1-4, 2020 | 12 | 2020 |
Effect of device variation on mapping binary neural network to memristor crossbar array W Yi, Y Kim, JJ Kim 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), 320-323, 2019 | 8 | 2019 |
Extreme partial-sum quantization for analog computing-in-memory neural network accelerators Y Kim, H Kim, JJ Kim ACM Journal on Emerging Technologies in Computing Systems (JETC) 18 (4), 1-19, 2022 | 6 | 2022 |
Mapping binary ResNets on computing-in-memory hardware with low-bit ADCs Y Kim, H Kim, J Park, H Oh, JJ Kim 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE), 856-861, 2021 | 6 | 2021 |
Time-step interleaved weight reuse for LSTM neural network computing N Park, Y Kim, D Ahn, T Kim, JJ Kim Proceedings of the ACM/IEEE International Symposium on Low Power Electronics …, 2020 | 6 | 2020 |
Energy-efficient in-memory binary neural network accelerator design based on 8T2C SRAM cell H Oh, H Kim, D Ahn, J Park, Y Kim, I Lee, JJ Kim IEEE Solid-State Circuits Letters 5, 70-73, 2022 | 5 | 2022 |
Maximizing parallel activation of word-lines in MRAM-based binary neural network accelerators D Ahn, H Oh, H Kim, Y Kim, JJ Kim IEEE Access 9, 141961-141969, 2021 | 5 | 2021 |
Single RRAM cell-based in-memory accelerator architecture for binary neural networks H Oh, H Kim, N Kang, Y Kim, J Park, JJ Kim 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits …, 2021 | 5 | 2021 |
Compact convolution mapping on neuromorphic hardware using axonal delay J Kim, Y Kim, S Kim, JJ Kim Proceedings of the International Symposium on Low Power Electronics and …, 2018 | 4 | 2018 |
Squeezing large-scale diffusion models for mobile J Choi, M Kim, D Ahn, T Kim, Y Kim, D Jo, H Jeon, JJ Kim, H Kim arXiv preprint arXiv:2307.01193, 2023 | 3 | 2023 |
Winning both the accuracy of floating point activation and the simplicity of integer arithmetic Y Kim, J Jang, J Lee, J Park, J Kim, B Kim, SJ Kwon, D Lee The Eleventh International Conference on Learning Representations, 2022 | 3 | 2022 |