Yaman Umuroglu
Research Scientist at AMD
Verified email at amd.com
Title · Cited by · Year
FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
Y Umuroglu, NJ Fraser, G Gambardella, M Blott, P Leong, M Jahre, ...
Proceedings of the 2017 ACM/SIGDA International Symposium on Field …, 2017
Cited by 1171 · 2017
FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks
M Blott, TB Preußer, NJ Fraser, G Gambardella, K O'Brien, Y Umuroglu, ...
ACM Transactions on Reconfigurable Technology and Systems (TRETS) 11 (3), 1-23, 2018
Cited by 380 · 2018
BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing
Y Umuroglu, L Rasnayake, M Själander
Field Programmable Logic and Applications (FPL), 2018 28th International …, 2018
Cited by 110 · 2018
LogicNets: Co-designed neural networks and circuits for extreme-throughput applications
Y Umuroglu, Y Akhauri, NJ Fraser, M Blott
2020 30th International Conference on Field-Programmable Logic and …, 2020
Cited by 89 · 2020
Scaling Binarized Neural Networks on Reconfigurable Logic
NJ Fraser, Y Umuroglu, G Gambardella, M Blott, P Leong, M Jahre, ...
Proceedings of the 8th Workshop and 6th Workshop on Parallel Programming and …, 2017
Cited by 73 · 2017
Hybrid Breadth-First Search on a Single-Chip FPGA-CPU Heterogeneous Platform
Y Umuroglu, D Morrison, M Jahre
Field Programmable Logic and Applications (FPL), 2015 25th International …, 2015
Cited by 65 · 2015
Streamlined deployment for quantized neural networks
Y Umuroglu, M Jahre
arXiv preprint arXiv:1709.04060, 2017
Cited by 42 · 2017
Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference
B Hawks, J Duarte, NJ Fraser, A Pappalardo, N Tran, Y Umuroglu
Frontiers in Artificial Intelligence 4, 676564, 2021
Cited by 40 · 2021
An Energy Efficient Column-Major Backend for FPGA SpMV Accelerators
Y Umuroglu, M Jahre
Computer Design (ICCD), 2014 32nd IEEE International Conference on, 432-439, 2014
Cited by 34 · 2014
Binary neural networks on programmable integrated circuits
Y Umuroglu, M Blott
US Patent 10,089,577, 2018
Cited by 26 · 2018
Elastic-DF: Scaling performance of DNN inference in FPGA clouds through automatic partitioning
T Alonso, L Petrica, M Ruiz, J Petri-Koenig, Y Umuroglu, I Stamelos, ...
ACM Transactions on Reconfigurable Technology and Systems (TRETS) 15 (2), 1-34, 2021
Cited by 25 · 2021
Optimizing bit-serial matrix multiplication for reconfigurable computing
Y Umuroglu, D Conficconi, L Rasnayake, TB Preusser, M Själander
ACM Transactions on Reconfigurable Technology and Systems (TRETS) 12 (3), 1-24, 2019
Cited by 24 · 2019
Evaluation of optimized CNNs on heterogeneous accelerators using a novel benchmarking approach
M Blott, NJ Fraser, G Gambardella, L Halder, J Kath, Z Neveu, ...
IEEE Transactions on Computers 70 (10), 1654-1669, 2020
Cited by 19 · 2020
A Vector Caching Scheme for Streaming FPGA SpMV Accelerators
Y Umuroglu, M Jahre
Applied Reconfigurable Computing, 15-26, 2015
Cited by 15 · 2015
Open-source FPGA-ML codesign for the MLPerf Tiny Benchmark
H Borras, G Di Guglielmo, J Duarte, N Ghielmetti, B Hawks, S Hauck, ...
arXiv preprint arXiv:2206.11791, 2022
Cited by 10 · 2022
Scaling neural network performance through customized hardware architectures on reconfigurable logic
M Blott, TB Preußer, N Fraser, G Gambardella, K O'Brien, Y Umuroglu, ...
2017 IEEE International Conference on Computer Design (ICCD), 419-422, 2017
Cited by 10 · 2017
Towards efficient quantized neural network inference on mobile devices: work-in-progress
Y Umuroglu, M Jahre
Proceedings of the 2017 International Conference on Compilers, Architectures …, 2017
Cited by 8 · 2017
Random Access Schemes for Efficient FPGA SpMV Acceleration
Y Umuroglu, M Jahre
Microprocessors and Microsystems 47, 321-332, 2016
Cited by 8 · 2016
RadioML meets FINN: Enabling future RF applications with FPGA streaming architectures
F Jentzsch, Y Umuroglu, A Pappalardo, M Blott, M Platzner
IEEE Micro 42 (6), 125-133, 2022
Cited by 7 · 2022
QONNX: Representing arbitrary-precision quantized neural networks
A Pappalardo, Y Umuroglu, M Blott, J Mitrevski, B Hawks, N Tran, ...
arXiv preprint arXiv:2206.07527, 2022
Cited by 7 · 2022