Sayeh Sharify
PhD, Qualcomm, Tartan AI, University of Toronto
Verified email at mail.utoronto.ca
Title
Cited by
Year
Bit-pragmatic deep neural network computing
J Albericio, A Delmás, P Judd, S Sharify, G O'Leary, R Genov, ...
Proceedings of the 50th Annual IEEE/ACM International Symposium on …, 2017
120, 2017
Loom: Exploiting weight and activation precisions to accelerate convolutional neural networks
S Sharify, AD Lascorz, K Siu, P Judd, A Moshovos
2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), 1-6, 2018
52, 2018
Bit-tactical: A software/hardware approach to exploiting value and bit sparsity in neural networks
A Delmas Lascorz, P Judd, DM Stuart, Z Poulos, M Mahmoud, S Sharify, ...
Proceedings of the Twenty-Fourth International Conference on Architectural …, 2019
45*, 2019
Laconic deep learning inference acceleration
S Sharify, AD Lascorz, M Mahmoud, M Nikolic, K Siu, DM Stuart, Z Poulos, ...
2019 ACM/IEEE 46th Annual International Symposium on Computer Architecture …, 2019
27, 2019
Cnvlutin2: Ineffectual-activation-and-weight-free deep neural network computing
P Judd, A Delmas, S Sharify, A Moshovos
arXiv preprint arXiv:1705.00125, 2017
20, 2017
Dynamic stripes: Exploiting the dynamic precision requirements of activation values in neural networks
A Delmas, P Judd, S Sharify, A Moshovos
arXiv preprint arXiv:1706.00504, 2017
19, 2017
DPRed: Making Typical Activation and Weight Values Matter In Deep Learning Computing
A Delmas, S Sharify, P Judd, K Siu, M Nikolic, A Moshovos
arXiv preprint arXiv:1804.06732, 2018
11*, 2018
Tartan: Accelerating fully-connected and convolutional layers in deep learning networks by exploiting numerical precision variability
A Delmas, S Sharify, P Judd, A Moshovos
arXiv preprint arXiv:1707.09068, 2017
11, 2017
Shapeshifter: Enabling fine-grain data width adaptation in deep learning
AD Lascorz, S Sharify, I Edo, DM Stuart, OM Awad, P Judd, M Mahmoud, ...
Proceedings of the 52nd Annual IEEE/ACM International Symposium on …, 2019
8, 2019
Value-Based Deep-Learning Acceleration
A Moshovos, J Albericio, P Judd, AD Lascorz, S Sharify, T Hetherington, ...
IEEE Micro 38 (1), 41-55, 2018
5, 2018
Exploiting Typical Values to Accelerate Deep Learning
A Moshovos, J Albericio, P Judd, AD Lascorz, S Sharify, Z Poulos, ...
Computer 51 (5), 18-30, 2018
3, 2018
Accelerator for deep neural networks
P Judd, J Albericio, A Delmas Lascorz, A Moshovos, S Sharify
US Patent App. 16/504,275, 2020
1, 2020
Identifying and Exploiting Ineffectual Computations to Enable Hardware Acceleration of Deep Learning
A Moshovos, J Albericio, P Judd, A Delmas, S Sharify, M Mahmoud, ...
2018 16th IEEE International New Circuits and Systems Conference (NEWCAS …, 2018
1, 2018
Building an on-chip deep learning memory hierarchy brick by brick: late breaking results
IE Vivancos, S Sharify, M Nikolic, C Bannon, M Mahmoud, AD Lascorz, ...
Proceedings of the 57th ACM/EDAC/IEEE Design Automation Conference, 1-2, 2020
2020
Loom and Laconic: Hardware Accelerators for Deep Learning Algorithms
S Sharifymoghaddam
2020
Accelerating Image-Sensor-Based Deep Learning Applications
M Mahmoud, DM Stuart, Z Poulos, AD Lascorz, P Judd, S Sharify, ...
IEEE Micro 39 (5), 26-35, 2019
2019
Low-Swing Signaling for FPGA Power Reduction
S Sharifymoghaddam, A Sheikholeslami
Proceedings of the 2016 ACM/SIGDA International Symposium on Field …, 2016
2016
Low-Swing Signaling for FPGA Interconnect Power Reduction
S Sharifymoghaddam
2015