Omar Mohamed Awad
Title
Cited by
Year
GOBO: Quantizing attention-based NLP models for low latency and energy efficient inference
AH Zadeh, I Edo, OM Awad, A Moshovos
2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture …, 2020
Cited by 114, 2020
TensorDash: Exploiting sparsity to accelerate deep neural network training
M Mahmoud, I Edo, AH Zadeh, OM Awad, G Pekhimenko, J Albericio, ...
2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture …, 2020
Cited by 64, 2020
ShapeShifter: Enabling fine-grain data width adaptation in deep learning
AD Lascorz, S Sharify, I Edo, DM Stuart, OM Awad, P Judd, M Mahmoud, ...
Proceedings of the 52nd Annual IEEE/ACM International Symposium on …, 2019
Cited by 40, 2019
Security implications of intentional capacitive crosstalk
C Kison, OM Awad, M Fyrbiak, C Paar
IEEE Transactions on Information Forensics and Security 14 (12), 3246-3258, 2019
Cited by 29, 2019
FPRaker: A processing element for accelerating neural network training
OM Awad, M Mahmoud, I Edo, AH Zadeh, C Bannon, A Jayarajan, ...
MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture …, 2021
Cited by 13, 2021
Compressing Pre-trained Language Models using Progressive Low Rank Decomposition
H Hajimolahoseini, M Rezagholizadeh, V Partovinia, M Tahaei, OM Awad, ...
Advances in Neural Information Processing Systems, 2021
Cited by 1, 2021
GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference
A Hadi Zadeh, I Edo, OM Awad, A Moshovos
arXiv e-prints, arXiv: 2005.03842, 2020
Cited by 1, 2020
TensorDash: Exploiting sparsity to accelerate deep neural network training and inference
M Mahmoud, IE Vivancos, O Awad, AH Zadeh, G Pekhimenko, J Albericio, ...
arXiv preprint cs.AR
Cited by 1
GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values
F Javadi, W Ahmed, H Hajimolahoseini, F Ataiefard, M Hassanpour, ...
arXiv preprint arXiv:2311.03426, 2023
2023
Quantization for neural network computation
A Moshovos, AH Zadeh, IE Vivancos, OM Awad
US Patent App. 18/026,927, 2023
2023
System and method for accelerating training of deep learning networks
OM Awad, M Mahmoud, A Moshovos
US Patent App. 18/005,717, 2023
2023
Improving Resnet-9 Generalization Trained on Small Datasets
OM Awad, H Hajimolahoseini, M Lim, G Gosal, W Ahmed, Y Liu, G Deng
arXiv preprint arXiv:2309.03965, 2023
2023
cuSCNN: an Efficient CUDA Implementation of Sparse CNNs
MA Elgammal, OM Awad, IE Vivancos, A Moshovos, V Betz
Proceedings of the 13th International Symposium on Highly Efficient …, 2023
2023
Quantization for neural network computation
A Moshovos, AH Zadeh, IE Vivancos, OM Awad
US Patent App. 17/130,690, 2022
2022
FPRaker: Exploiting Fine-grain Sparsity to Accelerate Neural Network Training
OM Awad
2020