Varun Chandrasekaran
Title · Cited by · Year
Sparks of artificial general intelligence: Early experiments with GPT-4
S Bubeck, V Chandrasekaran, R Eldan, J Gehrke, E Horvitz, E Kamar, ...
arXiv preprint arXiv:2303.12712, 2023
Cited by 2631 · 2023
Machine unlearning
L Bourtoule, V Chandrasekaran, CA Choquette-Choo, H Jia, A Travers, ...
2021 IEEE Symposium on Security and Privacy (SP), 141-159, 2021
Cited by 632 · 2021
Entangled watermarks as a defense against model extraction
H Jia, CA Choquette-Choo, V Chandrasekaran, N Papernot
30th USENIX Security Symposium (USENIX Security 21), 1937-1954, 2021
Cited by 232 · 2021
Exploring connections between active learning and model extraction
V Chandrasekaran, K Chaudhuri, I Giacomelli, S Jha, S Yan
29th USENIX Security Symposium (USENIX Security 20), 1309-1326, 2020
Cited by 154 · 2020
On the effectiveness of mitigating data poisoning attacks with gradient shaping
S Hong, V Chandrasekaran, Y Kaya, T Dumitraş, N Papernot
arXiv preprint arXiv:2002.11497, 2020
Cited by 129 · 2020
Unrolling SGD: Understanding factors influencing machine unlearning
A Thudi, G Deza, V Chandrasekaran, N Papernot
2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P), 303-319, 2022
Cited by 103 · 2022
Proof-of-learning: Definitions and practice
H Jia, M Yaghini, CA Choquette-Choo, N Dullerud, A Thudi, ...
2021 IEEE Symposium on Security and Privacy (SP), 1039-1056, 2021
Cited by 85 · 2021
Face-off: Adversarial face obfuscation
V Chandrasekaran, C Gao, B Tang, K Fawaz, S Jha, S Banerjee
arXiv preprint arXiv:2003.08861, 2020
Cited by 50 · 2020
A general framework for detecting anomalous inputs to DNN classifiers
J Raghuram, V Chandrasekaran, S Jha, S Banerjee
International Conference on Machine Learning, 8764-8775, 2021
Cited by 40* · 2021
PowerCut and Obfuscator: an exploration of the design space for privacy-preserving interventions for voice assistants
V Chandrasekaran, S Banerjee, B Mutlu, K Fawaz
arXiv preprint arXiv:1812.00263, 2018
Cited by 36* · 2018
Traversing the quagmire that is privacy in your smart home
C Gao, V Chandrasekaran, K Fawaz, S Banerjee
Proceedings of the 2018 Workshop on IoT Security and Privacy, 22-28, 2018
Cited by 32 · 2018
Analyzing and improving neural networks by generating semantic counterexamples through differentiable rendering
L Jain, V Chandrasekaran, U Jang, W Wu, A Lee, A Yan, S Chen, S Jha, ...
arXiv preprint arXiv:1910.00727, 2019
Cited by 30* · 2019
SoK: Machine learning governance
V Chandrasekaran, H Jia, A Thudi, A Travers, M Yaghini, N Papernot
arXiv preprint arXiv:2109.10870, 2021
Cited by 20 · 2021
A framework for analyzing spectrum characteristics in large spatio-temporal scales
Y Zeng, V Chandrasekaran, S Banerjee, D Giustiniano
The 25th Annual International Conference on Mobile Computing and Networking …, 2019
Cited by 20 · 2019
Verifiable and provably secure machine unlearning
T Eisenhofer, D Riepel, V Chandrasekaran, E Ghosh, O Ohrimenko, ...
arXiv preprint arXiv:2210.09126, 2022
Cited by 18 · 2022
Proof-of-learning is currently more broken than you think
C Fang, H Jia, A Thudi, M Yaghini, CA Choquette-Choo, N Dullerud, ...
2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), 797-816, 2023
Cited by 15* · 2023
Attention satisfies: A constraint-satisfaction lens on factual errors of language models
M Yuksekgonul, V Chandrasekaran, E Jones, S Gunasekar, R Naik, ...
arXiv preprint arXiv:2309.15098, 2023
Cited by 14 · 2023
Teaching language models to hallucinate less with synthetic tasks
E Jones, H Palangi, C Simões, V Chandrasekaran, S Mukherjee, A Mitra, ...
arXiv preprint arXiv:2310.06827, 2023
Cited by 10 · 2023
Confidant: A privacy controller for social robots
B Tang, D Sullivan, B Cagiltay, V Chandrasekaran, K Fawaz, B Mutlu
2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI …, 2022
Cited by 10 · 2022
Diversity of thought improves reasoning abilities of large language models
R Naik, V Chandrasekaran, M Yuksekgonul, H Palangi, B Nushi
arXiv preprint arXiv:2310.07088, 2023
Cited by 6 · 2023