Photo of Logan Engstrom

Logan Engstrom

Email: engstrom@mit.edu
Google Scholar: here
CV/Resume: here
GitHub: @lengstrom

About

I'm a third-year PhD student at MIT EECS advised by Aleksander Mądry and funded by a Google PhD Fellowship. I'm most interested in machine learning robustness and generalization.

I went to MIT for undergrad and grew up in beautiful Lincoln, MA. Outside of research I contribute to open source and play pickup soccer.

[publications] [preprints] [open source]

Publications

  1. Andrew Ilyas, Sam Park, Logan Engstrom, Guillaume Leclerc, and Aleksander Madry. Datamodels: Predicting Predictions from Training Data. ICML 2022.

  2. Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai Vemprala, Aleksander Madry, and Ashish Kapoor. Unadversarial Examples: Designing Objects for Robust Vision. NeurIPS 2021.

  3. Kai Xiao, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. Noise or Signal: The Role of Image Backgrounds in Object Recognition. ICLR 2021.

  4. Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do Adversarially Robust ImageNet Models Transfer Better? NeurIPS 2020 Oral Presentation.

  5. Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, and Aleksander Mądry. Identifying Statistical Bias in Dataset Replication. ICML 2020.

  6. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. From ImageNet to Image Classification: Contextualizing Progress on Benchmarks. ICML 2021.

  7. Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Mądry. Implementation Matters in Deep RL: A Case Study on PPO and TRPO. ICLR 2020 Oral Presentation.

  8. Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Mądry. A Closer Look at Deep Policy Gradients. ICLR 2020 Oral Presentation.

  9. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial Examples Are Not Bugs, They Are Features. NeurIPS 2019 Spotlight Presentation.

  10. Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, and Aleksander Madry. Computer Vision with a Single (Robust) Classifier. NeurIPS 2019.

  11. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Mądry. Robustness May Be at Odds with Accuracy. ICLR 2019.

  12. Andrew Ilyas, Logan Engstrom, Ludwig Schmidt, and Aleksander Mądry. Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors. ICLR 2019.

  13. Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Mądry. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations. ICML 2019.

  14. Logan Engstrom, Andrew Ilyas, and Anish Athalye. Evaluating and Understanding the Robustness of Adversarial Logit Pairing. NeurIPS 2018 Machine Learning and Computer Security Workshop.

  15. Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box Adversarial Attacks with Limited Queries and Information. ICML 2018.

  16. Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing Robust Adversarial Examples. ICML 2018, Demo at NeurIPS 2017 Machine Learning and Computer Security Workshop.

  17. Daniel Kang, Richard Sherwood, Amira Barkal, Tatsunori Hashimoto, Logan Engstrom, and David Gifford. DNase-capture Reveals Differential Transcription Factor Binding Modalities. PLoS ONE, 2017.

Preprints

  1. Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, and Aleksander Madry. 3DB: A Framework for Debugging Computer Vision Models. 2021.

  2. Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, and Aleksander Madry. Learning Perceptually-Aligned Representations via Adversarial Robustness. 2019.

Open Source