I am currently recruiting motivated PhD students and graduate student interns to work on projects in alternative learning methods for deep networks, distributed/decentralized optimization, and the intersection of these areas. If you are interested, please send me an informal inquiry with your CV and, optionally, a short description of your motivations and research interests. Please put [Recruiting] in your email subject. Note that I may not be able to respond to all emails. Students should demonstrate strong programming and math skills; previous research experience, particularly in machine learning, is a plus. Below are some related publications to give an idea of the research projects:
- Nøkland, "Direct Feedback Alignment Provides Learning in Deep Neural Networks," NeurIPS 2016.
- Laskin et al., "Parallel Training of Deep Networks with Local Updates," arXiv preprint, 2020.
- Belilovsky, Eickenberg, and Oyallon, "Decoupled Greedy Learning of CNNs," ICML 2020.
- Xu, Huo, and Huang, "On the Acceleration of Deep Learning Model Parallelism with Staleness," CVPR 2020.
- Choromanska et al., "Beyond Backprop: Online Alternating Minimization with Auxiliary Variables," ICML 2019.
- Lian et al., "Asynchronous Decentralized Parallel Stochastic Gradient Descent," ICML 2018.