December 22, 2018: Our paper “Efficient Nonconvex Empirical Risk Minimization via Adaptive Sample Size Methods” is accepted to AISTATS 2019.
November 7, 2018: Our paper “A Newton-based Method for Nonconvex Optimization with Fast Evasion of Saddle Points” is accepted for publication in SIAM Journal on Optimization.
November 6, 2018: Delivered the talk “Escaping saddle points in constrained optimization” at the 2018 INFORMS Annual Meeting.
October 29, 2018: I’m co-chairing two sessions on “Large-scale Optimization” and “Optimization for Machine Learning” at the 2018 INFORMS Annual Meeting. Please stop by the sessions if you are at the conference.
September 5, 2018: I’m honored to receive the Joseph and Rosaline Wolf Award for Best Doctoral Dissertation granted by the Department of Electrical and Systems Engineering of the University of Pennsylvania.
September 4, 2018: The following papers are accepted for spotlight presentation at NIPS 2018:
- Direct Runge-Kutta Discretization Achieves Acceleration
- Escaping Saddle Points in Constrained Optimization
September 4, 2018: New paper out “A Primal-Dual Quasi-Newton Method for Exact Consensus Optimization”.
August 14, 2018: Delivered the talk “Achieving Acceleration via Direct Discretization of Heavy-Ball Ordinary Differential Equation” at the DIMACS/TRIPODS workshop on Optimization and Machine Learning.
July 13, 2018: The following papers are accepted to CDC 2018:
- Quantized Decentralized Consensus Optimization
- A Newton Method for Faster Navigation in Cluttered Environments
July 2, 2018: New paper out “Quantized Decentralized Consensus Optimization”.
June 4, 2018: My Ph.D. dissertation has been selected as the Penn nominee for the 2018 CGS/ProQuest Distinguished Dissertation Award in Mathematics, Physical Sciences, and Engineering.
May 11, 2018: The following papers are accepted to ICML 2018:
- Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings
- Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
May 2, 2018: New paper out “Direct Runge-Kutta Discretization Achieves Acceleration”.
April 30, 2018: I will serve on the Technical Program Committee (TPC) of the symposium on “Distributed Learning and Optimization over Networks” for GlobalSIP 2018. Please consider submitting your work to this symposium.
April 25, 2018: New paper out “Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization”.
March 23-24, 2018: I’m chairing two sessions on “Submodular Optimization” and “Nonconvex Optimization” at the INFORMS Optimization Society Conference. Please stop by the sessions if you are around. You can find the program here.
February 11, 2018: New paper out “Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings”.
February 1, 2018: Our paper “IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate” is accepted for publication in SIAM Journal on Optimization.
January 29, 2018: Our paper “Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate” is accepted for publication in SIAM Journal on Optimization.
January 29, 2018: Our paper “Parallel Stochastic Successive Non-convex Approximation Method for Large-scale Dictionary Learning” is accepted to ICASSP 2018.
January 1, 2018: Officially started as a Postdoctoral Associate in the Laboratory for Information and Decision Systems (LIDS) at MIT.
December 22, 2017: Our paper “Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap” is accepted to AISTATS 2018.
December 22, 2017: Our paper “Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method” is accepted to AISTATS 2018.
December 8, 2017: Presented our paper “Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap” at NIPS Workshop on Discrete Structures in Machine Learning (DISCML).
December 4, 2017: Presented our paper “First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization” at NIPS 2017.
November 22, 2017: Our paper “Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap” is accepted for presentation at the Discrete Structures in Machine Learning (DISCML) Workshop at NIPS 2017.
November 6, 2017: New paper out “Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap”.
October 29, 2017: Alejandro and I have organized a session on “Distributed Methods for Large-Scale Optimization” with four fantastic speakers for Asilomar 2017. Please drop by if you are attending the conference. For more information please check the program.
October 24, 2017: Presented our work entitled “Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate” at the 2017 INFORMS Annual Meeting. The slides are available here.
September 13, 2017: Hamed and I are organizing a session on “Submodular Maximization” for the 2018 INFORMS Optimization Society Conference.
September 12, 2017: Alejandro, Santiago, and I are organizing a session on “Algorithms for Nonconvex Optimization” for the 2018 INFORMS Optimization Society Conference.
September 4, 2017: Our paper “First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization” is accepted for presentation at the 2017 Conference on Neural Information Processing Systems (NIPS).
September 2, 2017: New paper out “First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization”.
August 22, 2017: Presented our work on “Incremental Quasi-Newton Methods with Local Superlinear Convergence Rate” at the DIMACS Workshop on Distributed Optimization, Information Processing, and Learning.
August 16, 2017: Started at the Simons Institute for the Theory of Computing as a Research Fellow for the program on “Bridging Continuous and Discrete Optimization”.
July 27, 2017: New paper out “A Second Order Method for Nonconvex Optimization”.
July 24, 2017: Successfully defended my Ph.D. thesis entitled “Efficient Methods for Large-Scale Empirical Risk Minimization”.
June 29, 2017: Our poster on “Incremental Quasi-Newton Methods with Local Superlinear Convergence Rate” is accepted for presentation at the DIMACS Workshop on Distributed Optimization, Information Processing, and Learning.
May 24, 2017: Check out our new paper “Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method”.
April 25, 2017: Our paper “Decentralized Quasi-Newton Methods” has been among the top 50 most frequently accessed documents in IEEE Transactions on Signal Processing for the month of March 2017.
March 28, 2017: Our paper “Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization” is accepted for publication in the IEEE Transactions on Automatic Control.
March 20, 2017: I will serve on the Technical Program Committee (TPC) of the Symposium on “Distributed Optimization and Resource Management over Networks” for the 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP 2017).
March 8, 2017: Presented our recent work on “An Incremental Quasi-Newton Method with a Local Superlinear Convergence Rate” at ICASSP 2017. The slides are available here. You can also find the posters for my other ICASSP papers below.
- A Double Incremental Aggregated Gradient Method with Linear Convergence Rate for Large-Scale Optimization
- A Diagonal-Augmented Quasi-Newton Method with Application to Factorization Machines
- Large-Scale Nonconvex Stochastic Optimization by Doubly Stochastic Successive Convex Approximation
February 26, 2017: Our paper “Stochastic Averaging for Constrained Optimization with Application to Online Resource Allocation” is accepted for publication in the IEEE Transactions on Signal Processing.
February 15, 2017: Presented our recent work on “Incremental Quasi-Newton Methods with Local Superlinear Convergence Rate” at the ITA 2017 Graduation Day. Please find the slides here.
February 7, 2017: Alejandro presented our joint work on “High Order Methods for Empirical Risk Minimization” at IPAM Workshop on Emerging Wireless Networks organized by UCLA. The slides are available here.
February 2, 2017: New paper out “IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate”.
January 31, 2017: I received a Research Fellowship from the Simons Institute for the Theory of Computing at UC Berkeley for the program on “Bridging Continuous and Discrete Optimization” (Fall 2017).
January 28, 2017: Our paper “Decentralized Quasi-Newton Methods” is accepted for publication in the IEEE Transactions on Signal Processing.
January 27, 2017: Tech Talk at Google Research, Mountain View, CA. Title: “High Order Methods for Empirical Risk Minimization”.
January 24, 2017: Alejandro and I will organize a session on “Distributed Optimization and Learning” for the 2017 Asilomar Conference on Signals, Systems, and Computers.
January 4, 2017: Nominated by the ESE Department of UPenn to give an oral presentation at the ITA 2017 Graduation Day.
December 25, 2016: Our paper “Network Newton Distributed Optimization Methods” has been among the top 50 most frequently accessed documents in IEEE Transactions on Signal Processing for the month of November 2016.
December 14, 2016: Presented “Online Optimization in Dynamic Environments: Improved Regret Rates for Strongly Convex Problems” at the 55th IEEE Conference on Decision and Control. My other CDC papers “A Decentralized Quasi-Newton Method for Dual Formulations of Consensus Optimization” and “A Decentralized Second-Order Method for Dynamic Optimization” were presented by Mark and Wei, respectively.
December 12, 2016: The following papers are accepted for publication in the Proceedings of ICASSP 2017.
- An Incremental Quasi-Newton Method with a Local Superlinear Convergence Rate
- A Double Incremental Aggregated Gradient Method with Linear Convergence Rate for Large-Scale Optimization
- Large-Scale Nonconvex Stochastic Optimization by Doubly Stochastic Successive Convex Approximation
- A Diagonal-Augmented Quasi-Newton Method with Application to Factorization Machines
December 7-9, 2016: I’m in Washington, DC, attending the 2016 Global Conference on Signal and Information Processing (GlobalSIP). Below you can find the list of my papers at GlobalSIP 2016.
- A Data-driven Approach to Stochastic Network Optimization
- Decentralized Constrained Consensus Optimization with Primal-Dual Splitting Projection
- An Asynchronous Quasi-Newton Method for Consensus Optimization
December 5, 2016: The GAPSA Research Travel Grant Committee awarded me financial support for my expenses at the 2016 Asilomar Conference on Signals, Systems, and Computers.
November 21, 2016: Ph.D. Proposal, Title: “Efficient Methods for Large-Scale Optimization”. The slides are available here.
November 16, 2016: Presented “DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers” at the INFORMS Annual Meeting in Nashville.
November 7, 2016: Presented “ESOM: Exact Second-Order Method for Consensus Optimization” at the 50th Asilomar Conference on Signals, Systems, and Computers. Alec presented our other paper “Doubly Stochastic Algorithms for Large-Scale Optimization”.
November 1, 2016: New paper out “Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate”.
October 7, 2016: New paper out “Stochastic Averaging for Constrained Optimization with Application to Online Resource Allocation”.