Our group conducts research on fundamental problems in machine learning, convex optimization, and game theory.
Online learning is a mathematical optimization framework for problems that take the form of playing a repeated game against an adversary. Examples of problems that fall into the online learning framework include online portfolio selection, designing recommendation systems, online navigation, time series analysis, statistical learning, and more. The quality of an online learning algorithm is measured by a quantity known as regret: the difference between the cumulative performance of the online algorithm and that of an optimal offline benchmark.
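As a minimal illustration of the regret measure described above, the following sketch runs online gradient descent on a toy sequence of squared losses and compares its cumulative loss to the best fixed decision in hindsight. The setting (scalar decisions in [0, 1], squared loss, and the step-size schedule) is an assumption chosen for illustration, not a description of our group's algorithms.

```python
import numpy as np

def online_gradient_descent(ys, eta=0.1):
    """Play x_t, suffer loss (x_t - y_t)^2, then take a gradient step.

    Returns the sequence of plays and the regret: cumulative online loss
    minus the loss of the best fixed point in hindsight (for squared
    loss, the mean of ys, clipped to the decision set [0, 1]).
    """
    x = 0.5  # initial play in the decision set [0, 1]
    plays, losses = [], []
    for t, y in enumerate(ys, start=1):
        plays.append(x)
        losses.append((x - y) ** 2)
        grad = 2 * (x - y)
        # decaying step size, then projection back onto [0, 1]
        x = min(1.0, max(0.0, x - (eta / np.sqrt(t)) * grad))
    best_fixed = min(1.0, max(0.0, float(np.mean(ys))))
    offline_loss = sum((best_fixed - y) ** 2 for y in ys)
    regret = sum(losses) - offline_loss
    return plays, regret
```

On an adversarial alternating sequence of targets, the regret grows far more slowly than the number of rounds, matching the sublinear-regret guarantees this framework aims for.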
Our group develops state-of-the-art algorithms for various online learning problems, with an emphasis on computational efficiency and optimal prediction/regret bounds. These algorithms range from general-purpose online learning to domain-specific methods. Recent examples include Adversarial MDP Learning, Online Matrix Prediction, Time Series Prediction, Linear Regression with Limited Feedback, and Universal Filtering.
Efficient Optimization for Machine Learning
In many modern data-intensive applications, computational efficiency is a major concern: any super-linear-time operation is intractable. A major research effort of our group is dedicated to the development of highly efficient, linear- or sublinear-time algorithms for very large-scale optimization problems, usually in the context of machine learning. Recent directions include:
- Development of sublinear-time algorithms for various large-scale machine learning problems such as linear classification, training support vector machines, and semidefinite optimization.
- Development of projection-free algorithms for convex optimization and online learning. The computational bottleneck in applying first-order optimization methods to large-scale complex models is an algorithmic procedure known as projection, which requires solving a quadratic problem over a convex set. We design algorithms that avoid the projection procedure and replace it with a potentially much more efficient linear optimization problem over the same convex set. Recent results include a projection-free algorithm for online learning and a linearly convergent conditional gradient method with applications to online learning and stochastic optimization.
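The projection-free idea above can be sketched with the classic conditional gradient (Frank-Wolfe) method: instead of a quadratic projection, each step solves a linear problem over the convex set, which for the probability simplex amounts to picking a single vertex. This is a generic textbook sketch under assumed details (simplex constraint set, squared-distance objective, the standard 2/(t+2) step size), not the specific linearly convergent variant mentioned above.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iters=500):
    """Conditional gradient (Frank-Wolfe) over the probability simplex.

    Each iteration solves a *linear* problem over the set -- for the
    simplex, the linear minimizer is just the vertex e_i with the
    smallest gradient coordinate -- rather than a quadratic projection.
    """
    x = x0.copy()
    for t in range(1, n_iters + 1):
        g = grad_f(x)
        i = int(np.argmin(g))          # linear minimizer over the simplex is a vertex
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (t + 2.0)        # standard diminishing step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Example: minimize ||x - b||^2 over the simplex (i.e., project b onto it,
# but without ever calling a projection oracle).
b = np.array([0.1, 0.6, 0.3, -0.2])
grad = lambda x: 2 * (x - b)
x0 = np.ones(4) / 4
x_star = frank_wolfe_simplex(grad, x0)
```

Every iterate is a convex combination of simplex vertices, so feasibility is maintained for free; the only per-step cost is one gradient evaluation and one argmin, which is the efficiency gain the bullet above refers to.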