In this course we will look at a handful of ubiquitous algorithms in machine learning. We will cover several classical tools in machine learning, but more emphasis will be given to recent advances in developing efficient and provable algorithms for learning tasks. A tentative syllabus/schedule can be found below; the topics may change based on student interest.



We will use Piazza for the course (to ask and answer questions, post announcements, homework, etc.). You can sign up here. The class page is here.

Assignments
Assignment 1: Due Oct 12.
Assignment 2: Due Oct 26.
Assignment 3: Due Nov 4, 4PM.
Assignment 4: Due Nov 30, 10PM.
Assignment 5: Due Dec 7, 10PM.

Prerequisites: It is critical to be familiar with the material from a typical undergraduate course in algorithms (equivalent to CS180) and an undergraduate course in linear algebra. If you have doubts about this, please talk to me right away. Familiarity with probability will be helpful.

Course work: We will have five assignments (10% each). Assignment scopes: 1 - lectures 1-4; 2 - lectures 5-8; 3 - lectures 9-12; 4 - lectures 14-17; 5 - lectures 18-20.

Mid-term - 25%, Nov 7 in class; material from lectures 1-12.

Project - 25%.

Assignment submission: We will use Gradescope for assignments; they must be submitted by 10PM on their due date. This is extremely helpful both for me and for you - you'll get better feedback and will have a digital record of all your assignments that you can refer to later. Things to keep in mind: 1) Within a week of the start of the course, you should receive a registration link from Gradescope; this will give you access to the website. If you don't receive it before the first homework is due, contact me immediately. 2) Watch this one-minute video with complete instructions, and follow them to the letter! The simple guidelines make the process considerably smoother. 3) Make sure you start each problem of an assignment on a new page. 4) To generate a PDF scan of your assignment, you can follow the instructions here; you can also use the scanners in the library.

Project: The final project can be a cohesive literature survey of a specific topic, a research project, or an experimental project investigating different algorithms on a specific learning problem; it can even take the form of participating in a machine learning competition. The project will be evaluated on the basis of a five-page (one-sided) report (due by December 9th, 5PM PST), which is expected to be at the level of a conference submission. The project can be done in teams of up to three students (the work will have to scale accordingly).

Resources: There is no required course text. The following links may be useful:
Sanjeev Arora's course.
Elad Hazan's course.
Ankur Moitra's course.
Draft of Foundations of Data Science by Hopcroft and Kannan.
Here are some lecture notes on gradient descent. Links to appropriate papers or other online material (typically other lecture notes) will be provided for each lecture.

Hours & Location: MW 2-3:50, Boelter Hall 5272. Office hours: Tuesdays 10:30 - 11:30, BH 3732H.


The following is a tentative list of topics to be covered.
Learning theory: what and how? (2 lectures)
How to model learning?
PAC model
Towards tractable learning models
Linearity: the Swiss-army knife (3 lectures)
Best-fit subspaces, low-rank approximations, and Singular Value Decomposition
Applications of SVD
Multiplicative weights and boosting (2 lectures)
Online optimization and regret
Boosting via multiplicative weights
Optimization: the work-horse of learning (3 lectures)
Convexity primer
Learning as optimization
Gradient descent
Stochastic gradient descent
The power of convex relaxations (2 lectures)
Compressed sensing
Convexification: matrix completion, sparse PCA
Neural networks (2 lectures)
Constant-depth circuits, backpropagation, and limitations
The reemergence of neural nets
Non-negative matrix factorization and topic models (2 lectures)
Basic models and algorithms
Algorithmic stability (2 lectures)
Stability as a tool for generalization
Independent component analysis and sparse coding (1 lecture)
ICA model and method of fourth moments

Academic honesty: Students are expected to fully abide by UCLA's student conduct policies, including Section 102.01 on academic honesty. You will find a wealth of helpful materials here, including the Student Guide to Academic Integrity. Academic dishonesty will be promptly reported to the Dean of Students' Office for adjudication and disciplinary action. Remember, cheating will have significant and irrevocable consequences for your academic record and professional future. Please don't cheat.

While collaboration with other students on assignments is fine, you should clearly acknowledge your collaborators. You should make your own slides, and when you use content from another source, you should explicitly say so. Under no circumstances may you use code directly from resources on the web without explicitly citing the source.