Description
1 Gradient Descent


(3 points) We often use iterative optimization algorithms such as Gradient Descent to find w that minimizes a loss function f(w). Recall that in gradient descent, we start with an initial

value of w (say w^{(1)}) and iteratively take a step in the direction of the negative gradient of the objective function, i.e.






w^{(t+1)} = w^{(t)} − η ∇f(w^{(t)})        (1)





for learning rate η > 0.
In this question, we will develop a slightly deeper understanding of this update rule, in particular for minimizing a convex function f(w). Note: this analysis will not directly carry over to training neural networks since loss functions for training neural networks are typically not convex, but this will (a) develop intuition and (b) provide a starting point for research in nonconvex optimization (which is beyond the scope of this class).
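For concreteness, the update rule in Eq. (1) can be run on a simple convex quadratic; the function, learning rate, and iteration count below are illustrative choices, not part of the problem:

```python
import numpy as np

# Illustrative run of the update rule (1): w_{t+1} = w_t - eta * grad f(w_t).
# The convex quadratic f(w) = ||w - c||^2 and all constants are example choices.
c = np.array([3.0, -2.0])          # minimizer of f
f = lambda w: np.sum((w - c) ** 2)
grad_f = lambda w: 2.0 * (w - c)

eta = 0.1                          # learning rate eta > 0
w = np.zeros(2)                    # initial iterate w^{(1)}
for _ in range(200):
    w = w - eta * grad_f(w)        # the update in Eq. (1)

print(w, f(w))                     # w approaches c, f(w) approaches 0
```

Because f here is strongly convex and smooth, the iterates contract toward the minimizer geometrically.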
Recall the first-order Taylor approximation of f at w^{(t)}:





f(w) ≈ f(w^{(t)}) + ⟨w − w^{(t)}, ∇f(w^{(t)})⟩        (2)




When f is convex, this approximation forms a lower bound of f, i.e.





f(w) ≥ f(w^{(t)}) + ⟨w − w^{(t)}, ∇f(w^{(t)})⟩   ∀w        (3)

where the right-hand side is the affine lower bound to f(·).
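The lower-bound property in Eq. (3) is easy to verify numerically for a one-dimensional convex example; the choice f(w) = e^w below is purely illustrative:

```python
import numpy as np

# Numerical illustration of Eq. (3): for a convex f, the first-order Taylor
# expansion at w_t is a *global* lower bound. f(w) = e^w is an example choice.
f = lambda w: np.exp(w)
grad = lambda w: np.exp(w)

w_t = 0.5
ws = np.linspace(-3.0, 3.0, 1001)
affine = f(w_t) + (ws - w_t) * grad(w_t)   # affine lower bound built at w_t
print(np.all(f(ws) >= affine - 1e-12))     # holds everywhere, tight at w_t
```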
Since this approximation is a ‘simpler’ function than f(·), we could consider minimizing the approximation instead of f(·). Two immediate problems: (1) the approximation is affine (thus unbounded from below) and (2) the approximation is faithful only for w close to w^{(t)}. To solve both problems, we add a squared ℓ₂ proximity term to the approximation minimization:




w^{(t+1)} = argmin_w  f(w^{(t)}) + ⟨w − w^{(t)}, ∇f(w^{(t)})⟩ + (λ/2) ‖w − w^{(t)}‖²        (4)

where the first two terms are the affine lower bound to f(·), λ > 0 controls the tradeoff, and ‖w − w^{(t)}‖² is the proximity term.



Notice that the optimization problem above is an unconstrained quadratic programming problem, meaning that it can be solved in closed form (hint: gradients).
What is the solution w^{(t+1)} of the above optimization? What does that tell you about the gradient descent update rule? What is the relationship between λ and η?
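Whatever closed form you derive can be sanity-checked numerically. The sketch below minimizes the surrogate objective of Eq. (4) over a dense one-dimensional grid; the convex example f, the point w_t, and λ are all arbitrary illustrative choices:

```python
import numpy as np

# Sanity check for Eq. (4) in one dimension: minimize the surrogate
# f(w_t) + (w - w_t) f'(w_t) + (lam/2)(w - w_t)^2 over a dense grid and
# compare with the closed form you derive. f(w) = w^4 + w^2, w_t, and lam
# are arbitrary illustrative choices.
f = lambda w: w ** 4 + w ** 2
grad = lambda w: 4 * w ** 3 + 2 * w

w_t, lam = 1.0, 5.0
ws = np.linspace(-2.0, 2.0, 400001)
surrogate = f(w_t) + (ws - w_t) * grad(w_t) + 0.5 * lam * (ws - w_t) ** 2
w_next = ws[np.argmin(surrogate)]
print(w_next)   # compare against your closed-form minimizer
```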

(3 points) Let’s prove a lemma that will initially seem disconnected from the rest of the analysis but will come in handy in the next subquestion when we start combining things. Specifically, the analysis in this subquestion holds for any w^{?}, but in the next subquestion we will use it for the w^{?} that minimizes f(w).
Consider a sequence of vectors v_1, v_2, …, v_T, and an update equation of the form w^{(t+1)} = w^{(t)} − η v_t with w^{(1)} = 0. Show that:





Σ_{t=1}^{T} ⟨w^{(t)} − w^{?}, v_t⟩ ≤ ‖w^{?}‖² / (2η) + (η/2) Σ_{t=1}^{T} ‖v_t‖²        (5)
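Before proving the lemma, it can help to see it hold empirically. The sketch below draws arbitrary vectors v_t and an arbitrary reference point w^{?} (both random here, purely for illustration) and compares the two sides of Eq. (5):

```python
import numpy as np

# Empirical check of the lemma (5): the v_t are arbitrary vectors and w_star
# an arbitrary reference point (both randomly drawn here for illustration).
rng = np.random.default_rng(0)
T, d, eta = 50, 3, 0.1
vs = rng.normal(size=(T, d))
w_star = rng.normal(size=d)

w = np.zeros(d)                     # w^{(1)} = 0
lhs = 0.0
for v in vs:
    lhs += (w - w_star) @ v         # accumulate <w^{(t)} - w_star, v_t>
    w = w - eta * v                 # update w^{(t+1)} = w^{(t)} - eta v_t

rhs = (w_star @ w_star) / (2 * eta) + (eta / 2) * np.sum(vs ** 2)
print(lhs <= rhs)                   # the inequality of Eq. (5)
```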





(3 points) Now let’s start putting things together and analyze the convergence rate of gradient descent, i.e. how fast it converges to w^{?}.
First, show that for w̄ = (1/T) Σ_{t=1}^{T} w^{(t)}:

f(w̄) − f(w^{?}) ≤ (1/T) Σ_{t=1}^{T} ⟨w^{(t)} − w^{?}, ∇f(w^{(t)})⟩        (6)
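The inequality in Eq. (6) follows from convexity alone and can also be checked empirically; the convex quadratic f, step size, and horizon below are illustrative choices:

```python
import numpy as np

# Empirical check of Eq. (6) for the averaged iterate w_bar. The convex
# quadratic f, step size, and horizon are illustrative choices.
c = np.array([1.0, -2.0])
f = lambda w: np.sum((w - c) ** 2)
grad = lambda w: 2.0 * (w - c)
w_star = c                                   # minimizer of f

eta, T = 0.05, 100
w = np.zeros(2)
iterates, grads = [], []
for _ in range(T):
    g = grad(w)
    iterates.append(w)
    grads.append(g)
    w = w - eta * g

w_bar = np.mean(iterates, axis=0)
lhs = f(w_bar) - f(w_star)
rhs = np.mean([(wt - w_star) @ g for wt, g in zip(iterates, grads)])
print(lhs <= rhs)                            # guaranteed by convexity
```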




Next, use the result from part 2, with upper bounds B and ρ for ‖w^{?}‖ and ‖∇f(w^{(t)})‖.
Let σ(·) denote the standard sigmoid function. Now, for the following vector-valued function:

f_1(w_1, w_2) = e^{e^{w_1} + e^{2w_2}} + σ(e^{w_1} + e^{2w_2})        (12)

f_2(w_1, w_2) = w_1 w_2 + max(w_1, w_2)        (13)
(a) Draw the computation graph. Compute the value of f at w̃ = (1, −1).
(b) At this w̃, compute the Jacobian ∂f/∂w using numerical differentiation (using ∆w = 0.01).
(c) At this w̃, compute the Jacobian using forward-mode automatic differentiation.
(d) At this w̃, compute the Jacobian using backward-mode automatic differentiation.
(e) Don’t you love that software exists to do this for us?
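As a sanity check for part (b), a finite-difference Jacobian of f = (f_1, f_2) from Eqs. (12)–(13) can be sketched as below. `numerical_jacobian` is a hypothetical helper name, and central differences are one reasonable scheme (the problem may intend a different one):

```python
import numpy as np

# Hedged sketch for part (b): a central-difference Jacobian of f = (f1, f2)
# from Eqs. (12)-(13). `numerical_jacobian` is a hypothetical helper name;
# central differences are one reasonable finite-difference scheme.
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def f(w):
    w1, w2 = w
    z = np.exp(w1) + np.exp(2 * w2)
    f1 = np.exp(z) + sigmoid(z)
    f2 = w1 * w2 + max(w1, w2)
    return np.array([f1, f2])

def numerical_jacobian(f, w, dw=0.01):
    # J[i, j] ~ (f_i(w + dw e_j) - f_i(w - dw e_j)) / (2 dw)
    w = np.asarray(w, dtype=float)
    J = np.zeros((len(f(w)), len(w)))
    for j in range(len(w)):
        e = np.zeros_like(w)
        e[j] = dw
        J[:, j] = (f(w + e) - f(w - e)) / (2 * dw)
    return J

w_tilde = np.array([1.0, -1.0])   # evaluation point from part (a)
J = numerical_jacobian(f, w_tilde)
print(J)
```

The finite-difference result gives you a ground truth to compare against your forward- and backward-mode answers in parts (c) and (d).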

Paper Review
The first of our paper reviews for this course comes from a much-acclaimed spotlight presentation at NeurIPS 2019 on the topic ‘Weight Agnostic Neural Networks’ by Adam Gaier and David Ha from Google Brain.
The paper presents a very interesting proposition that, through a series of experiments, reexamines some fundamental notions about neural networks – in particular, the comparative importance of architectures and weights in a network’s predictive performance.
The paper can be viewed here. The authors have also written a blog post with intuitive visualizations to help understand its key concepts better.
Guidelines: Please restrict your reviews to no more than 350 words. The evaluation rubric for this section is as follows:

(2 points) What is the main contribution of this paper? Briefly summarize its key insights, strengths and weaknesses.

(2 points) What is your personal takeaway from this paper? This could be expressed either in terms of relating the approaches adopted in this paper to your traditional understanding of learning parameterized models, or potential future directions of research in the area which the authors haven’t addressed, or anything else that struck you as being noteworthy.

Implement and train a network on CIFAR-10
Setup Instructions: Before attempting this question, see the setup instructions here.

(Up to 29 points) Now, we will learn how to implement a softmax classifier, vanilla neural networks (or Multi-Layer Perceptrons), and ConvNets. You will begin by writing the forward and backward passes for different types of layers (including convolution and pooling), and then go on to train a shallow ConvNet on the CIFAR-10 dataset in Python. Next, you will learn to use PyTorch, a popular open-source deep learning framework, and use it to replicate the experiments from before.
Follow the instructions provided here
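As a warm-up for the forward and backward passes you will write (this is a hedged sketch, not the assignment’s required API, which is specified in the starter code), here is a softmax classifier’s loss and gradient together with the numerical gradient check you will use to debug each layer:

```python
import numpy as np

# Hedged warm-up sketch (not the assignment's required API): forward and
# backward passes of a softmax classifier, plus the numerical gradient check
# used to debug each layer you implement.
def softmax_loss(W, X, y):
    """W: (D, C) weights, X: (N, D) inputs, y: (N,) integer class labels."""
    scores = X @ W
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)
    N = X.shape[0]
    loss = -np.log(probs[np.arange(N), y]).mean()         # cross-entropy
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1.0                       # dL/dscores
    dW = X.T @ dscores / N                                # backward pass
    return loss, dW

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
y = rng.integers(0, 3, size=5)
W = 0.01 * rng.normal(size=(4, 3))
loss, dW = softmax_loss(W, X, y)

# Check one analytic gradient entry against a finite difference.
h = 1e-5
W_pert = W.copy()
W_pert[0, 0] += h
num_grad = (softmax_loss(W_pert, X, y)[0] - loss) / h
print(abs(num_grad - dW[0, 0]))   # should be tiny
```

The same analytic-vs-numerical comparison scales to every layer you implement: if the two gradients disagree, the backward pass has a bug.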