<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=us-ascii"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0in;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:#0563C1;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:#954F72;
        text-decoration:underline;}
pre
        {mso-style-priority:99;
        mso-style-link:"HTML Preformatted Char";
        margin:0in;
        margin-bottom:.0001pt;
        font-size:10.0pt;
        font-family:"Courier New";}
span.EmailStyle17
        {mso-style-type:personal-compose;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
span.HTMLPreformattedChar
        {mso-style-name:"HTML Preformatted Char";
        mso-style-priority:99;
        mso-style-link:"HTML Preformatted";
        font-family:"Courier New";}
.MsoChpDefault
        {mso-style-type:export-only;
        font-family:"Calibri",sans-serif;}
@page WordSection1
        {size:8.5in 11.0in;
        margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=EN-US link="#0563C1" vlink="#954F72"><div class=WordSection1><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'>INSTITUTE FOR ADVANCED STUDY<o:p></o:p></span></pre><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'>School of Mathematics<o:p></o:p></span></pre><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'>Princeton, NJ 08540<o:p></o:p></span></pre><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'><o:p> </o:p></span></pre><pre><b><span style='font-size:11.0pt;font-family:"Times New Roman",serif'>Members' Seminar<o:p></o:p></span></b></pre><pre><b><span style='font-size:11.0pt;font-family:"Times New Roman",serif'>Monday, April 2<o:p></o:p></span></b></pre><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'><o:p> </o:p></span></pre><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'><o:p> </o:p></span></pre><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'>Topic: On Expressiveness and Optimization in Deep Learning<o:p></o:p></span></pre><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'>Speaker: Nadav Cohen, Member, School of Mathematics<o:p></o:p></span></pre><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'>Time/Room: 2:00pm - 3:00pm/Simonyi Hall 101<o:p></o:p></span></pre><pre><span style='font-size:11.0pt;font-family:"Times New Roman",serif'>Abstract Link: <a href="http://www.math.ias.edu/seminars/abstract?event=129359">http://www.math.ias.edu/seminars/abstract?event=129359</a><o:p></o:p></span></pre><p class=MsoNormal><span style='font-family:"Times New Roman",serif'><o:p> </o:p></span></p><p class=MsoNormal><span style='font-family:"Times New Roman",serif'>Three fundamental factors determine the quality of a statistical learning algorithm: expressiveness, optimization and generalization. 
The classic strategy for handling these factors is relatively well understood. In contrast, the radically different approach of deep learning, which in the last few years has revolutionized the world of artificial intelligence, is shrouded in mystery. This talk will describe a series of works aimed at unraveling some of the mysteries behind expressiveness and optimization. I will begin by establishing an equivalence between convolutional networks (the most successful deep learning architecture to date) and hierarchical tensor decompositions. The equivalence will be used to answer various questions concerning the expressiveness of convolutional networks. I will then turn to recent work analyzing the optimization of deep linear networks. Surprisingly, and in stark contrast with conventional wisdom, we find that depth, despite the non-convexity it introduces, can accelerate optimization.<o:p></o:p></span></p><p class=MsoNormal><span style='font-family:"Times New Roman",serif'><o:p> </o:p></span></p><p class=MsoNormal><span style='font-family:"Times New Roman",serif'><o:p> </o:p></span></p><p class=MsoNormal><span style='font-family:"Times New Roman",serif'><o:p> </o:p></span></p><p class=MsoNormal><span style='font-family:"Times New Roman",serif'><o:p> </o:p></span></p><p class=MsoNormal><span style='font-family:"Times New Roman",serif'><o:p> </o:p></span></p><p class=MsoNormal><span style='font-family:"Times New Roman",serif'>-----------------------------------------<o:p></o:p></span></p><p class=MsoNormal><span style='font-family:"Times New Roman",serif'><a href="http://www.math.ias.edu/seminars">http://www.math.ias.edu/seminars</a><o:p></o:p></span></p></div></body></html>