<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=us-ascii"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0in;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:#0563C1;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:#954F72;
        text-decoration:underline;}
span.EmailStyle17
        {mso-style-type:personal-compose;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;}
@page WordSection1
        {size:8.5in 11.0in;
        margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
INSTITUTE FOR ADVANCED STUDY
School of Mathematics
Princeton, NJ 08540

Theoretical Computer Science/Discrete Math Seminars
Week of December 10, 2018

--------------
To view mathematics in titles and abstracts, please click on the talk's link.
--------------

Monday, December 10

Computer Science/Discrete Mathematics Seminar I
Topic: A matrix expander Chernoff bound
Speaker: Ankit Garg, Microsoft Research
Time/Room: 11:15am - 12:15pm/Simonyi Hall 101
Abstract Link: http://www.math.ias.edu/seminars/abstract?event=128879

Monday, December 10

Seminar on Theoretical Machine Learning
Topic: On Expressiveness and Optimization in Deep Learning
Speaker: Nadav Cohen, Member, School of Mathematics
Time/Room: 12:15pm - 1:45pm/White Levy Room
Abstract Link: http://www.math.ias.edu/seminars/abstract?event=139466

Tuesday, December 11

Computer Science/Discrete Mathematics Seminar II
Topic: An invitation to tensor networks
Speaker: Michael Walter, University of Amsterdam
Time/Room: 10:30am - 12:30pm/Simonyi Hall 101
Abstract Link: http://www.math.ias.edu/seminars/abstract?event=129115

1 A matrix expander Chernoff bound
Ankit Garg

Chernoff-type bounds study concentration of sums of independent random variables and are extremely useful in various settings. In many applications, however, the random variables may not be completely independent but only have limited independence. One such setting, which turns out to be useful in derandomization and theoretical computer science in general, involves random walks on expanders. I will talk about a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on an expander. Our proof is based on a new multi-matrix extension of the Golden-Thompson inequality, which improves in some ways on a recent inequality due to Sutter et al. and may be of independent interest, as well as an adaptation of an argument for the scalar case due to Healy. Based on joint work with Yin Tat Lee, Zhao Song and Nikhil Srivastava.

http://www.math.ias.edu/seminars/abstract?event=128879
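For orientation only, and with constants and technical conditions omitted (the precise theorem is in the linked abstract and paper), a bound of this type has roughly the following shape: if G is a regular expander whose second largest eigenvalue in absolute value is lambda, if f assigns to each vertex a d x d Hermitian matrix with norm at most 1 and average zero, and if v_1, ..., v_k is a stationary random walk on G, then

    \Pr\left[ \left\| \frac{1}{k}\sum_{i=1}^{k} f(v_i) \right\| \ge \epsilon \right]
      \;\le\; d \cdot \exp\!\left( -\Omega\!\left( \epsilon^2 (1-\lambda)\, k \right) \right).

For d = 1 this is the scalar expander Chernoff bound (the setting of Healy's argument mentioned above), and for independent samples it reduces to a classical Chernoff bound.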
2 On Expressiveness and Optimization in Deep Learning
Nadav Cohen

Understanding deep learning calls for addressing three fundamental questions: expressiveness, optimization and generalization. Expressiveness refers to the ability of compactly sized deep neural networks to represent functions capable of solving real-world problems. Optimization concerns the effectiveness of simple gradient-based algorithms in solving non-convex neural network training programs. Generalization treats the phenomenon of deep learning models not overfitting despite having many more parameters than examples to learn from. This talk will describe a series of works aimed at unraveling some of the mysteries behind expressiveness and optimization. I will begin by establishing an equivalence between convolutional and recurrent networks --- the most successful deep learning architectures to date --- and hierarchical tensor decompositions. The equivalence will be used to answer various questions concerning expressiveness, resulting in new theoretically-backed tools for deep network design. I will then turn to discuss a recent line of work analyzing optimization of deep linear neural networks. By studying the trajectories of gradient descent, we will derive the most general guarantee to date for efficient convergence to a global minimum of a gradient-based algorithm training a deep network. Moreover, in stark contrast with conventional wisdom, we will see that sometimes gradient descent can train a deep linear network faster than a classic linear model. In other words, depth can accelerate optimization, even without any gain in expressiveness, and despite introducing non-convexity to a formerly convex problem.

The talk will be of a high-level nature, suitable for a diverse crowd. It will cover works done in collaboration with Sanjeev Arora, Noah Golowich, Elad Hazan, Wei Hu, Yoav Levine, Or Sharir, Amnon Shashua, Ronen Tamari and David Yakira.

http://www.math.ias.edu/seminars/abstract?event=139466

3 An invitation to tensor networks
Michael Walter

Tensor networks describe high-dimensional tensors as the contraction of a network (or graph) of low-dimensional tensors. Many interesting tensors can be succinctly represented in this fashion -- from many-body ground states in quantum physics to the matrix multiplication tensors in algebraic complexity. I will give a mathematical introduction to the formalism, give several examples, and sketch some of the most important results. We will discuss the role of the network, how symmetries are encoded, and tensor networks as a computational model, and survey some recent algorithmic results.

http://www.math.ias.edu/seminars/abstract?event=129115
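As a toy illustration of the formalism (added here for orientation, not taken from the talk), the sketch below builds a small matrix product state, one of the simplest tensor networks, and contracts its shared bond indices with numpy to recover the full tensor it represents. The site tensors and dimensions are arbitrary choices made for the example.

    # Toy example: a 4-site matrix product state (MPS), a simple tensor network.
    # Each site carries a small tensor; neighbouring sites share a "bond" index.
    import numpy as np

    rng = np.random.default_rng(0)
    d, chi = 2, 3  # physical dimension, bond dimension

    A1 = rng.standard_normal((d, chi))        # indices: (physical, right bond)
    A2 = rng.standard_normal((chi, d, chi))   # (left bond, physical, right bond)
    A3 = rng.standard_normal((chi, d, chi))
    A4 = rng.standard_normal((chi, d))        # (left bond, physical)

    # Contracting (summing over) the shared bond indices yields the full
    # d^4-entry tensor, described here by only O(d * chi^2) numbers per site.
    psi = np.einsum('ia,ajb,bkc,cl->ijkl', A1, A2, A3, A4)
    print(psi.shape)  # (2, 2, 2, 2)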
Computer Science/Discrete Math Seminars can be found on our web page:

http://www.math.ias.edu/csdm
http://www.math.ias.edu