<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Hi all,</p>
<p>Perhaps the last mailing list I'll bother you with:</p>
<p>theory-read <a class="moz-txt-link-rfc2396E" href="mailto:theory-read@lists.cs.princeton.edu"><theory-read@lists.cs.princeton.edu></a></p>
<p>that I am sure most of you subscribe to already (which in
particular announces the PU theory seminar below, but other local
PU TCS events as well). And again, Greg Valiant is an excellent
person to hear (and they serve lunch...)</p>
<p>Best,</p>
<p>Avi<br>
</p>
<div class="moz-forward-container"><br>
<br>
-------- Forwarded Message --------
<table class="moz-email-headers-table" cellspacing="0"
cellpadding="0" border="0">
<tbody>
<tr>
<th valign="BASELINE" nowrap="nowrap" align="RIGHT">Subject:
</th>
<td>[Theory-Read] Meet Gregory Valiant this Friday!</td>
</tr>
<tr>
<th valign="BASELINE" nowrap="nowrap" align="RIGHT">Date: </th>
<td>Tue, 15 Oct 2019 15:24:24 -0400 (EDT)</td>
</tr>
<tr>
<th valign="BASELINE" nowrap="nowrap" align="RIGHT">From: </th>
<td>Yufei Zheng <a class="moz-txt-link-rfc2396E" href="mailto:yufei@cs.princeton.edu"><yufei@cs.princeton.edu></a></td>
</tr>
<tr>
<th valign="BASELINE" nowrap="nowrap" align="RIGHT">To: </th>
<td>theory-read <a class="moz-txt-link-rfc2396E" href="mailto:theory-read@lists.cs.princeton.edu"><theory-read@lists.cs.princeton.edu></a></td>
</tr>
</tbody>
</table>
<br>
<br>
Hello everyone,<br>
<br>
From this week on I'll take over part of the theory lunch
scheduling from Raghuvansh. <br>
We have Gregory Valiant coming this week for theory lunch! He is
around and would love to meet people. Slots are available from
9am-6pm on Friday. Please let me know what times work for you. We
might have a student session if there's interest.<br>
<br>
Additionally, Gregory will also be around for dinner on Friday!
Let me know if you're interested. <br>
Best,<br>
Yufei Zheng<br>
<br>
<br>
----- Forwarded Message -----<br>
From: "Raghuvansh R. Saxena" <a class="moz-txt-link-rfc2396E" href="mailto:rrsaxena@cs.princeton.edu"><rrsaxena@cs.princeton.edu></a><br>
To: "theory-read" <a class="moz-txt-link-rfc2396E" href="mailto:theory-read@lists.cs.princeton.edu"><theory-read@lists.cs.princeton.edu></a><br>
Cc: "valiant" <a class="moz-txt-link-rfc2396E" href="mailto:valiant@stanford.edu"><valiant@stanford.edu></a><br>
Sent: Tuesday, October 15, 2019 3:00:00 PM<br>
Subject: [Theory-Read] Gregory Valiant @ Theory Lunch this Friday!<br>
<br>
Hello everyone, <br>
We are delighted to have Gregory Valiant @ Theory Lunch this
Friday! <br>
The food will be served at 11:45 am, and the talk will start at 12
pm. Please come early for lunch so that the talk starts on time. <br>
Location of the talk: 194 Nassau, Suite 21 lobby (outside room
251). Location of the lunch: 194 Nassau, Suite 22 Kitchen (outside
room 213). <br>
The details of the talk are below. <br>
See you there, Raghuvansh <br>
Title: New Problems and Perspectives on Learning, Sampling, and
Memory, in the Small Data Regime<br>
<br>
Abstract: I will discuss several new problems related to the
general challenge of understanding what conclusions can be made,
given a dataset that is relatively small in comparison to the
complexity or dimensionality of the underlying distribution from
which it is drawn. In the first setting we consider the problem of
learning a population of Bernoulli (or multinomial) parameters.
This is motivated by the "federated learning" setting, where we
have data from a large number of heterogeneous individuals, each
of whom supplies a very modest amount of data, and we ask to what
extent the number of data sources can compensate for the lack of
data from each source. Second, I will introduce the problem of
data "amplification". Given n independent draws from a
distribution, D, to what extent is it possible to output a set of
m > n datapoints that are indistinguishable from m i.i.d. draws
from D? Curiously, we show that nontrivial amplification is often
possible in the regime where n is too small to learn D to any
nontrivial accuracy. We also discuss connections between this
setting and the challenge of interpreting the behavior of GANs and
other ML/AI systems. Finally (if there is time), I will also
discuss memory/data tradeoffs for regression, with the punchline
that any algorithm that uses a subquadratic amount of memory will
require asymptotically more data than second-order methods to
achieve comparable accuracy. This talk is based on four joint
papers with various subsets of Weihao Kong, Brian Axelrod, Shivam
Garg, Vatsal Sharan, Aaron Sidford, Sham Kakade, and Ramya
Vinayak.<br>
_______________________________________________<br>
Theory-Read mailing list<br>
<a class="moz-txt-link-abbreviated" href="mailto:Theory-Read@lists.cs.princeton.edu">Theory-Read@lists.cs.princeton.edu</a><br>
<a class="moz-txt-link-freetext" href="https://lists.cs.princeton.edu/mailman/listinfo/theory-read">https://lists.cs.princeton.edu/mailman/listinfo/theory-read</a><br>
</div>
</body>
</html>