[CSDM] Fwd: [Theory-Read] FW: ORFE Colloquium: Yann LeCun, Today at 4:30pm, Sherrerd Hall 101
Avi Wigderson
avi at ias.edu
Tue Apr 11 09:47:10 EDT 2017
Should be interesting!
________________________________________
From: Wilks Statistics Seminar [wilks-seminar at Princeton.EDU] on behalf of Carol Smith [carols at PRINCETON.EDU]
Sent: Tuesday, April 11, 2017 8:50 AM
To: wilks-seminar at Princeton.EDU
Subject: ORFE Colloquium: Yann LeCun, Today at 4:30pm, Sherrerd Hall 101
=== ORFE Colloquium Announcement ===
DATE: Today, April 11, 2017
TIME: 4:30pm
LOCATION: Sherrerd Hall, room 101
SPEAKER: Yann LeCun, Facebook AI Research & New York University
TITLE: Deep Learning and Obstacles to AI, Mathematical and Otherwise
ABSTRACT: Deep learning is at the root of revolutionary progress in
visual and auditory perception by computers, and is pushing the state of
the art in natural language understanding, dialog systems and language
translation. Deep learning systems are deployed everywhere from
self-driving cars to social-network content filtering to search-engine
ranking and medical image analysis. A deep learning system is typically
an "almost" differentiable function, composed of multiple highly
non-linear steps, parametrized by a numerical vector with 10^7 to 10^9
dimensions, and whose evaluation of one sample requires 10^9 to 10^10
numerical operations. Training such a system consists of optimizing a
highly non-convex objective averaged over millions of training samples
using a stochastic gradient optimization procedure. How can that
possibly work? The fact that it does work very well is one of the
theoretical puzzles of deep learning. For example, both the accuracy and
the learning speed improve as the system's capacity is increased.
Why? A more important puzzle is how to train large neural networks under
uncertainty. This is key to enabling unsupervised learning and
model-based reinforcement learning where the machine is trained from raw
natural data, without human-supplied labels. A class of methods called
adversarial training is currently our favorite approach to this
problem. It may be the key to building learning systems that learn how
the world works by observation.
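The training procedure the abstract describes — stochastic gradient
descent on an objective averaged over many samples — can be illustrated
with a minimal sketch. This is not code from the talk: it fits a single
parameter (rather than a 10^7-to-10^9-dimensional vector) to toy data by
minimizing mean squared error on random minibatches:

```python
# Toy stochastic gradient descent: fit w in the model y ~ w*x to noisy
# samples of y = 3*x, minimizing mean squared error over minibatches.
import random

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(1000)]
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in xs]

w = 0.0    # the parameter (a huge vector in a real deep network)
lr = 0.1   # learning rate

for step in range(500):
    batch = random.sample(data, 32)   # random minibatch of samples
    # gradient of the batch-averaged squared error (w*x - y)^2 w.r.t. w
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    w -= lr * grad                    # stochastic gradient step

print(w)  # converges close to the true slope 3.0
```

For this convex one-parameter objective, convergence is unsurprising; the
puzzle the abstract points at is why the same noisy-gradient procedure
works so well on highly non-convex objectives in millions of dimensions.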
BIO: Yann LeCun is Director of AI Research at Facebook and Silver
Professor at New York University, affiliated with the Courant Institute,
the Center for Neural Science and the Center for Data Science, for which
he served as founding director until 2014. He received an EE Diploma
from ESIEE (Paris) in 1983 and a PhD in Computer Science from Université
Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University
of Toronto, he joined AT&T Bell Laboratories. He became head of the
Image Processing Research Department at AT&T Labs-Research in 1996, and
joined NYU in 2003 after a short tenure at the NEC Research Institute.
In late 2013, LeCun became Director of AI Research at Facebook, while
remaining on the NYU faculty part-time. He was a visiting professor at
Collège de France in 2016. His research interests include machine
learning and artificial intelligence, with applications to computer
vision, natural language understanding, robotics, and computational
neuroscience. He is best known for his work in deep learning and the
invention of the convolutional network method which is widely used for
image, video and speech recognition. He is a member of the US National
Academy of Engineering, the recipient of the 2014 IEEE Neural Network
Pioneer Award, the 2015 IEEE Pattern Analysis and Machine Intelligence
Distinguished Researcher Award, the 2016 Lovie Award for Lifetime
Achievement, and an honorary doctorate from IPN, Mexico.
_______________________________________________
Theory-Read mailing list
Theory-Read at lists.cs.princeton.edu
https://lists.cs.princeton.edu/mailman/listinfo/theory-read