Reading group schedule


Generative Adversarial Nets (Wednesday 2-3pm)

We (Dan Roy and I) wish to run a reading group on Generative Adversarial Nets (GANs). The goal is to understand what these nets are, what applications they might be useful for, and to come up with formal definitions for the setup and objective(s).
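For orientation, the minimax objective from the original GAN paper (Goodfellow et al.) is one natural starting point for such formal definitions: a generator G and a discriminator D play the two-player game

    \min_G \max_D \, V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

where p_data is the data distribution and p_z is the noise prior that the generator transforms into samples.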

We expect attendees to have read the original GAN paper (see below) and to be familiar with AdaBoost (a link to a short book chapter presenting it is provided below).

At this point we wish to discuss two further papers, the AdaGAN paper and the Training Generative Networks paper, both linked below.

We will hold the first meeting on Wednesday next week (Feb 8th), so that people have a chance to review these papers.

References:

Cheers,

Shai and Dan

The Bug Machine and KinPix (Th Feb 9, 2-3pm in the tea area)

Two basic open problems that will get you thinking in new ways

  • One is about the benefits of diversity, group learning, and certain algorithms that employ electric shocks
  • The other is about a GAN-like game that your grandmother would like to play

Manfred

On the Benefits of Sleeping for Neural Nets (Fr Feb 10, 11am-noon, room 116)

We will be looking at an idea called sleeping, or specialists. Sleeping was used to efficiently implement long-term memory effects in online learning with expert advice. We will review the idea and brainstorm about applications to neural nets, which currently cannot sleep.
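For those who want a concrete picture beforehand, here is a minimal sketch of one standard specialists update (our own illustrative code, not taken from any particular paper): only the awake experts predict and are reweighted, and the awake set's total weight is preserved, so a sleeping expert re-enters later rounds with its old share intact.

    import numpy as np

    def specialists_step(weights, awake, losses, eta=0.5):
        """One round of a sleeping-experts (specialists) update.
        weights: positive weight per expert (need not sum to 1)
        awake:   boolean mask of the experts awake this round
        losses:  per-expert losses (only awake entries are used)
        eta:     learning rate
        """
        w = weights.copy()
        awake_mass = w[awake].sum()
        # Predict using only the awake experts, by their current shares.
        pred_dist = w[awake] / awake_mass
        # Multiplicative update on the awake experts only.
        w[awake] *= np.exp(-eta * losses[awake])
        # Rescale so the awake set keeps its total mass; sleeping
        # experts are untouched.
        w[awake] *= awake_mass / w[awake].sum()
        return pred_dist, w

Freezing a sleeper's weight is what implements the long-term memory effect: an expert that was good long ago and then slept comes back with its credibility intact.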

We will explain the basics from scratch, so no need to prepare. Just come along for the ride.
There will be fun and some substance. We have a mind-boggling Bayesian interpretation.

References

At a later point we will discuss how this might be related to, e.g., LSTMs.

Cheers,

Wouter and Manfred

Working group on Learning to Model Structures and Data (2:00 - 3:30, Tu Feb 21 & Th Feb 23, room 116, Russell)

A theme that cuts across many domains of computer science and mathematics is to find simple representations of complex mathematical objects such as graphs, functions, or distributions on data. These representations need to capture how the object interacts with a class of tests, and to approximately determine the outcome of these tests.
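One way to make this precise (our phrasing; the papers below each use their own variant): for an object f, a class T of tests, and an accuracy ε, a representation g models f relative to T if

    \left| \, \mathbb{E}_{x \sim f}[t(x)] - \mathbb{E}_{x \sim g}[t(x)] \, \right| \le \epsilon \quad \text{for every } t \in T,

i.e., no test in T distinguishes f from g by more than ε. The examples in the next paragraph instantiate f and T in different domains.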

For example, in machine learning, the object might be a distribution on data points (high-dimensional real vectors), and the tests might be half-spaces. The goal would be to learn a simple representation of the data that determines the probability of any half-space, or possibly of intersections of half-spaces. In computational complexity, the object might be a Boolean function or a distribution on strings, and the tests are functions of low circuit complexity. In graph theory, the object is a large graph, and the tests are the cuts in the graph; the representation should approximately determine the size of any cut. In additive combinatorics, the object might be a function or distribution over an Abelian group, and the tests might be correlations with linear functions or polynomials.

The focus of the working group is to understand the common elements that underlie results in all of these areas, to use the connections between them to make existential results algorithmic, and to then use algorithmic versions of these results for new purposes. For example, can we use boosting, a technique from supervised learning, in an unsupervised context? Can we characterize the pseudo-entropy of distributions, a concept arising in cryptography? Do the properties of dense graphs “relativize” to sub-graphs of expanders? See the Learning to Model Structures and Data page for more notes and references. Below is a list of papers with results we will cover.

  • Adam R. Klivans, Rocco A. Servedio: Boosting and Hard-Core Sets. FOCS 1999.

  • Omer Reingold, Luca Trevisan, Madhur Tulsiani, Salil P. Vadhan: Dense Subsets of Pseudorandom Sets. FOCS 2008: 76-85

  • Luca Trevisan, Madhur Tulsiani, Salil P. Vadhan: Regularity, Boosting, and Efficiently Simulating Every High-Entropy Distribution. IEEE Conference on Computational Complexity 2009: 126-136

  • Russell Impagliazzo: Algorithmic Dense Model Theorems and Weak Regularity (unpublished)

  • Sita Gakkhar, Russell Impagliazzo, Valentine Kabanets: Hardcore Measures, Dense Models and Low Complexity Approximations (unpublished)
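
For orientation on the boosting connection, here is a hedged statement of Impagliazzo's hard-core lemma, the result the Klivans-Servedio paper derives via boosting (the exact size and density parameters differ across proofs): if every circuit of size s agrees with f : \{0,1\}^n \to \{0,1\} on at most a 1 - \delta fraction of inputs, then there is a measure on inputs of density roughly \delta under which every circuit of some smaller size s' agrees with f on at most a 1/2 + \epsilon fraction. The dense model theorems above have the same flavor, with circuits replaced by a general class of tests.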