The ongoing ICLR 2019 (International Conference on Learning Representations) has brought a host of surprises and key specimens of innovation. The conference started on Monday this week, and today is already its last day! This article covers the highlights of ICLR 2019 and introduces you to the ongoing research carried out by experts in deep learning, data science, computational biology, machine vision, speech recognition, text understanding, robotics, and much more.
The team behind ICLR 2019 invited papers on unsupervised objectives for agents, curiosity and intrinsic motivation, few-shot reinforcement learning, model-based planning and exploration, representation learning for planning, learning unsupervised goal spaces, unsupervised skill discovery, and evaluation of unsupervised agents.
— Alfredo Canziani @ ICLR (@alfcnz) May 6, 2019
ICLR 2019, sponsored by Google, marks the presence of 200 researchers contributing to and learning from the academic research community by presenting papers and posters.
ICLR 2019 Day 1 highlights: Neural networks, Algorithmic fairness, AI for social good and much more
— Hanie Sedghi (@HanieSedghi) May 6, 2019
The first day of the conference started with a talk on Highlights of Recent Developments in Algorithmic Fairness by Cynthia Dwork, an American computer scientist at Harvard University. She focused on "group fairness" notions that address the relative treatment of different demographic groups, and talked about research in the ML community that explores fairness via representations. Dwork also discussed the investigation of scoring, classifying, ranking, and auditing for fairness.
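Group fairness notions like demographic parity can be audited with a few lines of code. Below is a minimal sketch (not from the talk itself): it compares a classifier's positive-prediction rate across two hypothetical demographic groups, where the function name and toy data are illustrative assumptions.

```python
import numpy as np

def positive_rates_by_group(predictions, groups):
    """Return the rate of positive predictions for each demographic group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

# Toy audit: one binary decision per individual, two groups "A" and "B".
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
grps = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = positive_rates_by_group(preds, grps)
gap = abs(rates["A"] - rates["B"])  # demographic-parity gap: 0 means equal treatment
```

A large gap flags a potential group-fairness violation, though the talk's broader point is that such aggregate notions capture only one facet of fairness.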
Generating high fidelity images with Subscale Pixel Networks and Multidimensional Upscaling
— Nal Kalchbrenner (@NalKalchbrenner) May 6, 2019
Jacob Menick, a senior research engineer at DeepMind, and Nal Kalchbrenner, staff research scientist and co-creator of the Google Brain Amsterdam research lab, talked on Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling. They discussed the challenges involved in generating large images and how they address them with the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of equally sized image slices. They also explained how Multidimensional Upscaling is used to grow an image in both size and depth via intermediate stages corresponding to distinct SPNs.
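The "equally sized slices" in the SPN come from subsampling the image on an interleaved grid. A minimal sketch of that decomposition (an assumption about the slicing scheme, not the authors' code) looks like this:

```python
import numpy as np

def subscale_slices(image, s):
    """Decompose an H x W image into s*s interleaved slices of equal size.

    Slice (i, j) keeps every s-th pixel starting at row i, column j, so each
    slice is a low-resolution view of the whole image.
    """
    return [image[i::s, j::s] for i in range(s) for j in range(s)]

img = np.arange(16).reshape(4, 4)   # toy 4x4 "image"
slices = subscale_slices(img, 2)    # four 2x2 slices
```

An autoregressive decoder can then generate slice 0 first and condition each subsequent slice on the previously generated ones, which keeps every generation step small.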
In all, 10 workshops on AI and deep learning were conducted on the same day, covering topics such as:
- The 2nd Learning from Limited Labeled Data (LLD) Workshop: Representation Learning for Weak Supervision and Beyond
- Deep Reinforcement Learning Meets Structured Prediction
- AI for Social Good
- Debugging Machine Learning Models
The first day also witnessed a few interesting talks on neural networks, covering topics such as The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks and How Powerful are Graph Neural Networks?. Overall, the first day was quite enriching and informative.
ICLR 2019 Day 2 highlights: AI in climate change, Protein structure, adversarial machine learning, CNN models and much more
AI’s role in climate change
— Nataniel Ruiz @ ICLR (@natanielruizg) May 7, 2019
Tuesday, the second day of the conference, started with an interesting talk, Can Machine Learning Help to Conduct a Planetary Healthcheck?, by Emily Shuckburgh, a climate scientist and deputy head of the Polar Oceans team at the British Antarctic Survey. She talked about the sophisticated numerical models of the Earth's systems that have been developed so far based on physics, chemistry, and biology. She then highlighted a set of "grand challenge" problems and discussed various ways in which machine learning is helping to advance our capacity to address them.
Protein structure with a differentiable simulator
On the second day of ICLR 2019, Chris Sander, a computational biologist, together with John Ingraham, Adam J. Riesselman, and Debora Marks from Harvard University, talked on Learning Protein Structure with a Differentiable Simulator. They discussed the protein folding problem and their aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data. They composed a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end differentiable model of atomic protein structure given amino acid sequence information. They also discussed techniques for stabilizing backpropagation and demonstrated the model's capacity to make multimodal predictions.
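At the heart of such a simulator is the Langevin update rule: take a step down the energy gradient and add Gaussian noise. A minimal sketch on a toy quadratic energy (illustrative only; the paper's simulator operates on atomic protein coordinates with a learned neural energy) is:

```python
import numpy as np

def langevin_step(x, grad_energy, eps=0.01, rng=np.random.default_rng(0)):
    """One unadjusted Langevin update: gradient descent on the energy plus
    injected Gaussian noise, which makes the chain sample rather than just
    minimize."""
    return x - eps * grad_energy(x) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

# Toy energy E(x) = 0.5 * ||x||^2, whose gradient is simply x.
x = np.ones(3)
for _ in range(500):
    x = langevin_step(x, lambda v: v)
```

Because each step is a differentiable function of its inputs, the whole unrolled chain can be backpropagated through, which is what lets the energy function be trained end to end.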
Adversarial Machine Learning
— Nataniel Ruiz @ ICLR (@natanielruizg) May 7, 2019
Day 2 was long and featured Ian Goodfellow, a machine learning researcher and the inventor of GANs, talking on Adversarial Machine Learning. He talked about how supervised learning works, making machine learning private, getting machine learning to work for new tasks, and reducing the dependency on large amounts of labeled data. He then discussed how adversarial techniques in machine learning are involved in the latest research frontiers.
Day 2 also covered poster presentations and a few talks on Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset, Learning to Remember More with Less Memorization, etc.
ICLR 2019 Day 3 highlights: GAN, Autonomous learning and much more
Developmental autonomous learning: AI, Cognitive Sciences and Educational Technology
Pierre-Yves Oudeyer (@pyoudeyer) at #iclr2019 TARL workshop on taking the learning progress hypothesis seriously – getting good exploration with simple strategies that sample choices proportionally to how quickly they let you learn. pic.twitter.com/9MsVvPKyeb
— Drew Jaegle @ICLR (@drew_jaegle) May 6, 2019
Day 3 of ICLR 2019 started with a talk by Pierre-Yves Oudeyer, research director at Inria, on Developmental Autonomous Learning: AI, Cognitive Sciences and Educational Technology. He presented a research program that focuses on computational modeling of child development and learning mechanisms, and discussed the several developmental forces that guide exploration in large real-world spaces. He also talked about models of curiosity-driven autonomous learning that enable machines to sample and explore their own goals and learning strategies, and explained how these models and techniques can be successfully applied in the domain of educational technologies.
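The learning-progress hypothesis mentioned in the workshop tweet above has a very simple operational form: sample goals in proportion to how quickly the agent is improving on them. A minimal sketch (the variable names and toy numbers are assumptions for illustration):

```python
import numpy as np

def sample_goal(progress, rng, temperature=1.0):
    """Sample a goal index with probability proportional to its absolute
    recent learning progress (improvement in competence per unit time)."""
    p = np.abs(progress) ** temperature
    p = p / p.sum()
    return rng.choice(len(progress), p=p)

rng = np.random.default_rng(0)
progress = np.array([0.05, 0.40, 0.05])  # goal 1 is where learning is fastest
draws = [sample_goal(progress, rng) for _ in range(1000)]
counts = np.bincount(draws, minlength=3)
```

Goals that are already mastered or currently hopeless show little progress and are sampled rarely, so the agent's attention automatically tracks the frontier of its competence.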
Generating knockoffs for feature selection using Generative Adversarial Networks (GAN)
Another interesting talk on the third day of ICLR 2019 was Generating Knockoffs for Feature Selection Using Generative Adversarial Networks (GANs) by James Jordon from Oxford University, Jinsung Yoon from the University of California, and Mihaela van der Schaar, Professor at UCLA. The speakers presented a Generative Adversarial Networks framework that helps generate knockoffs with no assumptions on the feature distribution. Their model consists of four networks: a generator, a discriminator, a stability network, and a power network. They further demonstrated the capability of their model to perform feature selection.
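Once knockoff copies of the features exist, the standard knockoff filter compares each real feature's importance against its knockoff's. Below is a minimal sketch of that selection step, assuming precomputed model coefficients (the generation of the knockoffs themselves, the GAN part of the paper, is omitted):

```python
import numpy as np

def knockoff_select(coef_real, coef_knockoff, threshold=0.0):
    """Keep feature j when its fitted coefficient magnitude beats its
    knockoff's by more than the threshold; the knockoff acts as a
    per-feature null control."""
    w = np.abs(coef_real) - np.abs(coef_knockoff)
    return np.where(w > threshold)[0]

# Hypothetical coefficients from fitting a model on [features, knockoffs].
coef_real = np.array([0.90, 0.05, 0.70, 0.01])
coef_ko = np.array([0.10, 0.06, 0.20, 0.02])

selected = knockoff_select(coef_real, coef_ko, threshold=0.1)
```

In the full procedure the threshold is chosen from the statistics themselves to control the false discovery rate; here it is fixed purely for illustration.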
This was followed by a few more interesting talks, such as Deterministic Variational Inference for Robust Bayesian Neural Networks, and a series of poster presentations.
ICLR 2019 Day 4 highlights: Neural networks, RNN, neuro-symbolic concepts and much more
Learning natural language interfaces with neural models
Day 4's focus was more on neural models and neuro-symbolic concepts. The day started with a talk on Learning Natural Language Interfaces with Neural Models by Mirella Lapata, a computer scientist. She gave an overview of recent progress on learning natural language interfaces that allow users to interact with various devices and services using everyday language. She also addressed the structured prediction problem of mapping natural language utterances onto machine-interpretable representations, outlined the various challenges it poses, and described a general modeling framework based on neural networks that tackles these challenges.
Ordered neurons: Integrating tree structures into Recurrent Neural Networks
More #ICLR2019: Tomorrow 11-13, @pengchengyin, @gneubig, @miltos1, Alex Gaunt and me will be presenting work on learning to represent edits (poster #51). This is my favourite of my ICLR'19 papers, because it's just a first step in a new and exciting direction.Come and learn more! pic.twitter.com/faVLDYh0ln
— Marc Brockschmidt (@mmjb86) May 8, 2019
The next interesting talk was on Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks by Yikang Shen, Shawn Tan, and Aaron Courville from the University of Montreal, and Alessandro Sordoni, a researcher at Microsoft. In this talk, the authors presented a new RNN unit, ON-LSTM, which achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.
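The "ordered" part of ON-LSTM comes from a cumulative softmax, often written cumax, which produces a soft, monotonically non-decreasing gate: neurons earlier in the order are updated less often than later ones, inducing a hierarchy. A minimal sketch of that activation (the full ON-LSTM cell has more machinery around it):

```python
import numpy as np

def cumax(logits):
    """Cumulative softmax used by ON-LSTM's master gates: softmax the logits,
    then take the running sum, yielding a non-decreasing gate in [0, 1]."""
    e = np.exp(logits - logits.max())  # stabilized softmax
    return np.cumsum(e / e.sum())

g = cumax(np.array([2.0, 0.0, -2.0]))  # soft "split point" over neuron order
```

The position where the gate transitions from near 0 to near 1 acts as a learned split point, which is what lets the model recover tree-like structure from sequences.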
The last day of ICLR 2019 was exciting: researchers presented their innovations, and attendees got a chance to interact with the experts.
For a complete overview of each of these sessions, you can head over to ICLR's Facebook page.