A Discussion between Yann LeCun and Christopher Manning
Earlier this month, I had the exciting opportunity to moderate a discussion between Professors Yann LeCun and Christopher Manning, titled “What innate priors should we build into the architecture of deep learning systems?” The event was a special installment of AI Salon, a discussion series held within the Stanford AI Lab that often features expert guests.
Part Two: Interpretability and Attention
This is the second of a two-part post in which I describe four broad research trends that I observed at ACL 2017. In Part One I explored the shifting assumptions we make about language, both at the sentence and the word level, and how these shifts are prompting both a comeback of linguistic structure and a re-evaluation of word embeddings.
In this part, I discuss two further, closely related themes: interpretability and attention.
Part One: Linguistic Structure and Word Embeddings
In this two-part post, I describe four broad research trends that I observed at the conference (and its co-located events) through papers, presentations, and discussions. The content is guided entirely by my own research interests; accordingly, it's mostly focused on deep learning, sequence-to-sequence models, and adjacent topics. This first part explores two inter-related themes: linguistic structure and word representations.