Discussions: Fall 2019
This Semester's Plan
We’re working on filling this out! Be sure to check individual meeting times, as we occasionally have to stray from the schedule!
Planned Meetings
Welcome to the Intelligence Group!
Welcome to the Intelligence group! At this meeting, we'll be discussing everyone's research interests, whether passive or active. After that, we'll start narrowing down paper topics to fill the five unplanned meetings, so everyone can begin getting a feel for the breadth of computation as a field of research.
A Few Useful Things to Know About Machine Learning
Machine learning algorithms can figure out how to perform important tasks by generalizing from examples. This is often feasible and cost-effective where manual programming is not. As more data becomes available, more ambitious problems can be tackled. As a result, machine learning is widely used in computer science and other fields. However, developing successful machine learning applications requires a substantial amount of black art that is hard to find in textbooks. This article summarizes twelve key lessons that machine learning researchers and practitioners have learned. These include pitfalls to avoid, important issues to focus on, and answers to common questions.
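As a rough illustration of what "generalizing from examples" means in practice, here is a minimal sketch (not from the article; scikit-learn and the toy data are our own assumptions): a classifier is fit on a handful of labeled points and then asked about points it has never seen.

```python
# A minimal sketch (not from the article) of "learning = generalizing from examples":
# fit a small decision tree on labeled points, then predict labels for unseen ones.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: (hours studied, hours slept) -> passed the exam (1) or not (0)
X_train = [[1, 4], [2, 8], [8, 7], [9, 6], [3, 5], [7, 8]]
y_train = [0, 0, 1, 1, 0, 1]

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X_train, y_train)

# The model now generalizes: it assigns labels to points it was never shown.
print(clf.predict([[6, 7], [1, 9]]))  # e.g. [1 0]
```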
Deep Learning
Abstract: Deep learning allows for computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
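Since the abstract names the core mechanism, backpropagation, here is a minimal sketch of that idea (our own toy example, not code from the paper): a two-layer network in plain NumPy whose weights are updated from the error signal, layer by layer, via the chain rule.

```python
# A minimal sketch (not from the paper) of backpropagation in a two-layer network:
# compute the output error, then push gradients back through each layer in turn.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                  # 64 samples, 3 input features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary target

W1 = rng.normal(scale=0.5, size=(3, 8))       # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))       # hidden -> output weights

for step in range(500):
    h = np.tanh(X @ W1)                       # hidden-layer representation
    p = 1 / (1 + np.exp(-(h @ W2)))           # predicted probability
    grad_out = (p - y) / len(X)               # error at the output layer
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # chain rule back through tanh
    grad_W1 = X.T @ grad_h
    W2 -= 0.5 * grad_W2                       # gradient-descent updates
    W1 -= 0.5 * grad_W1

print("training accuracy:", ((p > 0.5) == y).mean())
```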
Speech Recognition With Deep Recurrent Neural Networks
Our first non-review paper of the semester will be on using Deep RNNs to perform speech recognition tasks. This approach seeks to combine the advantages of deep neural networks with the "flexible use of long-range context that empowers RNNs". The abstract is rather lengthy, so I'll refrain from copying it here. Our weekly meeting on this paper will go over questions from the paper, strategies for reading more complex research papers, and how to identify the strengths and weaknesses of journal articles.
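To make "long-range context" concrete, here is a minimal sketch of a vanilla recurrent step (a stand-in of our own, not the paper's deep bidirectional LSTM/CTC setup): the hidden state at each frame is computed from the current input and the previous hidden state, so information can persist across the sequence.

```python
# A minimal sketch (not the paper's architecture) of the recurrence that lets an RNN
# carry context across a sequence of acoustic frames.
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_hid = 10, 4, 6                 # 10 frames, 4 features each, 6 hidden units
frames = rng.normal(size=(T, n_in))       # stand-in for acoustic feature frames

W_x = rng.normal(scale=0.3, size=(n_in, n_hid))
W_h = rng.normal(scale=0.3, size=(n_hid, n_hid))
h = np.zeros(n_hid)

for x_t in frames:
    h = np.tanh(x_t @ W_x + h @ W_h)      # context flows forward through h

print("final hidden state:", h.round(3))
```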
Attention Is All You Need
This paper, based on work performed at Google Brain and Google Research, proposes a new network architecture for tackling machine translation problems (among other ML transduction problems). This new approach simplifies the classic approach to translation while also achieving better performance. Accompanying the paper is a Jupyter notebook created at Harvard that annotates the original article while also supplying code for the work. This paper is most similar to the kinds of articles you can expect to be reading when doing original research.
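As a taste of what the architecture builds on, here is a minimal NumPy sketch (our own, not code from the paper or the Harvard notebook) of scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, the building block the paper stacks into multi-head attention.

```python
# A minimal sketch of scaled dot-product attention (not taken from the paper's code).
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))   # 3 query positions, dimension 8
K = rng.normal(size=(5, 8))   # 5 key/value positions
V = rng.normal(size=(5, 8))
print(attention(Q, K, V).shape)  # (3, 8): one context vector per query
```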
The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?
This week, we're shifting focus slightly to look at ethics within the field of artificial intelligence. Ethics are an important consideration for anyone interested in the field of AI. This particular paper focuses on one of the largest debates in current AI ethics: accident algorithms in self-driving cars. In the event a self-driving car realizes an accident is about to occur, what should it do? Which outcomes should be prioritized? The paper reviews the current viewpoints on these questions.
How to Grow a Mind
In coming to understand the world (in learning concepts, acquiring language, and grasping causal relations), our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
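To make "probabilistic inference from sparse data" concrete, here is a minimal sketch in the spirit of the review's concept-learning examples (our own toy construction, not code from the article): a few candidate number concepts are scored against a handful of observations with Bayes' rule and a size-principle likelihood.

```python
# A minimal sketch (not from the review) of Bayesian concept learning from sparse data:
# each hypothesis is a set of numbers; each observed example contributes a 1/|h| likelihood.
hypotheses = {
    "even numbers":    {n for n in range(1, 101) if n % 2 == 0},
    "multiples of 10": {n for n in range(1, 101) if n % 10 == 0},
    "powers of 2":     {2, 4, 8, 16, 32, 64},
}
prior = {name: 1 / len(hypotheses) for name in hypotheses}

data = [16, 8, 2, 64]   # sparse observations of some unknown number concept

posterior = {}
for name, members in hypotheses.items():
    likelihood = 1.0
    for x in data:
        likelihood *= (1 / len(members)) if x in members else 0.0
    posterior[name] = prior[name] * likelihood

total = sum(posterior.values())
for name in posterior:
    posterior[name] /= total
print(posterior)   # after only four examples, nearly all mass lands on "powers of 2"
```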
Experimental Investigation of Ant Traffic Under Crowded Conditions
This week will be a short break from our NLP/CogSci papers. Ants are one of the few creatures on the planet that engage in two-way traffic just like us. By looking at how ants navigate their self-organized traffic systems, we can learn how to better organize our own analogous systems (such as intersections, roadways, etc.). This paper experimentally investigates the efficiency of ants navigating paths involving bidirectional movement, and finds that ants achieve roughly twice the efficiency of humans in equivalent scenarios. What makes ants so much better than humans at traffic organization? What can we learn from ants' organizational paradigms? Should ants be driving our cars instead of humans? These are some of the questions investigated in this week's paper.
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Building Machines That Learn and Think for Themselves
We agree with Lake and colleagues on their list of key ingredients for building humanlike intelligence, including the idea that model-based reasoning is essential. However, we favor an approach that centers on one additional ingredient: autonomy. In particular, we aim toward agents that can both build and exploit their own internal models, with minimal human hand-engineering. We believe an approach centered on autonomous learning has the greatest chance of success as we scale toward real-world complexity, tackling domains for which ready-made formal models are not available. Here we survey several important examples of the progress that has been made toward building autonomous agents with humanlike abilities, and highlight some outstanding challenges.