: an Applied Trolley Problem?

Other Meetings in this Series

This paper, based on work performed at Google Brain and Google Research, proposes a new network architecture for machine translation (among other machine-learning transduction problems). The new approach simplifies the classic approach to translation while also achieving better performance. Accompanying the paper is a Jupyter notebook created at Harvard that annotates the original article and supplies code for the ideas described in the work. This paper is most similar to the kind of article you can expect to read when doing original research.
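
The paper is not named in this description, but it centers on an attention-based architecture for sequence transduction. To give a flavor of the kind of computation such a model builds on, here is a minimal NumPy sketch of scaled dot-product attention. It is not code from the paper or from the Harvard notebook; the function, array names, and toy sizes are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal sketch of scaled dot-product attention.

    Q, K: (seq_len, d_k) arrays; V: (seq_len, d_v) array.
    Each output row is a weighted average of the rows of V, where the
    weights reflect how strongly the corresponding query matches each key.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # blend values by those weights

# Toy usage: 4 tokens with 8-dimensional representations (made-up sizes).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```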

In coming to understand the world, whether in learning concepts, acquiring language, or grasping causal relations, our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
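
To make "probabilistic inference over structured representations from sparse data" concrete, here is a toy sketch of Bayesian concept learning in Python. It is not a model from the review; the hypotheses, the uniform prior, and the size-principle likelihood are illustrative assumptions in the spirit of the number-game-style models common in this literature.

```python
import numpy as np

# Toy hypotheses: structured "rules" a learner might entertain about which
# numbers belong to a concept. All hypotheses, priors, and data are made up.
hypotheses = {
    "even numbers":     set(range(2, 101, 2)),
    "powers of two":    {2, 4, 8, 16, 32, 64},
    "multiples of ten": set(range(10, 101, 10)),
}
prior = {name: 1.0 / len(hypotheses) for name in hypotheses}

def posterior(examples, hypotheses, prior):
    """P(hypothesis | examples), assuming examples are drawn uniformly from the hypothesis."""
    post = {}
    for name, h in hypotheses.items():
        if all(x in h for x in examples):
            # Size principle: each example has probability 1/|h| under hypothesis h,
            # so smaller hypotheses that still fit the data are favored.
            post[name] = prior[name] * (1.0 / len(h)) ** len(examples)
        else:
            post[name] = 0.0
    z = sum(post.values())
    return {name: p / z for name, p in post.items()}

# Sparse data: three examples already concentrate belief on the most
# specific consistent hypothesis ("powers of two").
print(posterior([16, 8, 2], hypotheses, prior))
```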

Contributing Authors

John Muchovej

Founder of AI@UCF. Researcher in cognitive science and machine learning, focusing on intuitive physics and intuitive psychology.