recurrent networks

Speech Recognition With Deep Recurrent Neural Networks

Our first non-review paper of the semester will be on using Deep RNNs to perform speech recognition tasks. This approach seeks to combine the advantages of deep neural networks with the "flexible use of long-range context that empowers RNNs". The abstract is rather lengthy, so I'll refrain from copying it here. Our weekly meeting on this paper will go over questions from the paper, strategies for reading more complex research papers, and how to identify the strengths and weaknesses of journal articles.
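
To give a concrete feel for what "deep RNN" means here, below is a minimal PyTorch sketch of a stacked bidirectional LSTM acoustic model. The feature dimension, layer sizes, and class count are illustrative assumptions rather than the paper's actual configuration, and the paper trains with sequence-level losses (CTC/transducer) rather than this plain per-frame classifier.

```python
# Minimal sketch of a deep bidirectional LSTM acoustic model (illustrative only;
# sizes and the per-frame classification head are assumptions, not the paper's setup).
import torch
import torch.nn as nn

class DeepBiLSTM(nn.Module):
    def __init__(self, n_features=40, n_hidden=128, n_layers=3, n_classes=29):
        super().__init__()
        # Stacking layers gives the "deep" part; bidirectionality gives long-range
        # context from both past and future frames.
        self.rnn = nn.LSTM(n_features, n_hidden, num_layers=n_layers,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * n_hidden, n_classes)  # 2x for the two directions

    def forward(self, x):            # x: (batch, time, n_features)
        h, _ = self.rnn(x)           # h: (batch, time, 2 * n_hidden)
        return self.out(h)           # per-frame class scores

# Example: a batch of 4 utterances, 100 frames of 40-dim filterbank features each.
scores = DeepBiLSTM()(torch.randn(4, 100, 40))
print(scores.shape)  # torch.Size([4, 100, 29])
```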

Who Needs Show Writers Nowadays?

This lecture is all about Recurrent Neural Networks. These are networks with added memory, which means they can learn from sequential data such as speech, text, videos, and more. Different types of RNNs and strategies for building them will also be covered. The project will be building an LSTM-RNN to generate new original scripts for the TV series “The Simpsons”. Come and find out if our networks can become better writers for the show!
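
As a rough sketch of what the project involves (not the workshop's actual code), here is a minimal character-level LSTM language model in PyTorch. The toy corpus, layer sizes, and names are placeholders; the real project would train on actual scripts and then sample new text from the model.

```python
# A minimal character-level LSTM language model, sketched in PyTorch.
# The toy corpus and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

text = "D'oh! " * 200                     # stand-in corpus; the project would use real scripts
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab, emb=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, idx, state=None):
        h, state = self.lstm(self.embed(idx), state)
        return self.head(h), state        # logits over the next character, plus LSTM state

model = CharLSTM(len(chars))
x = torch.tensor([[stoi[c] for c in text[:32]]])   # input characters
y = torch.tensor([[stoi[c] for c in text[1:33]]])  # targets: same sequence shifted by one
logits, _ = model(x)
loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)), y.reshape(-1))
print(loss.item())  # training would repeatedly minimize this next-character loss
```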

Writer's Block? RNNs Can Help!

This lecture is all about Recurrent Neural Networks. These are networks with memory, which means they can learn from sequential data such as speech, text, videos, and more. Different types of RNNs and strategies for building them will also be covered. The project will be building an LSTM-RNN to generate new original scripts for the TV series "The Simpsons". Come and find out if our networks can become better writers for the show!

What Makes Deep Learning More of an Art Than a Science?

Some of the hardest aspects of Machine Learning are the details. Almost every algorithm we use is sensitive to "hyperparameters", which affect the initialization, the speed of optimization, and even whether the model can become accurate at all. We'll cover the general heuristics you can use to figure out what hyperparameters to use, how to find the optimal ones, what you can do to make models more resilient, and the like. This workshop will be pretty "down-in-the-weeds" but will give you a better intuition about Machine Learning and its shortcomings.
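
As one concrete example of the kind of heuristic we'll discuss, here is a small random-search sketch over a few hyperparameters using scikit-learn. The dataset, model, and search ranges are illustrative assumptions, not material from the workshop.

```python
# Random search over a few hyperparameters on a toy dataset (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Hyperparameters to sample: hidden layer size, learning rate, and L2 strength.
param_distributions = {
    "hidden_layer_sizes": [(32,), (64,), (128,)],
    "learning_rate_init": np.logspace(-4, -1, 20),
    "alpha": np.logspace(-6, -2, 20),
}

search = RandomizedSearchCV(
    MLPClassifier(max_iter=300, random_state=0),
    param_distributions, n_iter=10, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```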

A Critical Review of Recurrent Neural Networks for Sequence Learning

Abstract: Countless learning tasks require dealing with sequential data. Image captioning, speech synthesis, and music generation all require that a model produce outputs that are sequences. In other domains, such as time series prediction, video analysis, and musical information retrieval, a model must learn from inputs that are sequences. Interactive tasks, such as translating natural language, engaging in dialogue, and controlling a robot, often demand both capabilities. Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes. Unlike standard feedforward neural networks, recurrent networks retain a state that can represent information from an arbitrarily long context window. Although recurrent neural networks have traditionally been difficult to train, and often contain millions of parameters, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful large-scale learning with them. In recent years, systems based on long short-term memory (LSTM) and bidirectional recurrent neural network (BRNN) architectures have demonstrated ground-breaking performance on tasks as varied as image captioning, language translation, and handwriting recognition. In this survey, we review and synthesize the research that over the past three decades first yielded and then made practical these powerful learning models. When appropriate, we reconcile conflicting notation and nomenclature. Our goal is to provide a self-contained explication of the state of the art together with a historical perspective and references to primary research.
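
To make the idea of "retaining a state across a long context window" concrete, here is a bare-bones vanilla RNN update step in NumPy. The sizes and random weights are illustrative only; real systems use the LSTM or BRNN cells the abstract describes.

```python
# Bare-bones vanilla RNN step in NumPy, to make the "state carried across time"
# idea concrete (sizes and random weights are illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 16
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden-to-hidden (the recurrent cycle)
b_h = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b): the new state mixes the current
    # input with everything summarized in the previous state.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(n_hidden)                    # initial state
for x_t in rng.normal(size=(5, n_in)):    # a length-5 input sequence
    h = rnn_step(x_t, h)
print(h.shape)  # (16,) -- the final state summarizes the whole sequence
```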