Speech Recognition With Deep Recurrent Neural Networks
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection, and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
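As a concrete illustration of the loop the abstract describes (layers computing representations from the previous layer, with backpropagation indicating how to change the parameters), here is a minimal NumPy sketch of gradient descent on a two-layer network. This is not code from the paper; all shapes, names, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))          # 8 samples, 4 input features
y = rng.standard_normal((8, 1))          # regression targets (synthetic)
W1 = rng.standard_normal((4, 5)) * 0.1   # first-layer parameters
W2 = rng.standard_normal((5, 1)) * 0.1   # second-layer parameters
lr = 0.05                                # learning rate (arbitrary)

losses = []
for _ in range(300):
    # Forward pass: each layer's representation is computed from the
    # representation in the previous layer.
    h = np.tanh(X @ W1)                  # hidden-layer representation
    out = h @ W2                         # output layer
    err = out - y
    losses.append(float(np.mean(err ** 2)))

    # Backpropagation: the chain rule pushes the error signal back
    # through each layer, indicating how to change its parameters.
    dout = 2 * err / len(X)              # d(mean squared error)/d(out)
    dW2 = h.T @ dout
    dh = dout @ W2.T
    dW1 = X.T @ (dh * (1 - h ** 2))      # tanh'(z) = 1 - tanh(z)^2
    W1 -= lr * dW1
    W2 -= lr * dW2
```

After training, the recorded losses should decrease, which is the whole point of the gradient signal: it tells each layer how to adjust its internal parameters to reduce the error.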
This paper, based on work performed at Google Brain and Google Research, proposes a new network architecture for tackling machine translation problems (among other ML transduction problems). This new approach simplifies the classic approach to translation while also achieving better performance. Accompanying the paper is a Jupyter notebook created at Harvard that annotates the original article while also supplying the code mentioned in the work. This paper is most similar to the kinds of articles you can expect to be reading when doing original research.
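The paper itself is not reproduced here, but as a hedged illustration of the kind of attention operation that attention-based transduction architectures are built from, here is a short NumPy sketch of scaled dot-product attention. The function name, shapes, and dimensions are invented for this sketch and are not taken from the paper or the accompanying notebook.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: weight each value by how well
    its key matches the query, then return the weighted sum."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key similarity, scaled
    # Numerically stable softmax over the key axis.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                   # blend of values per query

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))          # 3 query positions, dim 8
K = rng.standard_normal((5, 8))          # 5 key positions, dim 8
V = rng.standard_normal((5, 8))          # one value per key
out = attention(Q, K, V)                 # one output row per query
```

Each output row is a convex combination of the value rows, so the layer can relate any two sequence positions in a single step rather than propagating information through a recurrence.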