Introductions & an Intro to Neural Networks

Welcome back to SIGAI! 😃 Tonight we'll go over some changes that have happened over the summer and how we'll handle things moving forward, then dive into our classic first lecture/workshop series, An Intro to Neural Nets. This time, though, we'll go into significantly more depth, historically and mathematically, than we have in the past. See you there!

Getting Started With Neural Networks

You've heard about them: beating humans at all types of games, driving cars, and recommending your next Netflix series to watch. But what ARE neural networks? In this lecture, you'll learn step by step how neural networks function and how they learn. Then you'll deploy one yourself!

An Intro to Neural Networks (ANNs)

Here we'll dive head-first into the nitty-gritty of Neural Networks: how they work, what Gradient Descent achieves for them, and how Neural Networks act on the feedback that Gradient Descent derives.
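
To make that feedback loop concrete, here's a minimal sketch (a toy example of our own, not the lecture code) of gradient descent fitting a single weight with NumPy: the gradient is the feedback, and the weight update is how the network acts on it.

```python
import numpy as np

# Toy data: the "right" answer is w = 2.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x

w = 0.0      # start with a bad guess
lr = 0.05    # learning rate: how big a step to take

for step in range(100):
    y_hat = w * x                        # forward pass: the model's prediction
    loss = np.mean((y_hat - y) ** 2)     # mean squared error
    grad = np.mean(2 * (y_hat - y) * x)  # dLoss/dw: the feedback gradient descent derives
    w -= lr * grad                       # act on that feedback: step downhill

print(w)  # ~2.0 after training
```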

An Intro to Neural Nets

UPDATE: We've partnered with TechKnights to throw a lecture+workshop combo during KnightHacks! To finish Unit 0 for the Fall series, we're following up last week's lecture with a workshop. Here, we'll build a neural network to classify hand-written digits from the popular MNIST dataset, with some help from Google's TensorFlow library. Everything will be provided in a self-contained environment for you, but you will need to come prepared with the requirements below before the workshop begins.
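
As a rough preview, a small TensorFlow/Keras classifier for MNIST might look like the sketch below. This is only one possible model, not the code from the self-contained workshop environment.

```python
import tensorflow as tf

# Load MNIST: 28x28 grayscale images of handwritten digits, labels 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small fully connected network: flatten the image, one hidden layer, 10-way softmax.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```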

Deconstructing Buzzwords

Tonight we'll encourage you not to drink the Kool-Aid and work to demystify common terms surrounding Artificial Intelligence. Afterwards, we'll have a Q/A (AMA) session where members can ask anything they want about the club and its leadership. During the Q/A you'll have the opportunity to vote up the questions you want answered most.

Introduction to Neural Networks

You've heard about them: beating humans at all types of games, driving cars, and recommending your next Netflix series to watch. But what ARE neural networks? In this lecture, you'll learn step by step how neural networks function and learn. Then you'll deploy one yourself!

Deep Learning

Abstract: Deep learning allows for computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
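
To make the abstract's description concrete, here is a small toy sketch (ours, not the paper's) of a two-layer network in NumPy: each layer computes its representation from the previous layer's output, and backpropagation carries the parameter updates back through the layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network: input -> hidden representation -> output probability.
X = rng.normal(size=(64, 3))                  # 64 examples, 3 raw features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary target

W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)

for _ in range(500):
    # Forward: each layer computes its representation from the previous one.
    h = np.maximum(0, X @ W1 + b1)        # hidden layer (ReLU)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # output layer (sigmoid probability)

    # Backward: backpropagation tells each layer how to change its parameters.
    d_out = (p - y) / len(X)              # gradient at the output (cross-entropy loss)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (h > 0)        # gradient flowing back through the ReLU
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad               # gradient descent step

print("train accuracy:", ((p > 0.5) == y).mean())  # typically well above chance on this toy problem
```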

How Computers Can See and Other Ways Machines Can Think

Ever wonder how Facebook can tell you which friends to tag in your photos or how Google automatically makes collages and animations for you? This lecture is all about that: We'll teach you the basics of computer vision using convolutional neural networks so you can make your own algorithm to automatically analyze your visual data!
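
As a taste of what the lecture builds toward, here's a minimal convolutional network sketch in Keras. It's illustrative only; the model and dataset used in the meeting may differ.

```python
import tensorflow as tf

# A minimal convolutional network for small color images (e.g. 32x32, 3 channels).
# Convolution layers learn local filters (edges, textures), pooling shrinks the image,
# and the dense layers turn the extracted features into a class prediction.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```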

Machine Learning Applications

You know what they are, but "how do?" In this meeting, we let you loose on a dataset to help you apply your newly developed or honed data science skills. Along the way, we go over the importance of visualizations and why being able to pick apart a dataset matters.
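
For a flavor of what "picking apart a dataset" looks like in practice, here's a short sketch using a stand-in dataset (the Iris data from scikit-learn; the meeting's dataset will be different).

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

# Load a small example dataset into a pandas DataFrame.
iris = load_iris(as_frame=True)
df = iris.frame

# First step in picking apart a dataset: actually look at it.
print(df.head())
print(df.describe())    # ranges, means, and spreads for each column
print(df.isna().sum())  # any missing values to worry about?

# A quick visualization often reveals structure that summary statistics hide.
df.plot.scatter(x="sepal length (cm)", y="petal length (cm)",
                c="target", colormap="viridis")
plt.show()
```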

Speech Recognition With Deep Recurrent Neural Networks

Our first non-review paper of the semester will be on using Deep RNNs to perform speech recognition tasks. This approach seeks to combine the advantages of deep neural networks with the "flexible use of long-range context that empowers RNNs". The abstract is rather lengthy, so I'll refrain from copying it here. Our weekly meeting on this paper will go over questions from the paper, strategies for reading more complex research papers, and how to identify strengths and weaknesses of journal articles.
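
For intuition before reading: a "deep" RNN simply stacks recurrent layers, so each layer's sequence of hidden states becomes the input sequence of the next layer. The sketch below shows that stacking in Keras on made-up audio-frame inputs; it is not the paper's actual model, which is considerably more involved.

```python
import tensorflow as tf

# Toy stacked-LSTM model: a variable-length sequence of 13-dimensional feature frames
# (e.g. audio features) mapped to one of 10 classes. Each LSTM layer's output sequence
# feeds the next layer, which is what makes the RNN "deep".
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(None, 13)),  # layer 1: outputs a sequence
    tf.keras.layers.LSTM(64),                                                 # layer 2: outputs a summary state
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```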

Handwritten Digit Recognition With a Back-Propagation Network

Abstract: We present an application of back-propagation networks to hand-written digit recognition. Minimal preprocessing of the data was required, but the architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has a 1% error rate and about a 9% reject rate on zipcode digits provided by the US Postal Service.

Deconvoluting Convolutional Neural Networks

We're filling this out!

How We Give Our Computers Eyes and Ears

Ever wonder how Facebook tells you which friends to tag in your photos, or how Siri can even understand your request? In this meeting we'll dive into convolutional neural networks and give you all the tools to build smart systems such as these. Join us in learning how we can grant our computers the gifts of hearing and sight!

Attention Is All You Need

This paper, published from work performed at Google Brain and Google Research, proposes a new network architecture for tackling machine translation problems (among other ML transduction problems). This new approach simplifies the classic approach to translation while also achieving better performance. Accompanying the paper is a Jupyter notebook created at Harvard to add annotations to the original article while also supplying code mentioned in the work. This paper is most similar to the kinds of articles you can expect to be reading when doing original research.
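
The heart of the proposed architecture is scaled dot-product attention. Here is that single operation in NumPy, a sketch of the formula from the paper rather than the full Transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each query "attends to" each key
    weights = softmax(scores, axis=-1)
    return weights @ V               # weighted mix of the values

# 4 tokens, dimension 8: every token attends to every other token in one step,
# with no recurrence -- which is what lets the Transformer parallelize over a sequence.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```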

Convolving a Neural Network

We're filling this out!

Writer's Block? RNNs Can Help!

This lecture is all about Recurrent Neural Networks. These are networks with memory, which means they can learn from sequential data such as speech, text, videos, and more. Different types of RNNs and strategies for building them will also be covered. The project will be building an LSTM-RNN to generate new, original scripts for the TV series "The Simpsons". Come and find out if our networks can become better writers for the show!
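
For a sense of what the project involves, here is a miniature character-level version in Keras. The corpus, hyperparameters, and model size here are toy placeholders; the actual project uses Simpsons scripts and a larger model.

```python
import numpy as np
import tensorflow as tf

# Toy corpus; the project uses Simpsons scripts instead.
text = "d'oh! " * 200
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 10
X = np.array([[char_to_idx[c] for c in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([char_to_idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

# Embed each character, let an LSTM carry context, predict the next character.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, verbose=0)

# Generate text by repeatedly sampling the next character and feeding it back in.
seed = [char_to_idx[c] for c in text[:seq_len]]
for _ in range(20):
    probs = model.predict(np.array([seed[-seq_len:]]), verbose=0)[0].astype("float64")
    probs /= probs.sum()
    seed.append(int(np.random.choice(len(chars), p=probs)))
print("".join(chars[i] for i in seed))
```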

A Critical Review of Recurrent Neural Networks for Sequence Learning

Abstract: Countless learning tasks require dealing with sequential data. Image captioning, speech synthesis, and music generation all require that a model produce outputs that are sequences. In other domains, such as time series prediction, video analysis, and musical information retrieval, a model must learn from inputs that are sequences. Interactive tasks, such as translating natural language, engaging in dialogue, and controlling a robot, often demand both capabilities. Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes. Unlike standard feedforward neural networks, recurrent networks retain a state that can represent information from an arbitrarily long context window. Although recurrent neural networks have traditionally been difficult to train, and often contain millions of parameters, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful large-scale learning with them. In recent years, systems based on long short-term memory (LSTM) and bidirectional (BRNN) architectures have demonstrated ground-breaking performance on tasks as varied as image captioning, language translation, and handwriting recognition. In this survey, we review and synthesize the research that over the past three decades first yielded and then made practical these powerful learning models. When appropriate, we reconcile conflicting notation and nomenclature. Our goal is to provide a self-contained explication of the state of the art together with a historical perspective and references to primary research.
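
The "state retained from an arbitrarily long context window" that the abstract mentions is simply the hidden vector in the standard RNN recurrence, which in one common form reads (our notation, not the survey's):

```latex
h_t = \tanh\left(W_{xh}\, x_t + W_{hh}\, h_{t-1} + b_h\right), \qquad
\hat{y}_t = \mathrm{softmax}\left(W_{hy}\, h_t + b_y\right)
```

Because h_t depends on h_{t-1}, information from any earlier input can, in principle, persist in the state.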

A Look Behind DeepFake ~ GANs

GANs are relatively new in the machine learning world, but they have proven to be a very powerful model. Recently, they made headlines with DeepFake, a network able to mimic someone else in real-time video and audio. There's also CycleGAN, which takes one domain (horses) and makes it look like another (zebras). Come and learn the secret behind these types of networks; you'll be surprised how intuitive they are! The lecture will cover the basics of GANs and their different types, with the workshop covering how we can generate human faces, cats, dogs, and other cute creatures!
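
To show how little machinery the core idea needs, here's a toy generator/discriminator pair in Keras. The shapes are illustrative placeholders and nothing like DeepFake's actual networks: the generator turns random noise into a fake sample, and the discriminator tries to tell real samples from fakes.

```python
import tensorflow as tf

latent_dim = 32

# Generator: random noise in, a fake (flattened 28x28) "image" out.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
])

# Discriminator: an image in, the probability that it is real out.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(28 * 28,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

noise = tf.random.normal((16, latent_dim))
fake_images = generator(noise)
print(discriminator(fake_images).shape)  # (16, 1): the discriminator's guesses
```

During training, each network is optimized against the other: the discriminator to spot the fakes, the generator to fool it.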

How to Grow a Mind

In coming to understand the world (in learning concepts, acquiring language, and grasping causal relations), our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?

Deep Visual-Semantic Alignments for Generating Image Descriptions

Abstract: We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state-of-the-art results in retrieval experiments on the Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.
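
A toy sketch of the alignment idea (our own illustration with random placeholder embeddings and dimensions, not the paper's model): embed image regions and sentence words in one shared space, then score a sentence against an image by letting each word match its best region.

```python
import numpy as np

rng = np.random.default_rng(0)

embed_dim = 64
regions = rng.normal(size=(5, embed_dim))  # embeddings of 5 detected image regions
words = rng.normal(size=(8, embed_dim))    # embeddings of 8 words in a candidate sentence

scores = words @ regions.T                       # word-region compatibility (dot products)
image_sentence_score = scores.max(axis=1).sum()  # each word aligns to its best-matching region
print(image_sentence_score)
```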

Training Machines to Learn From Experience

We all remember when DeepMind's AlphaGo beat Lee Sedol, but what actually made the program powerful enough to outperform an international champion? In this lecture, we'll dive into the mechanics of reinforcement learning and its applications.
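
As a preview of those mechanics, here is tabular Q-learning on a toy five-state corridor (a standard textbook exercise, not AlphaGo's algorithm): the agent improves its value estimates purely from the rewards its own actions produce.

```python
import numpy as np

# Q-learning on a 5-state corridor: start at state 0, move left/right, reward at state 4.
n_states, n_actions = 5, 2  # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.3
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for _ in range(100):  # cap episode length
        # epsilon-greedy: mostly exploit what we know so far, sometimes explore
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

print(Q[:-1].argmax(axis=1))  # learned policy for non-terminal states: all 1s ("move right")
```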

Generative Adversarial Networks

Abstract: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
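
Written out, the two-player game the abstract describes is the value function from the paper:

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] +
\mathbb{E}_{z \sim p_z(z)}\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

The discriminator D tries to push this value up; the generator G tries to pull it down by making D(G(z)) look real.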

Learning by Doing, This Time With Neural Networks

It's easy enough to navigate a 16x16 maze with tables and some dynamic programming, but how exactly do we extend that to play video games with millions of pixels as input, or board games like Go with more states than particles in the observable universe? The answer, as it often is, is deep reinforcement learning.
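
Concretely, deep RL swaps the Q-table for a neural network that maps an observation to one estimated value per action. Below is a minimal sketch of such a Q-network in Keras; the shapes are illustrative only, and a full agent (e.g. DQN) also needs experience replay, a target network, and an exploration strategy.

```python
import numpy as np
import tensorflow as tf

# A Q-network: observation in, one Q-value per action out.
n_actions = 4
q_network = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(8,)),  # 8-dim observation (illustrative)
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_actions),  # one estimated value per action
])

obs = np.random.randn(1, 8).astype("float32")
q_values = q_network(obs)                     # shape (1, n_actions)
action = int(tf.argmax(q_values[0]).numpy())  # act greedily: pick the highest-valued action
print(q_values.numpy(), action)
```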