Welcome back to SIGAI! Tonight we'll go over some changes that have happened over the summer, how we'll handle things moving forward, and then dive into our classic first lecture/workshop series, An Intro to Neural Nets. This time, though, we'll go into significantly more depth, historically and mathematically, than we have in the past. See you there!
You've heard about them: Beating humans at all types of games, driving cars, and recommending your next Netflix series to watch, but what ARE neural networks? In this lecture, you'll actually learn step by step how neural networks function and how they learn. Then, you'll deploy one yourself!
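If you want a taste before the meeting, here's roughly what "function" means, written out as a minimal NumPy sketch; the layer sizes and weights below are made up purely for illustration, not the lecture's exact example:

```python
import numpy as np

# A network "functions" by alternating linear maps and nonlinearities.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input (2) -> hidden (4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden (4) -> output (1)

x = np.array([0.5, -1.0])             # one two-feature input
hidden = sigmoid(x @ W1 + b1)         # first layer
output = sigmoid(hidden @ W2 + b2)    # second layer: the prediction
print(output)
```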
Here, we'll dive head-first into the nitty-gritty of neural networks: how they work, what gradient descent achieves for them, and how networks act on the feedback that gradient descent derives.
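As a preview of that feedback loop, here's a tiny self-contained sketch: a two-layer network learning XOR by plain gradient descent, with the backward pass written out by hand. The architecture, learning rate, and data are illustrative choices, not necessarily what we'll use in the meeting:

```python
import numpy as np

# Toy data: XOR, a classic problem a single neuron can't solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

lr = 1.0   # learning rate: how far to step along the gradient
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: how much each weight contributed to the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight downhill on the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # should head toward [0, 1, 1, 0]
```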
UPDATE: We've partnered with TechKnights to throw a lecture+workshop combo during KnightHacks! To finish Unit 0 for the Fall series, we're following up last week's lecture with a workshop. Here, we'll build a neural network to classify hand-written digits from the popular MNIST dataset, with some help from Google's TensorFlow library. Everything will be provided in a self-contained environment, but you will need to come prepared with the requirements below before the workshop begins.
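As a rough preview of the kind of model we'll build (the exact architecture on the day may differ), here's the shape of an MNIST classifier in TensorFlow's Keras API:

```python
import tensorflow as tf

# MNIST: 60,000 training images of hand-written digits, 28x28 pixels.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

# A small fully-connected network: flatten each image, pass it through
# one hidden layer, and end with a 10-way softmax (one score per digit).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```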
You've heard about them: Beating humans at all types of games, driving cars, and recommending your next Netflix series to watch, but what ARE neural networks? In this lecture, you'll actually learn step by step how neural networks function and learn. Then, you'll deploy one yourself!
Ever wonder how Facebook can tell you which friends to tag in your photos or how Google automatically makes collages and animations for you? This lecture is all about that: We'll teach you the basics of computer vision using convolutional neural networks so you can make your own algorithm to automatically analyze your visual data!
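For a sense of what these networks look like in code, here's a minimal convolutional model sketched with tf.keras; the input shape assumes 28x28 grayscale images with 10 classes, and real photo-tagging systems are far deeper, but the building blocks are the same:

```python
import tensorflow as tf

# Convolutions learn small local filters (edges, textures); pooling
# shrinks the image; dense layers turn the features into class scores.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()   # inspect how each layer reshapes the image
```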
This week in AI@UCF we're discussing some of the strengths and weaknesses of our brain versus neural networks. We'll cover how machine learning currently compares to our minds and why it's easy to get excited about both!
You know what they are, but "how do?" In this meeting, we let you loose on a dataset so you can apply your newly developed (or freshly honed) data science skills. Along the way, we go over the importance of visualizations and why being able to pick apart a dataset matters.
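If you'd like a head start, a first pass at a new dataset usually looks something like the sketch below; the file name and columns are placeholders for whatever dataset we hand you:

```python
import matplotlib.pyplot as plt
import pandas as pd

# "data.csv" is a placeholder; the moves below apply to almost any table.
df = pd.read_csv("data.csv")

print(df.describe())        # summary statistics for every numeric column
df.hist(figsize=(10, 8))    # one histogram per numeric column
plt.tight_layout()
plt.show()

# Pairwise scatter plots expose relationships and outliers at a glance.
pd.plotting.scatter_matrix(df.select_dtypes("number"), figsize=(10, 10))
plt.show()
```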
This lecture is all about Recurrent Neural Networks. These are networks with added memory, which means they can learn from sequential data such as speech, text, videos, and more. Different types of RNNs and strategies for building them will also be covered. The project will be building an LSTM-RNN to generate new original scripts for the TV series "The Simpsons". Come and find out if our networks can become better writers for the show!
Ever wonder how Facebook tells you which friends to tag in your photos, or how Siri can even understand your request? In this meeting we'll dive into convolutional neural networks and give you all the tools to build smart systems such as these. Join us in learning how we can grant our computers the gifts of hearing and sight!
Some of the hardest aspects of Machine Learning are the details. Almost every algorithm we use is sensitive to "hyperparameters," which affect initialization, optimization speed, and even whether the model can become accurate at all. We'll cover the general heuristics you can use to figure out which hyperparameters to use, how to find the optimal ones, what you can do to make models more resilient, and the like. This workshop will be pretty "down-in-the-weeds," but it will give you a better intuition about Machine Learning and its shortcomings.
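One baseline we'll contrast the heuristics against is brute-force search with cross-validation. Here's a sketch using scikit-learn; the model and grid values are illustrative, not recommendations:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Exhaustively try every combination in the grid, scoring each with
# 3-fold cross-validation, and keep whichever scores best held out.
grid = GridSearchCV(
    MLPClassifier(max_iter=500),
    param_grid={
        "hidden_layer_sizes": [(32,), (64,), (128,)],
        "learning_rate_init": [1e-2, 1e-3, 1e-4],
        "alpha": [1e-4, 1e-2],   # L2 penalty, one lever for resilience
    },
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```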
This lecture is all about Recurrent Neural Networks. These are networks with memory, which means they can learn from sequential data such as speech, text, videos, and more. Different types of RNNs and strategies for building them will also be covered. The project will be building an LSTM-RNN to generate new original scripts for the TV series "The Simpsons". Come and find out if our networks can become better writers for the show!
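For the curious, the core of a character-level text generator is surprisingly small. Here's a sketch with tf.keras; "scripts.txt" stands in for the transcript file, and the hyperparameters are just reasonable defaults:

```python
import numpy as np
import tensorflow as tf

# Predict the next character from a 40-character window of script text.
text = open("scripts.txt").read().lower()
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 40
X, y = [], []
for i in range(0, len(text) - seq_len, 3):   # stride 3 to thin the data
    X.append([char_to_idx[c] for c in text[i:i + seq_len]])
    y.append(char_to_idx[text[i + seq_len]])
X, y = np.array(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 32),   # chars -> dense vectors
    tf.keras.layers.LSTM(128),                   # the "memory" layer
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, batch_size=128, epochs=10)
```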
Welcome to our second to last lecture for Fall 2017! We will be giving an introduction to neuroevolution, one of the most active subfields in evolutionary computation! We will be covering history, prominent algorithms and frameworks, current state-of-the-art research going on in the field here at UCF, and the recent attention this field has gotten from big names such as Google Brain, DeepMind, FAIR, Uber, MIT, and more!
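To make the idea concrete before the lecture: the simplest neuroevolution loop mutates a population of candidate weight vectors and keeps the fittest, no gradients required. A toy sketch, where the fitness function stands in for a real task like scoring a game-playing agent:

```python
import numpy as np

# Fitness here is "how close are we to a hidden target vector";
# in practice it would be an agent's score on a task.
rng = np.random.default_rng(0)
target = rng.normal(size=20)

def fitness(w):
    return -np.sum((w - target) ** 2)   # higher is better

pop = rng.normal(size=(50, 20))          # 50 candidate weight vectors
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]          # keep the top 10
    # Next generation: copies of the elite, plus Gaussian mutations.
    children = elite[rng.integers(0, 10, size=40)]
    children = children + 0.1 * rng.normal(size=children.shape)
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)
print(fitness(best))   # climbs toward 0 as evolution matches the target
```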
GANs are relatively new in the machine learning world, but they have proven to be a very powerful model. Recently, they made headlines with DeepFakes, networks able to mimic someone else in real-time video and audio. There's also CycleGAN, which translates images from one domain (horses) into another that looks similar (zebras). Come and learn the secret behind these types of networks; you'll be surprised how intuitive it is! The lecture will cover the basics of GANs and their different types, with the workshop covering how we can generate human faces, cats, dogs, and other cute creatures!
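The adversarial loop itself fits on one screen. Below is a sketch that trains a generator to fake samples from a simple 1-D Gaussian instead of faces; the sizes, learning rates, and step counts are illustrative, and image GANs swap in convolutional networks:

```python
import tensorflow as tf

# Generator: turns 8-D noise into a single number.
gen = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
# Discriminator: outputs a logit saying "real" or "fake".
disc = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
g_opt, d_opt = tf.keras.optimizers.Adam(1e-3), tf.keras.optimizers.Adam(1e-3)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
batch = 64

for step in range(2000):
    noise = tf.random.normal((batch, 8))
    real = tf.random.normal((batch, 1), mean=4.0)  # "real" data: N(4, 1)

    # Discriminator step: push real toward 1, fakes toward 0.
    with tf.GradientTape() as tape:
        d_loss = (bce(tf.ones((batch, 1)), disc(real))
                  + bce(tf.zeros((batch, 1)), disc(gen(noise))))
    grads = tape.gradient(d_loss, disc.trainable_variables)
    d_opt.apply_gradients(zip(grads, disc.trainable_variables))

    # Generator step: make the discriminator call our fakes "real".
    with tf.GradientTape() as tape:
        g_loss = bce(tf.ones((batch, 1)), disc(gen(noise)))
    grads = tape.gradient(g_loss, gen.trainable_variables)
    g_opt.apply_gradients(zip(grads, gen.trainable_variables))

print(gen(tf.random.normal((5, 8))).numpy())  # samples should cluster near 4
```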
We all remember when DeepMind's AlphaGo beat Lee Sedol, but what actually made the program powerful enough to outperform an international champion? In this lecture, we'll dive into the mechanics of reinforcement learning and its applications.
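If you want the one-line preview, classical RL's workhorse is the Q-learning update: Q(s, a) += alpha * (r + gamma * max Q(s', .) - Q(s, a)). Here's a tabular sketch on a toy chain world invented for illustration:

```python
import numpy as np

# Toy world: 16 states in a row; move left or right; only the
# rightmost state pays a reward. The agent learns Q[state, action].
N = 16
Q = np.zeros((N, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1   # step size, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for t in range(100):
        # Epsilon-greedy: mostly exploit, occasionally explore.
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # The Q-learning update: move Q[s, a] toward r + gamma * max Q(s', .)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if r > 0:
            break

print(np.argmax(Q, axis=1))   # learned policy: should mostly say "right" (1)
```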
It's easy enough to navigate a 16x16 maze with tables and some dynamic programming, but how exactly do we extend that to play video games with millions of pixels as input, or board games like Go with more states than particles in the observable universe? The answer, as it often is, is deep reinforcement learning.
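And the "deep" step is conceptually small: replace the table with a network that maps a state to one Q-value per action. A deliberately tiny (and slow) sketch on the same kind of toy chain; real deep RL adds replay buffers, target networks, and convolutional nets over pixels:

```python
import numpy as np
import tensorflow as tf

# Same toy chain world as the tabular sketch, with the Q-table
# replaced by a network: one-hot state in, one Q-value per action out.
N = 16
net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),            # Q(s, left), Q(s, right)
])
net.compile(optimizer="adam", loss="mse")

def one_hot(s):
    v = np.zeros((1, N), dtype="float32")
    v[0, s] = 1.0
    return v

gamma, eps = 0.95, 0.2
rng = np.random.default_rng(0)
for episode in range(100):
    s = 0
    for t in range(50):
        q = net.predict(one_hot(s), verbose=0)[0]
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q))
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0
        done = s2 == N - 1
        # Regress the taken action's output toward the bootstrapped
        # target r + gamma * max Q(s', .); leave the other output alone.
        target = q.copy()
        target[a] = r if done else r + gamma * np.max(
            net.predict(one_hot(s2), verbose=0)[0])
        net.fit(one_hot(s), target[None, :], verbose=0)
        s = s2
        if done:
            break
```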