
A Few Useful Things to Know About Machine Learning

This week, for our first paper, we'll be reading A Few Useful Things to Know About Machine Learning. It's a review paper that will help everyone get familiar with the field of machine learning, and it's full of key vocabulary and conventional outlooks on AI as a whole.

Welcome Back to SIGAI, Featuring Gradient Descent

Welcome back to SIGAI! We'll be covering some administrative needs – like how we're doing lectures/workshops and what we expect of coordinators, since we'll have elections in March. Then we'll go over some math to check everyone's background so you're uber prepared for next week! Once we've covered that, we'll go over Gradient Descent and get a rough idea of how it works – this is integral to almost all our content this semester.
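
If you want a sneak peek before the meeting, here's a minimal sketch of gradient descent in plain NumPy; the function and step size below are just illustrative picks, not the exact example from the lecture.

```python
import numpy as np

# Gradient descent on f(x, y) = (x - 3)^2 + (y + 1)^2,
# whose minimum is at (3, -1). The gradient is (2(x - 3), 2(y + 1)).
def grad(p):
    x, y = p
    return np.array([2 * (x - 3), 2 * (y + 1)])

p = np.array([0.0, 0.0])   # starting point
lr = 0.1                   # learning rate (step size)

for step in range(100):
    p = p - lr * grad(p)   # take a small step against the gradient

print(p)  # approaches [3, -1]
```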

Endless Content in Video Games Using Machine Learning

For this week's paper, we're going over a super fun application of ML in video games! We'll see how AI/ML techniques can help enhance games like The Legend of Zelda and provide limitless, engaging content.

Getting Started With Neural Networks

You've heard about them: beating humans at all kinds of games, driving cars, and recommending your next Netflix series. But what ARE neural networks? In this lecture, you'll learn step by step how neural networks function and how they learn. Then, you'll deploy one yourself!
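
If you'd like a rough preview of what "function and learn" means in code, here's a minimal NumPy sketch of a tiny network doing a forward pass and a gradient update; the toy data and layer sizes are made up for illustration, not the exact network from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 examples, 2 features each, learning XOR-like targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 4 units, sigmoid activations.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: inputs -> hidden layer -> prediction.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error w.r.t. each layer.
    d_pred = (pred - y) * pred * (1 - pred)
    d_h = (d_pred @ W2.T) * h * (1 - h)

    # Gradient descent step on every parameter.
    W2 -= lr * h.T @ d_pred
    b2 -= lr * d_pred.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(pred.round(2))  # ideally close to [[0], [1], [1], [0]]
```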

An Intro to Neural Nets

UPDATE: We've partnered with TechKnights to throw a lecture+workshop combo during KnightHacks! To finish Unit 0 for the Fall series, we're following up last week's lecture with a workshop. Here, we'll build a neural network to classify hand-written digits from the popular MNIST dataset, with some help from Google's TensorFlow library. Everything will be provided in a self-contained environment for you, but you will need to come prepared with the requirements below before the workshop begins.
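
As a taste of what we'll be building (the exact workshop notebook may differ), a TensorFlow/Keras classifier for MNIST typically looks something like this minimal sketch:

```python
import tensorflow as tf

# Load the MNIST hand-written digit dataset (bundled with Keras).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small fully connected network: flatten the 28x28 image,
# one hidden layer, then a 10-way softmax over the digit classes.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```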

Deconstructing Buzzwords

Tonight we'll be encouraging you not to drink the Kool-Aid and working to demystify common terms surrounding Artificial Intelligence. Afterwards, we'll have a Q&A (AMA) session where members can ask anything they want about the club and its leadership. During the Q&A, you'll have the opportunity to vote up the questions you want answered most.

Neural Networks vs Brains

This week in AI@UCF, we're discussing some of the strengths and weaknesses of our brains versus neural networks. We'll cover how machine learning currently compares to our minds and why it's easy to get excited about both!

Deconvoluting Convolutional Neural Networks

We're filling this out!

Convolving a Neural Network

We're filling this out!

Machine Learning Applications

It's time to put what you have learned into action. Here, we have prepared some datasets for you to build models against. This is different from past meetings, as it will be a full workshop: we provide the datasets and a notebook that gets you started, but it is up to you to build a model that solves the problem. So, what will you be doing? We have two datasets. The first uses planetary data to predict whether a candidate is an exoplanet, so your model can help us find more Earth-like planets that could contain life! The second will be used to build a model that mimics a Pokédex! Well, not fully, but the goal is to predict the name of a Pokémon and also its type (such as Electric, Fire, etc.). This will be extremely fun and give you a chance to apply what you have learned, with us here to help!
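
As a rough starting point (the file name and columns below are hypothetical; the real ones come from the notebooks we hand out), a first model on a tabular dataset like the exoplanet one might look like this scikit-learn sketch:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical file and column names -- the real ones come from the
# notebook and datasets provided at the workshop.
df = pd.read_csv("exoplanets.csv")
X = df.drop(columns=["is_exoplanet"])
y = df["is_exoplanet"]

# Hold out some data so we can measure how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Any classifier works here; a random forest is a reasonable first try.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```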

How to Grow a Mind

In coming to understand the world – in learning concepts, acquiring language, and grasping causal relations – our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
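
For a very small taste of what "probabilistic inference" means here, the sketch below applies Bayes' rule to a toy, made-up hypothesis space in Python; the paper's models are far richer, with hierarchical and structured representations, but the update rule is the same.

```python
# Toy Bayesian concept learning: which hypothesis generated the data?
# Hypotheses and numbers are made up for illustration.
hypotheses = {
    "even numbers":     {2, 4, 6, 8, 10, 12, 14, 16},
    "powers of two":    {2, 4, 8, 16},
    "numbers under 10": {1, 2, 3, 4, 5, 6, 7, 8, 9},
}
prior = {h: 1 / len(hypotheses) for h in hypotheses}

data = [2, 4, 16]  # sparse observations

# Likelihood: examples drawn uniformly from the hypothesis's set
# (the "size principle": smaller consistent hypotheses win).
def likelihood(h, data):
    s = hypotheses[h]
    if any(x not in s for x in data):
        return 0.0
    return (1 / len(s)) ** len(data)

unnorm = {h: prior[h] * likelihood(h, data) for h in hypotheses}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.3f}")  # "powers of two" dominates
```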

Training Machines to Learn From Experience

We all remember when DeepMind’s AlphaGo beat Lee Sedol, but what actually made the program powerful enough to outperform an international champion? In this lecture, we’ll dive into the mechanics of reinforcement learning and its applications.
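
If you want a head start on those mechanics, here's a minimal sketch of tabular Q-learning on a tiny made-up environment; AlphaGo's machinery is vastly more elaborate, but the core loop of learning values from experienced rewards is the same in spirit.

```python
import random

# A tiny made-up environment: 5 states in a row; action 0 moves left,
# action 1 moves right; reaching the rightmost state gives reward 1.
N_STATES, ACTIONS = 5, [0, 1]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]
alpha, gamma, epsilon = 0.1, 0.9, 0.1       # learning rate, discount, exploration

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # action 1 (move right) should look best in every state
```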

Learning by Doing, This Time With Neural Networks

It's easy enough to navigate a 16x16 maze with tables and some dynamic programming, but how exactly do we extend that to play video games with millions of pixels as input, or board games like Go with more states than particles in the observable universe? The answer, as it often is, is deep reinforcement learning.
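
The core trick is to swap the Q-table for a neural network that maps raw observations to action values. As a rough, heavily simplified sketch (shapes and hyperparameters are placeholders, and a real DQN also needs experience replay and a target network), the approximator and its training step might look like this in TensorFlow:

```python
import tensorflow as tf

# Instead of a Q-table, approximate Q(state, action) with a neural network.
# Input shape here is a placeholder (a stack of 4 Atari-style 84x84 frames).
n_actions = 4
q_net = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu",
                           input_shape=(84, 84, 4)),
    tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_actions),   # one Q-value per action
])
optimizer = tf.keras.optimizers.Adam(1e-4)

def train_step(states, actions, rewards, next_states, dones, gamma=0.99):
    # TD target: reward plus discounted value of the best next action.
    next_q = tf.reduce_max(q_net(next_states), axis=1)
    targets = rewards + gamma * (1.0 - dones) * next_q
    with tf.GradientTape() as tape:
        q = q_net(states)
        chosen = tf.reduce_sum(q * tf.one_hot(actions, n_actions), axis=1)
        loss = tf.reduce_mean(tf.square(targets - chosen))
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return loss
```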

Building AI, the Human Way

We've learned about linear and statistical models as well as different training paradigms, but we've yet to think about how it all began. In Cognitive Computational Neuroscience, we look at AI and ML from the perspective of using them as tools to learn about human cognition, in the hopes of building better AI systems, but more importantly, in the hopes of better understanding ourselves.
