Artificial Intelligence: Reinforcement Learning in Python

Complete guide to Reinforcement Learning, with Stock Trading and Online Advertising Applications

Last updated 2022-01-10 | 4.7

What you'll learn

Apply gradient-based supervised machine learning methods to reinforcement learning
Understand reinforcement learning on a technical level
Understand the relationship between reinforcement learning and psychology
Implement 17 different reinforcement learning algorithms

Requirements

* Calculus (derivatives)
* Probability / Markov Models
* Numpy
* Matplotlib
* Beneficial to have experience with at least a few supervised machine learning methods
* Gradient descent
* Good object-oriented programming skills

Description

When people talk about artificial intelligence, they usually don’t mean supervised and unsupervised machine learning.

These tasks are pretty trivial compared to what we think of AIs doing - playing chess and Go, driving cars, and beating video games at a superhuman level.

Reinforcement learning has recently become popular for doing all of that and more.

Much like deep learning, a lot of the theory was discovered in the 70s and 80s, but it wasn't until recently that we've been able to observe firsthand the amazing results that are possible.

In 2016 we saw Google's AlphaGo beat the world champion in Go.

We saw AIs playing video games like Doom and Super Mario.

Self-driving cars have started driving on real roads with other drivers and even carrying passengers (Uber), all without human assistance.

If that sounds amazing, brace yourself for the future because the law of accelerating returns dictates that this progress is only going to continue to increase exponentially.

Learning about supervised and unsupervised machine learning is no small feat. To date I have over TWENTY FIVE (25!) courses just on those topics alone.

And yet reinforcement learning opens up a whole new world. As you'll learn in this course, the reinforcement learning paradigm is very different from both supervised and unsupervised learning.

It's led to new and amazing insights both in behavioral psychology and neuroscience. As you'll learn in this course, there are many analogous processes when it comes to teaching an agent and teaching an animal or even a human. It's the closest thing we have so far to a true artificial general intelligence.

What's covered in this course?

  • The multi-armed bandit problem and the explore-exploit dilemma

  • Ways to calculate means and moving averages and their relationship to stochastic gradient descent

  • Markov Decision Processes (MDPs)

  • Dynamic Programming

  • Monte Carlo

  • Temporal Difference (TD) Learning (Q-Learning and SARSA)

  • Approximation Methods (i.e. how to plug a deep neural network or other differentiable model into your RL algorithm)

  • How to use OpenAI Gym, with zero code changes

  • Project: Apply Q-Learning to build a stock trading bot
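To give a flavor of the first two topics above, here is a minimal epsilon-greedy bandit sketch. This is an illustrative example, not code from the course; the arm probabilities, step counts, and variable names are all hypothetical. Note how the incremental sample-mean update has the same "move toward the target" form as a stochastic gradient descent step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bandit: three slot machines with unknown win probabilities.
true_probs = [0.2, 0.5, 0.75]
n_arms = len(true_probs)

eps = 0.1                      # exploration rate
counts = np.zeros(n_arms)      # number of pulls per arm
estimates = np.zeros(n_arms)   # running sample-mean reward per arm

for t in range(5000):
    # Explore with probability eps, otherwise exploit the current best estimate.
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))
    else:
        arm = int(np.argmax(estimates))
    reward = float(rng.random() < true_probs[arm])  # Bernoulli reward
    counts[arm] += 1
    # Incremental sample mean: new_mean = old_mean + (1/N) * (reward - old_mean).
    # Replacing 1/N with a constant step size gives a moving average, which is
    # exactly the form of a stochastic gradient descent update.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # each estimate should approach its arm's true win probability
```

Running this, the agent quickly concentrates its pulls on the best arm while the estimates converge toward the true probabilities, which is the explore-exploit trade-off in miniature.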

If you’re ready to take on a brand new challenge, and learn about AI techniques that you’ve never seen before in traditional supervised machine learning, unsupervised machine learning, or even deep learning, then this course is for you.

See you in class!


"If you can't implement it, you don't understand it"

  • Or as the great physicist Richard Feynman said: "What I cannot create, I do not understand".

  • My courses are the ONLY courses where you will learn how to implement machine learning algorithms from scratch

  • Other courses will teach you how to plug your data into a library, but do you really need help with 3 lines of code?

  • After doing the same thing with 10 datasets, you realize you didn't learn 10 things. You learned 1 thing, and just repeated the same 3 lines of code 10 times...


Suggested Prerequisites:

  • Calculus

  • Probability

  • Object-oriented programming

  • Python coding: if/else, loops, lists, dicts, sets

  • Numpy coding: matrix and vector operations

  • Linear regression

  • Gradient descent


WHAT ORDER SHOULD I TAKE YOUR COURSES IN?:

  • Check out the lecture "Machine Learning and AI Prerequisite Roadmap" (available in the FAQ of any of my courses, including the free Numpy course)

Who this course is for:

  • Anyone who wants to learn about artificial intelligence, data science, machine learning, and deep learning
  • Both students and professionals

Course content

14 sections • 110 lectures

  • Introduction (03:14)
  • Course Outline and Big Picture (07:55)
  • Where to get the Code (04:36)
  • How to Succeed in this Course (05:51)
  • Warmup (15:36)
  • Section Introduction: The Explore-Exploit Dilemma (10:17)
  • Applications of the Explore-Exploit Dilemma (08:00)
  • Epsilon-Greedy Theory (07:04)
  • Calculating a Sample Mean (pt 1) (05:56)
  • Epsilon-Greedy Beginner's Exercise Prompt (05:05)
  • Designing Your Bandit Program (04:09)
  • Epsilon-Greedy in Code (07:12)
  • Comparing Different Epsilons (06:02)
  • Optimistic Initial Values Theory (05:40)
  • Optimistic Initial Values Beginner's Exercise Prompt (02:26)
  • Optimistic Initial Values Code (04:18)
  • UCB1 Theory (14:32)
  • UCB1 Beginner's Exercise Prompt (02:14)
  • UCB1 Code (03:28)
  • Bayesian Bandits / Thompson Sampling Theory (pt 1) (12:43)
  • Bayesian Bandits / Thompson Sampling Theory (pt 2) (17:35)
  • Thompson Sampling Beginner's Exercise Prompt (02:50)
  • Thompson Sampling Code (05:03)
  • Thompson Sampling With Gaussian Reward Theory (11:24)
  • Thompson Sampling With Gaussian Reward Code (06:18)
  • Why don't we just use a library? (05:40)
  • Nonstationary Bandits (07:11)
  • Bandit Summary, Real Data, and Online Learning (06:29)
  • (Optional) Alternative Bandit Designs (10:05)
  • Suggestion Box (03:03)
  • What is Reinforcement Learning? (08:08)
  • From Bandits to Full Reinforcement Learning (08:42)
  • MDP Section Introduction (06:19)
  • Gridworld (12:35)
  • Choosing Rewards (03:58)
  • The Markov Property (06:12)
  • Markov Decision Processes (MDPs) (14:42)
  • Future Rewards (09:34)
  • Value Functions (05:07)
  • The Bellman Equation (pt 1) (08:46)
  • The Bellman Equation (pt 2) (06:42)
  • The Bellman Equation (pt 3) (06:09)
  • Bellman Examples (22:24)
  • Optimal Policy and Optimal Value Function (pt 1) (09:17)
  • Optimal Policy and Optimal Value Function (pt 2) (04:08)
  • MDP Summary (02:58)
  • Dynamic Programming Section Introduction (08:59)
  • Iterative Policy Evaluation (15:36)
  • Designing Your RL Program (05:00)
  • Gridworld in Code (11:37)
  • Iterative Policy Evaluation in Code (12:17)
  • Windy Gridworld in Code (07:47)
  • Iterative Policy Evaluation for Windy Gridworld in Code (07:14)
  • Policy Improvement (11:23)
  • Policy Iteration (07:57)
  • Policy Iteration in Code (08:27)
  • Policy Iteration in Windy Gridworld (08:50)
  • Value Iteration (07:40)
  • Value Iteration in Code (06:36)
  • Dynamic Programming Summary (04:57)
  • Monte Carlo Intro (09:21)
  • Monte Carlo Policy Evaluation (10:52)
  • Monte Carlo Policy Evaluation in Code (07:52)
  • Monte Carlo Control (09:00)
  • Monte Carlo Control in Code (08:51)
  • Monte Carlo Control without Exploring Starts (04:41)
  • Monte Carlo Control without Exploring Starts in Code (05:40)
  • Monte Carlo Summary (01:53)
  • Temporal Difference Introduction (03:55)
  • TD(0) Prediction (05:24)
  • TD(0) Prediction in Code (04:54)
  • SARSA (04:36)
  • SARSA in Code (06:20)
  • Q Learning (04:55)
  • Q Learning in Code (05:02)
  • TD Learning Section Summary (02:27)
  • Approximation Methods Section Introduction (04:19)
  • Linear Models for Reinforcement Learning (08:32)
  • Feature Engineering (10:16)
  • Approximation Methods for Prediction (09:55)
  • Approximation Methods for Prediction Code (08:26)
  • Approximation Methods for Control (04:41)
  • Approximation Methods for Control Code (08:54)
  • CartPole (05:34)
  • CartPole Code (05:49)
  • Approximation Methods Exercise (04:07)
  • Approximation Methods Section Summary (03:05)
  • This Course vs. RL Book: What's the Difference? (07:10)
  • Beginners, halt! Stop here if you skipped ahead (14:09)
  • Stock Trading Project Section Introduction (05:13)
  • Data and Environment (12:22)
  • How to Model Q for Q-Learning (09:37)
  • Design of the Program (06:45)
  • Code pt 1 (07:59)
  • Code pt 2 (09:40)
  • Code pt 3 (04:28)
  • Code pt 4 (07:17)
  • Stock Trading Project Discussion (03:37)
  • Anaconda Environment Setup (20:20)
  • How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow (17:32)
  • How to Code by Yourself (part 1) (15:54)
  • How to Code by Yourself (part 2) (09:23)
  • Proof that using Jupyter Notebook is the same as not using it (12:29)
  • Python 2 vs Python 3 (04:38)
  • How to Succeed in this Course (Long Version) (10:24)
  • Is this for Beginners or Experts? Academic or Practical? Fast or slow-paced? (22:04)
  • Machine Learning and AI Prerequisite Roadmap (pt 1) (11:18)
  • Machine Learning and AI Prerequisite Roadmap (pt 2) (16:07)