Projects
Projects and research descriptions
2019
- CMU10703 - Maximum Entropy Inverse RL, Adversarial Imitation Learning
- [PAPER-Review] End-to-End Training of Deep Visuomotor Policies, Levine, Finn, et al., 2016
- Robotics
- Optimization
- Underactuated Robotics - Introduction
- [PAPER-Review] Learning From Demonstration
- [PAPER-Review] Algorithms for Inverse Reinforcement Learning, Andrew Y. Ng, Stuart Russell, 2000
- [Robot sensor application] Basic concept
- A First Course in Probability: Chapter 3 - Conditional Probability and Independence
- A First Course in Probability: Chapter 2 - Axioms of Probability
2018
- [CS294-112 Notes] Lecture 13 - Learning Policies by Imitating Other Policies
- [CS294-112 Notes] Lecture 12 - Advanced Model Learning and Images
- [CS294-112 Notes] Lecture 11 - Model-Based Reinforcement Learning
- [CS294-112 Notes] Lecture 10 - Optimal Control and Planning
- [CS294-112 Notes] Lecture 5 - Policy Gradients Introduction
- [CS294-112 Notes] Lecture 4 - Reinforcement Learning Introduction
- [CS294-112 Notes] Lecture 2 - Supervised Learning and Imitation
- [CS294-112 Notes] Lecture 1 - Introduction and Course Overview
- Digital Motion Control System: Lecture 1 - Motion Profile
- Digital Motion Control System: Lecture 1 - Review
- Convex optimization
- Machine learning
- Model Predictive Control
- Modern Control
- Robot Dynamics & Control: Lecture 6 - Dynamics
- Robot Dynamics & Control: Lecture 5 - Velocity Kinematics - The Manipulator Jacobian
- Robot Dynamics & Control: Lecture 4 - Inverse Kinematics
- Robot Dynamics & Control: Lecture 3 - Forward Kinematics: The Denavit-Hartenberg Convention
- Robot Dynamics & Control: Lecture 2 - Rigid Motions and Homogeneous Transforms
- Robot Dynamics & Control: Lecture 1 - Introduction
- Week 1 - Motivation and Basics