Computational Robotics

From Tasks to Motions

Introduction to Robotics

This course introduces students to fundamental principles of classical and modern approaches to robot motion planning, as well as to the representation and use of geometric models acquired from sensor data. Two common themes will run throughout the course:

  • How can the robot automatically plan and execute a sequence of motions that avoids collision with obstacles and accomplishes the assigned task?
  • How can the robot use sensor-based information to determine its own state and model the world?
These themes and the related algorithms will be presented in the context of practical applications arising in diverse areas such as mobile systems, navigation and exploration, robot manipulation, computer animation, video games, computational biology, and medicine.

You can work with a KUKA youBot and iRobot Creates equipped with Kinect sensors, laser-range finders, and wireless cameras.

[Images: KUKA youBot, iRobot Create, Kinect, wireless camera, laser-range finder]

Syllabus & Lectures

Project 1: Bug Algorithms

[support code in C++ and Matlab]

  • Implement the Bug0, Bug1, and Bug2 algorithms (a minimal Bug0 sketch is given after this list)
  • You can work by yourself or with a partner of your own choice
  • Implementation should be done in C/C++ or MATLAB on top of the provided support code
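
The following is a minimal, hedged sketch of the Bug0 decision logic; the Point struct, sensor inputs, and step size are illustrative assumptions and do not correspond to the provided support code.

    #include <cmath>

    // Minimal Bug0 sketch (assumed interface; the provided support code differs).
    // Bug0: move toward the goal; when an obstacle blocks the way, follow its
    // boundary until the direction toward the goal is free again.
    struct Point { double x, y; };

    Point Bug0Step(const Point &robot, const Point &goal,
                   bool obstacleTowardGoal,   // assumed sensor reading toward the goal
                   const Point &boundaryDir)  // assumed unit tangent along the obstacle boundary
    {
        const double stepSize = 0.05;         // small motion per iteration
        Point step = {0.0, 0.0};

        if (!obstacleTowardGoal)
        {
            // Move straight toward the goal.
            double dx = goal.x - robot.x, dy = goal.y - robot.y;
            double d = std::sqrt(dx * dx + dy * dy);
            if (d > 1e-6) { step.x = stepSize * dx / d; step.y = stepSize * dy / d; }
        }
        else
        {
            // Follow the obstacle boundary until the way toward the goal clears.
            step.x = stepSize * boundaryDir.x;
            step.y = stepSize * boundaryDir.y;
        }
        return step;
    }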

Support Code: C++ in Windows

  • Install Microsoft Visual C++ 2010 Express (it's free)
  • Open command prompt and change to the directory where you have placed BugCPP
  • Run cmake -G "Visual Studio 10"
  • Go to the BugCPP folder and double-click the RunBug.sln file. This will open Visual Studio 2010
  • Build the project by going to Project and selecting the Build option
  • If there are no errors, then everything went fine. You are ready to execute the program
  • To execute the program, from the command prompt, run bin\Debug\RunBug.exe obstacles1.txt 2
  • Note that you can use any of the obstacle files instead of obstacles1.txt
  • The second argument indicates the bug algorithm that you would like to run. Possible values are 0, 1, 2

Support Code: C++ in Linux/Mac OS

  • Linux: Install g++ and freeglut
  • MacOS: Install gcc. The easiest way is to download the Apple Developer Tools from http://connect.apple.com (free, but requires registration). You can also install gcc through a package manager such as MacPorts or Fink
  • Open terminal and change to the directory where you have placed BugCPP
  • Run cmake .
  • Run make
  • If there are no errors, then everything went fine. You are ready to execute the program
  • To execute the program, from the command prompt, run bin/RunBug obstacles1.txt 2
  • Note that you can use any of the obstacle files instead of obstacles1.txt
  • The second argument indicates the bug algorithm that you would like to run. Possible values are 0, 1, 2

Support Code: MATLAB

  • Support code can be found in BugMatlab

Project 2: Potential Field Planning for Rigid Bodies and Manipulator Chains

[support code in C++ and Matlab]

  • Implement potential field planners in the case of rigid bodies and manipulator chains
  • Your implementation should also contain strategies for escaping local minima when the robot gets stuck. One possibility is to considerably increase the repulsive potential in order to push the robot away from the obstacle. Another is to move in a random direction away from the obstacle. You could also try adding artificial goal/obstacle points to guide the robot away from the real obstacles and toward the real goal. (A minimal sketch of the random-escape strategy is given after this list.)
  • You can work by yourself or with a partner of your own choice
  • Implementation can be done in C/C++ or MATLAB on top of the provided support code
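
As a concrete illustration of the random-escape strategy mentioned above, here is a hedged sketch that takes a random step when the gradient nearly vanishes away from the goal; the function name, thresholds, and scale factors are assumptions to be tuned, not part of the support code.

    #include <cmath>
    #include <cstdlib>

    // Hedged sketch of a random-escape strategy for local minima.
    // If the overall gradient is nearly zero but the goal has not been reached,
    // move in a random direction instead of following the (vanishing) gradient.
    void PotentialFieldStep(double gradX, double gradY, double distToGoal,
                            double &dx, double &dy)
    {
        const double gradEps  = 1e-4;   // "stuck" threshold (tune experimentally)
        const double goalEps  = 0.1;    // goal tolerance (tune experimentally)
        const double stepSize = 0.05;
        const double pi       = 3.14159265358979;

        double gradMag = std::sqrt(gradX * gradX + gradY * gradY);
        if (gradMag < gradEps && distToGoal > goalEps)
        {
            // Local minimum: take a step in a random direction.
            double angle = 2.0 * pi * (std::rand() / (double) RAND_MAX);
            dx = stepSize * std::cos(angle);
            dy = stepSize * std::sin(angle);
        }
        else
        {
            // Normal potential-field move: opposite to the gradient.
            dx = -stepSize * gradX / (gradMag + 1e-12);
            dy = -stepSize * gradY / (gradMag + 1e-12);
        }
    }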

The environment consists of a robot, a goal region represented as a circle, and several obstacles, also represented as circles. The graphical interface supports the dynamic addition and displacement of obstacles and the goal. A new obstacle can be added at any time by pressing down the left button of the mouse. Each obstacle and the goal can be moved around by clicking inside them and holding the left button down while moving the mouse. To run the planner, the user needs to press the key 'p' on the keyboard. Pressing 'p' again toggles between running and stopping the planner. The user can change the radius of each obstacle and the goal by first pressing the key 'r' and then clicking inside an obstacle/goal and holding the left button down while moving the mouse. Pressing 'r' again toggles between editing the radius and moving the obstacles and the goal.

Implementation Notes: Rigid Body

The simulator provides access to the goal center, the goal radius, the number of obstacles, the current robot configuration (robot x, robot y, robot theta), and the vertices of the polygon representing the robot. The simulator also computes the closest point on the i-th obstacle for any given point (x, y); see the function ClosestPointOnObstacle.

When computing the potential, it is recommended that you use each vertex of the polygon robot as a control point.

  • For each control point (x_j, y_j), you should then
    • compute its Jacobian matrix Jac_j
    • compute the workspace gradient (wsg_i) to each obstacle [In my implementation, wsg_i is zero if the distance from (x_j, y_j) to the closest point on the i-th obstacle is larger than a small constant. I then scale the gradient in order to ensure that the robot gets pushed away when coming close to the obstacles. You may need to play around in order to figure out by how much to scale the gradient in order to get the desired behavior.]
    • use the Jacobian matrix to transform wsg_i into a configuration space gradient csg_i, which is then added to the overall configuration space gradient csg
    • compute the workspace gradient to the goal, scale it appropriately, transform it into a configuration space gradient via the Jacobian, and add it to the overall configuration space gradient
  • Now that the overall configuration space gradient has been computed, scale it so that you make a small move in x and y and a small move in theta. Recall that the robot should always move in the direction opposite to the gradient.

If using C++ support code, you should implement the function ConfigurationMove in the class RigidBodyPlanner. This function should return RigidBodyMove, which represents a small move (dx, dy, dtheta) by the robot. If using Matlab support code, you should implement the function RigidBodyPlanner.
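
Below is a hedged sketch of this gradient computation for a polygonal rigid body. It is not the support code's ConfigurationMove: the data layout, accessor names, and all scale factors are illustrative assumptions that would need to be adapted and tuned.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point2 { double x, y; };
    struct Move   { double dx, dy, dtheta; };

    // Hedged sketch of the configuration-space gradient for a polygonal rigid body
    // at position (rx, ry). vertices holds the robot's polygon vertices in world
    // coordinates (used as control points), closestObstaclePoints[j] is the closest
    // obstacle point to vertex j, and goal is the goal center. With world-space
    // vertices, the robot orientation is not needed explicitly.
    Move RigidBodyGradientMove(double rx, double ry,
                               const std::vector<Point2> &vertices,
                               const std::vector<Point2> &closestObstaclePoints,
                               const Point2 &goal)
    {
        double csg[3] = {0.0, 0.0, 0.0};    // overall configuration-space gradient

        for (std::size_t j = 0; j < vertices.size(); ++j)
        {
            // Jacobian of control point j with respect to (x, y, theta):
            //   [[1, 0, -(yj - ry)], [0, 1, (xj - rx)]]
            double jx = -(vertices[j].y - ry);
            double jy =  (vertices[j].x - rx);

            // Repulsive workspace gradient: points toward the nearest obstacle point
            // when closer than a small constant, so the move (opposite to the
            // gradient) pushes the robot away.
            double ox = vertices[j].x - closestObstaclePoints[j].x;
            double oy = vertices[j].y - closestObstaclePoints[j].y;
            double od = std::sqrt(ox * ox + oy * oy);
            double wsgx = 0.0, wsgy = 0.0;
            if (od < 1.0 && od > 1e-9) { wsgx = -5.0 * ox / od; wsgy = -5.0 * oy / od; }

            // Attractive workspace gradient toward the goal.
            wsgx += 0.1 * (vertices[j].x - goal.x);
            wsgy += 0.1 * (vertices[j].y - goal.y);

            // csg += J^T * wsg
            csg[0] += wsgx;
            csg[1] += wsgy;
            csg[2] += jx * wsgx + jy * wsgy;
        }

        // Take a small step opposite to the gradient.
        double norm = std::sqrt(csg[0] * csg[0] + csg[1] * csg[1]) + 1e-12;
        Move m;
        m.dx     = -0.05  * csg[0] / norm;
        m.dy     = -0.05  * csg[1] / norm;
        m.dtheta = -0.005 * csg[2] / (std::fabs(csg[2]) + 1e-12);
        return m;
    }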

Implementation Notes: Manipulator Chain

The environment consists of a 2D manipulator chain, a goal region represented as a circle, and several obstacles, also represented as circles. The simulator provides access to the goal center, the goal radius, the number of obstacles, and the start and end positions of each link in the chain. The simulator also computes the closest point on the i-th obstacle for any given point (x, y); see the function ClosestPointOnObstacle. Moreover, the simulator implements forward kinematics; see the function FK.

When computing the potential, it is recommended that you use the end position of each chain link as a control point. You should follow a similar approach as in the case of rigid bodies. One difference is that in the case of the manipulator chain, the attractive potential should be computed only between the end effector (last point in the chain) and the goal and not between any intermediate links and the goal. In this way, the potential will guide the end effector and not the intermediate links toward the goal. The repulsive potential, as in the case of rigid bodies, should be computed between the end of each link and each obstacle.

Important: You can use an equivalent but easier-to-compute definition of the Jacobian. In this definition, the partial derivative of the end position of the j-th link with respect to the i-th joint angle, for i <= j, is given as

  • x = -LinkEndY(j) + LinkStartY(i)
  • y = LinkEndX(j) - LinkStartX(i)

If using C++ support code, you should implement the function ConfigurationMove in the class ManipPlanner. This function should compute small moves for each link angle. If using Matlab support code, you should implement the function ManipPlanner.
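
The following hedged sketch shows how the Jacobian columns above can be used to accumulate a small move for each joint angle. It is not the support code's ConfigurationMove: the data layout and all scale factors are illustrative assumptions.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point2 { double x, y; };

    // Hedged sketch of the manipulator-chain gradient (assumed interface).
    // linkStarts[i]/linkEnds[i] are the start/end positions of link i in world
    // coordinates, closestObstaclePoints[j] is the nearest obstacle point to the
    // end of link j, and goal is the goal center. Returns one small angle change
    // per joint; all scale factors are placeholders to be tuned.
    std::vector<double> ManipGradientMove(const std::vector<Point2> &linkStarts,
                                          const std::vector<Point2> &linkEnds,
                                          const std::vector<Point2> &closestObstaclePoints,
                                          const Point2 &goal)
    {
        const std::size_t n = linkEnds.size();
        std::vector<double> dtheta(n, 0.0);

        for (std::size_t j = 0; j < n; ++j)
        {
            // Workspace gradient at the end of link j.
            double wsgx = 0.0, wsgy = 0.0;

            // Repulsive part: applied at the end of every link.
            double ox = linkEnds[j].x - closestObstaclePoints[j].x;
            double oy = linkEnds[j].y - closestObstaclePoints[j].y;
            double od = std::sqrt(ox * ox + oy * oy);
            if (od < 1.0 && od > 1e-9) { wsgx = -5.0 * ox / od; wsgy = -5.0 * oy / od; }

            // Attractive part: only between the end effector (last link end) and the goal.
            if (j + 1 == n)
            {
                wsgx += 0.1 * (linkEnds[j].x - goal.x);
                wsgy += 0.1 * (linkEnds[j].y - goal.y);
            }

            // Transform into configuration space via the Jacobian columns above:
            //   d p_j / d theta_i = (-(LinkEndY(j) - LinkStartY(i)), LinkEndX(j) - LinkStartX(i))
            for (std::size_t i = 0; i <= j; ++i)
            {
                double jx = -(linkEnds[j].y - linkStarts[i].y);
                double jy =   linkEnds[j].x - linkStarts[i].x;
                dtheta[i] += jx * wsgx + jy * wsgy;
            }
        }

        // Move each joint by a small step opposite to the gradient.
        for (std::size_t i = 0; i < n; ++i)
            dtheta[i] = -0.001 * dtheta[i];

        return dtheta;
    }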

Support Code: C++

  • Support code can be found in CppRigidBodyPFP and CppManipPFP
  • To compile the code inside each directory use similar steps as in the case of the bug algorithms in project 1
  • The rigid body program can be run on Windows as bin\Debug\Planner.exe robotL.txt and on Linux/MacOS as bin/Planner robotL.txt where the argument is the name of the robot file. You can use one of the provided files or create your own.
  • The manipulator chain program can be run on Windows as bin\Debug\Planner.exe nrLinks linkLength and on Linux/MacOS as bin/Planner nrLinks linkLength where nrLinks is a positive integer representing the number of links and linkLength is a positive real representing the length of each link.

Support Code: MATLAB

  • Support code can be found in MatlabRigidBodyPFP and MatlabManipPFP
  • The rigid body program can be run as RunRigidBodyPlanner('robotL.txt') where the argument is the name of the robot file. You can use one of the provided files or create your own.
  • The manipulator chain program can be run as RunManipPlanner(nrLinks, linkLength) where nrLinks is a positive integer representing the number of links and linkLength is a positive real representing the length of each link.

Project 3: Sampling-based Motion Planning

[support code in C++ and Matlab]

In this project, you will implement several sampling-based motion planners.

  • ExtendTree(vid, sto) (40pts)
    • Function should extend the tree from the vertex with index vid to the state sto
    • Each step should be small, i.e., as specified by distOneStep
    • Each valid intermediate step should be added to the tree
    • Function should stop if an invalid intermediate step is encountered or if the goal is reached (a hedged sketch of ExtendTree and ExtendRRT is given after this list)
  • ExtendRandom() (15pts)
    • Function should select the random state sto from the goal region with small probability p = 0.1 and from the entire bounding box with probability 1 - p (using the SampleState function)
    • Function should select the vertex vid uniformly at random from all the tree vertices
    • After selections are made, function calls ExtendTree(vid, sto)
  • ExtendRRT() (20pts)
    • Function should select the random state sto as described in ExtendRandom
    • Function should select the vertex vid as the closest tree vertex to sto
    • After selections are made, function calls ExtendTree(vid, sto)
  • ExtendEST() (15pts)
    • Function should select the random state sto as described in ExtendRandom
    • Function should select the vertex vid with probability proportional to a weight defined in terms of the number of children coming out of vertex vid
    • After selections are made, function calls ExtendTree(vid, sto)
  • ExtendMyApproach() (20pts)
    • Your approach should show some originality
    • It should at least outperform ExtendRandom and ExtendEST
  • Written Report with your Approach Description and Experimental Results
    • If not properly done, up to 15 points will be deducted
    • Report should include high-level description of your approach as well as pseudo-code
    • When describing your approach, you should highlight its advantages and disadvantages over the other algorithms
    • Experimental results will be obtained by running each method on each provided scene at least ten times. If a run takes more than ten minutes, you should stop it. You will then present in a table (for each combination of method and scene) the average running time and the percentage of runs that found a solution
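
A hedged sketch of ExtendTree and ExtendRRT is given below. The tree representation, the validity and goal tests, and the SampleState signature are illustrative assumptions (only the names distOneStep and SampleState come from the project description); the placeholder bodies stand in for the support code's functionality.

    #include <cmath>
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Hedged sketch of ExtendTree and ExtendRRT (assumed tree representation).
    struct State  { double x, y; };
    struct Vertex { State state; int parent; };

    struct TreeSketch
    {
        std::vector<Vertex> vertices;
        double distOneStep;                    // step size, as in the project description

        // Placeholders standing in for support-code functionality (assumptions).
        bool IsValid(const State &) const  { return true;  }   // collision check placeholder
        bool IsInGoal(const State &) const { return false; }   // goal test placeholder
        State SampleState(bool) const      { State s = {0, 0}; return s; } // sampler placeholder

        // Extend the tree from vertex vid toward state sto in small steps, adding
        // every valid intermediate state; stop at an invalid step or at the goal.
        void ExtendTree(int vid, const State &sto)
        {
            State cur = vertices[vid].state;
            double dx = sto.x - cur.x, dy = sto.y - cur.y;
            double dist = std::sqrt(dx * dx + dy * dy);
            int nrSteps = (int) std::ceil(dist / distOneStep);

            for (int i = 1; i <= nrSteps; ++i)
            {
                State s = { cur.x + dx * i / nrSteps, cur.y + dy * i / nrSteps };
                if (!IsValid(s))
                    return;                          // stop at the first invalid step
                Vertex v = { s, vid };
                vertices.push_back(v);
                vid = (int) vertices.size() - 1;     // chain the intermediate vertices
                if (IsInGoal(s))
                    return;                          // stop when the goal is reached
            }
        }

        // RRT extension: goal-biased sampling plus nearest-vertex selection.
        void ExtendRRT()
        {
            bool nearGoal = (std::rand() / (double) RAND_MAX) < 0.1;   // p = 0.1 goal bias
            State sto = SampleState(nearGoal);

            int best = 0;
            double bestDist = 1e30;
            for (std::size_t i = 0; i < vertices.size(); ++i)
            {
                double dx = vertices[i].state.x - sto.x, dy = vertices[i].state.y - sto.y;
                double d = dx * dx + dy * dy;
                if (d < bestDist) { bestDist = d; best = (int) i; }
            }
            ExtendTree(best, sto);
        }
    };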

The environment consists of a robot, a goal, and obstacles (all represented as circles). The graphical interface supports the dynamic addition and displacement of obstacles and the goal. A new obstacle can be added at any time by pressing down the left button of the mouse. Each obstacle and the goal can be moved around by clicking inside them and holding the left button down while moving the mouse. The user can change the radius of each obstacle and the goal by first pressing the key 'r' and then clicking inside an obstacle/goal and holding the left button down while moving the mouse. Pressing 'r' again toggles between editing the radius and moving the obstacles and the goal.

First edit the scene and then run the planner. Do not make changes to the scene once the planner has started. Also, do not change the initial position of the robot via the graphical interface; instead, use the input text file.

  • ExtendRandom can be run by pressing 1
  • ExtendRRT can be run by pressing 2
  • ExtendEST can be run by pressing 3
  • ExtendMyApproach can be run by pressing 4

Pressing 'p' toggles between running and stopping the planner.

Support Code: C++

  • Support code can be found in CppMP
  • To compile the code inside each directory use similar steps as in the case of the bug algorithms in project 1
  • The rigid body program can be run on Windows as bin\Debug\Planner.exe Scene1.txt and on Linux/MacOS as bin/Planner Scene1.txt where the argument is the name of the input file. You can use one of the provided files or create your own.

Support Code: MATLAB

  • Support code can be found in MatlabMP
  • The program can be run as RunPlanner('Scene1.txt') where the argument is the name of the input file. You can use one of the provided files or create your own.
  • Use the ESC key to terminate the program and then close the figure window

Final Project

For the final project, you can work on your own or with at most two other students. Each team can select from the list of final projects described below or propose their own.

  • Motion Planning for Robots with Differential Constraints

    Robot motions need to obey physical constraints due to underlying robot dynamics, e.g., a car cannot just move in any direction. To account for physical constraints, robot motions are often expressed as a set of differential equations

    x' = f(x, u)
    where x denotes the state of the robot and u denotes the control input applied to the robot. The state can be thought of as an augmented configuration, which describes velocity, steering-wheel angle, and other components in addition to the robot position and orientation. The control u is an external input that is applied to the robot to control its motions. For example, a car is controlled by applying an acceleration and a rotational velocity of the steering wheel. For the final project, you could implement a sampling-based approach to solve the motion-planning problem in the case of robots with differential constraints. (A minimal integration sketch of a car-like model follows.)
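
    As an illustration, here is a hedged sketch of a simple car-like model x' = f(x, u) integrated with Euler steps; the state and control definitions and the constants are assumptions for illustration only, not the course's model.

      #include <cmath>

      // Hedged sketch: simple car-like model x' = f(x, u) integrated with Euler steps.
      // State: position (x, y), orientation theta, speed v, steering angle phi.
      // Control: acceleration a and steering-wheel rotational velocity w.
      struct CarState   { double x, y, theta, v, phi; };
      struct CarControl { double a, w; };

      CarState Integrate(CarState s, const CarControl &u, double dt, int nrSteps)
      {
          const double L = 1.0;    // distance between front and rear axles (assumed)
          for (int i = 0; i < nrSteps; ++i)
          {
              s.x     += dt * s.v * std::cos(s.theta);
              s.y     += dt * s.v * std::sin(s.theta);
              s.theta += dt * s.v * std::tan(s.phi) / L;
              s.v     += dt * u.a;
              s.phi   += dt * u.w;
          }
          return s;
      }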

  • Motion Planning for Multiple Robots

    Given several robots Robot_1, ..., Robot_N, initial configurations qinit_1, ..., qinit_N, and goal configurations qgoal_1, ..., qgoal_N, the objective is to compute for each Robot_i a path from qinit_i to qgoal_i that avoids collisions not only with the obstacles but also with the other robots. Motion-planning approaches for multiple robots can be divided into three categories: centralized planning, decoupled planning, and prioritized planning.

    • Centralized planning treats the robots as just one robot whose configuration space consists of the composition of the configuration spaces of all the individual robots
    • Decoupled planning first plans paths for each robot from its initial to its goal configuration such that the path avoids collisions with obstacles. When planning for the i-th robot all the other robots are completely ignored. At a second stage, the individual robot paths are coordinated so that collisions among robots are also avoided.
    • Prioritized planning provides an in-between strategy. Similar to decoupled planning, paths are planned for each robot individually. However, when planning the path for the i-th robot, prioritized approaches treat robots 1, ..., i-1 as moving obstacles. The objective is then to plan the path for the i-th robot so that it not only avoids collisions with the obstacles but also avoids collisions with the previous robots, which move according to their planned paths (a sketch of this collision check is given after this list)
    For the final project, you could implement one of the above approaches.
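
    For the prioritized strategy, the core addition to a standard planner is a moving-obstacle check; a hedged sketch is given below, with disc-shaped robots and a simple time-indexed path representation as illustrative assumptions.

      #include <algorithm>
      #include <cmath>
      #include <cstddef>
      #include <vector>

      // Hedged sketch of the moving-obstacle check used in prioritized planning.
      // pathsOfPreviousRobots[k][t] is the planned position of robot k at time step t
      // (the last position is held if its path is shorter). Disc robots are assumed.
      struct Pos { double x, y; };

      bool CollidesWithPreviousRobots(const Pos &candidate, std::size_t timeStep,
                                      const std::vector<std::vector<Pos> > &pathsOfPreviousRobots,
                                      double robotRadius)
      {
          for (std::size_t k = 0; k < pathsOfPreviousRobots.size(); ++k)
          {
              const std::vector<Pos> &path = pathsOfPreviousRobots[k];
              // Robots that already reached their goal stay at their last position.
              const Pos &other = path[std::min(timeStep, path.size() - 1)];
              double dx = candidate.x - other.x, dy = candidate.y - other.y;
              if (std::sqrt(dx * dx + dy * dy) < 2.0 * robotRadius)
                  return true;
          }
          return false;
      }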
  • Manipulation Planning

    The objective in manipulation planning is for the robot to grasp an object and place it in a different configuration while avoiding collisions with obstacles. Note that, as part of the solution, it may be necessary for the robot to temporarily lay the object on the ground and re-grasp it at a different position. For the final project, you could implement a sampling-based approach for manipulation-planning problems. In particular, you could implement FuzzyPRM (Nielsen, Kavraki: "A Two Level Fuzzy PRM for Manipulation Planning", IROS 2000, pp. 1716–1722), which has been shown to effectively solve manipulation-planning problems.

  • Improved Sampling-based Motion Planners

  • Robot Localization

    In this setting, the objective is to estimate the global pose of the robot based on a model of its motions and sensor measurements. You can use various approaches, such as Kalman filters, particle filters, etc. (A minimal particle-filter sketch follows.)
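
    For instance, here is a hedged sketch of one particle-filter update (predict with a noisy motion model, weight by a sensor likelihood, resample); the motion model, landmark-range sensor, and all noise parameters are illustrative assumptions.

      #include <cmath>
      #include <cstddef>
      #include <random>
      #include <vector>

      // Hedged sketch of one particle-filter update for 2D pose estimation.
      // A real implementation would use the robot's odometry and range sensors.
      struct Pose { double x, y, theta; };

      void ParticleFilterUpdate(std::vector<Pose> &particles,
                                double dForward, double dTurn,    // odometry reading (assumed)
                                double measuredRange,             // range to a known landmark (assumed)
                                const Pose &landmark,
                                std::mt19937 &rng)
      {
          std::normal_distribution<double> motionNoise(0.0, 0.05);
          std::vector<double> weights(particles.size());

          for (std::size_t i = 0; i < particles.size(); ++i)
          {
              // 1. Predict: apply the noisy motion model to every particle.
              Pose &p = particles[i];
              p.theta += dTurn + motionNoise(rng);
              p.x     += (dForward + motionNoise(rng)) * std::cos(p.theta);
              p.y     += (dForward + motionNoise(rng)) * std::sin(p.theta);

              // 2. Weight: likelihood of the measurement given the particle.
              double expectedRange = std::hypot(landmark.x - p.x, landmark.y - p.y);
              double err = measuredRange - expectedRange;
              weights[i] = std::exp(-0.5 * err * err / (0.1 * 0.1));
          }

          // 3. Resample particles in proportion to their weights.
          std::discrete_distribution<std::size_t> resample(weights.begin(), weights.end());
          std::vector<Pose> newParticles(particles.size());
          for (std::size_t i = 0; i < particles.size(); ++i)
              newParticles[i] = particles[resample(rng)];
          particles.swap(newParticles);
      }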