Cognitive Robotics

From David Vernon's Wiki

XX-YYY

Course discipline: TBD

Elective

Units: 12 (could also be run as two independent seven-week 6-unit minis, running consecutively)

Lecture/Lab/Rep hours/week: 3 hours lecture/week, 3 hours lab/week

Semester: Spring

Pre-requisites: Programming skills

Course description:

Cognitive robotics is an emerging discipline that draws on robotics, artificial intelligence, and cognitive science. It often exploits models based on biological cognition.

There are at least two reasons why a cognitive ability is useful in robotics:

  1. It allows the robot to work autonomously in challenging environments, adapting to changes and unforeseen situations, and anticipating outcomes when selecting the actions it will perform.
  2. It facilitates interaction with people. Humans have a strong preference for interacting with other cognitive agents, so being able to exhibit a capacity for cognition encourages human-robot interaction. Conversely, a cognitive ability allows the robot to infer the goals and intentions of the person it is interacting with, and thereby to interact in a safe and helpful manner.

Cognitive robots achieve their goals by perceiving their environment, paying attention to the events that matter, planning what to do, anticipating the outcome of their own actions and the actions of other agents (people and other robots), and learning from the resultant interaction. They deal with the inherent uncertainty of natural environments by continually learning, reasoning, and sharing their knowledge.

A key feature of cognitive robotics is its focus on predictive capabilities to augment and compensate for perceptual capacities. Also, by being able to view the world from someone else’s perspective, a cognitive robot can anticipate that person’s intended actions and needs.

In cognitive robotics, the robot body can be more than just an instrument for physical manipulation or locomotion: it can also be a component of the cognitive process. In the particular case of humanoid robotics, the robot’s physical morphology, kinematics, and dynamics, as well as the environment in which it is operating, can help it to achieve its key characteristic of adaptive anticipatory interaction by mirroring the actions of the person with whom it is interacting.

This course introduces the key elements of cognitive robotics, touching on all of these issues. In doing so, it emphasizes both theory and practice and makes extensive use of physical robots, both mobile robots and manipulator arms, as well as different sensor technologies including RGB-D cameras.

Learning objectives:

The primary goal of this course is to provide students with an intensive treatment of a cross-section of the key elements of robotics, robot vision, AI, and cognitive science. Students will learn about the fundamentals of 2D and 3D visual sensing, focussing on some of the essential techniques for mobile robots and robot arms. They will then learn about the kinematics and inverse kinematics of mobile robots, addressing locomotion, mapping, and path planning, as well as robot arm kinematics, manipulation, and programming. Based on these foundations, students will progress quickly to cover the topics that give cognitive robotics its special focus, including reasoning, cognitive architectures, learning and development, memory, attention, prospection by internal simulation, and social interaction.

Outcomes:

After completing this course, students should be able to:

  • Apply their knowledge of machine vision and robot kinematics to create computer programs that control mobile robots and robot arms, enabling the robots to recognize and manipulate objects and navigate their environments.
  • Explain how a robot can be designed to exhibit cognitive goal-directed behaviour through the integration of computer models of visual attention, reasoning, learning, prospection, and social interaction.
  • Create computer programs that realize limited instances of each of these models.

Content details:

(For a detailed plan of lectures and labs, see Cognitive Robotics Lectures and Labs.)

The course will cover the following topics:

  • Cognitive robotics
  • Robot vision
  • Mobile robots
  • Robot arms
  • Constraint-based reasoning for robotics
  • Cognitive architectures
  • Learning and development
  • Memory and Prospection
  • Internal simulation
  • Visual attention
  • Social interaction

The detailed content for each of these topics follows.

Cognitive robotics

  • Introduction to AI and cognition in robotics.
  • Industrial requirements.
  • Artificial cognitive systems.
  • Cognitivist, emergent, and hybrid paradigms in cognitive science.
  • Autonomy.

Robot vision

  • Optics, sensors, and image formation.
  • Image acquisition.
  • Image filtering.
  • Edge detection.
  • Segmentation.
  • Hough transform: line, circle, and generalized transform; extension to codeword features.
  • Colour-based segmentation.
  • Object recognition.
  • Interest point operators.
  • Gradient orientation histogram - SIFT descriptor.
  • Colour histogram intersection.
  • Haar features, boosting, face detection.
  • Homogeneous coordinates and transformations.
  • Perspective transformation.
  • Camera model and inverse perspective transformation.
  • Stereo vision.
  • Epipolar geometry.
  • Structured light & RGB-D cameras.
  • Visual attention.
  • Plane pop-out.
  • RANSAC.
  • Differential geometry.
  • Surface normals and Gaussian sphere.
  • Point clouds.
  • 3D descriptors.
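
A minimal sketch illustrating two of the topics above, edge detection and the Hough line transform, using OpenCV's Python interface (the input file name is a placeholder and the thresholds are arbitrary example values):

  import cv2
  import numpy as np

  # Load a greyscale image (placeholder file name)
  img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

  # Canny edge detection with example low/high thresholds
  edges = cv2.Canny(img, 50, 150)

  # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2)
  lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                          minLineLength=30, maxLineGap=10)

  # Draw the detected segments on a colour copy of the image
  out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
  if lines is not None:
      for x1, y1, x2, y2 in lines[:, 0]:
          cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 2)
  cv2.imwrite("lines.png", out)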

Mobile robots

  • Differential drive locomotion.
  • Forward and inverse kinematics.
  • Holonomic and non-holonomic constraints.
  • Cozmo mobile robot.
  • Map representation.
  • Probabilistic map-based localization.
  • Landmark-based localization.
  • SLAM: simultaneous localization and mapping.
  • Extended Kalman Filter (EKF) SLAM.
  • Visual SLAM.
  • Particle filter SLAM.
  • Graph search path planning.
  • Potential field path planning.
  • Navigation. Obstacle avoidance.
  • Object search.
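
As an illustration of the differential-drive kinematics above, the following is a minimal sketch that advances a robot pose (x, y, theta) from the left and right wheel angular velocities over a short time step (the wheel radius, axle length, and velocities are arbitrary example values):

  import math

  def update_pose(x, y, theta, v_left, v_right, wheel_radius, axle_length, dt):
      """Differential-drive forward kinematics: advance the pose (x, y, theta)
      given left/right wheel angular velocities (rad/s) over a time step dt."""
      v = wheel_radius * (v_right + v_left) / 2.0              # linear velocity
      omega = wheel_radius * (v_right - v_left) / axle_length  # angular velocity
      x += v * math.cos(theta) * dt
      y += v * math.sin(theta) * dt
      theta += omega * dt
      return x, y, theta

  # Example: 1 s of motion with the right wheel turning slightly faster than the left
  pose = (0.0, 0.0, 0.0)
  for _ in range(100):
      pose = update_pose(*pose, v_left=2.0, v_right=2.2,
                         wheel_radius=0.03, axle_length=0.08, dt=0.01)
  print(pose)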

Robot arms

  • Homogeneous transformations.
  • Frame-based pose specification.
  • Denavit-Hartenberg specifications.
  • Robot kinematics.
  • Analytic inverse kinematics.
  • Iterative approaches.
  • Kinematic structure learning.
  • Kinematic structure correspondences.
  • Robot manipulation.
  • Frame-based task specification.
  • Vision-based pose estimation.
  • Language-based programming.
  • Programming by demonstration.
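
A sketch of the homogeneous transformations and Denavit-Hartenberg specifications listed above: the function below builds the standard DH link transformation and chains two of them to compute the forward kinematics of a planar two-link arm (the link lengths and joint angles are arbitrary example values):

  import numpy as np

  def dh_transform(theta, d, a, alpha):
      """Homogeneous transformation for one link, standard DH convention:
      Rot(z, theta) Trans(z, d) Trans(x, a) Rot(x, alpha)."""
      ct, st = np.cos(theta), np.sin(theta)
      ca, sa = np.cos(alpha), np.sin(alpha)
      return np.array([[ct, -st * ca,  st * sa, a * ct],
                       [st,  ct * ca, -ct * sa, a * st],
                       [ 0,       sa,       ca,      d],
                       [ 0,        0,        0,      1]])

  # Forward kinematics of a planar two-link arm (example DH parameters)
  T = dh_transform(np.pi / 4, 0.0, 0.30, 0.0) @ dh_transform(np.pi / 6, 0.0, 0.25, 0.0)
  print(T[:3, 3])   # position of the end-effector frame origin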

Constraint-based reasoning for robotics

  • Constraint satisfaction problems (CSP).
  • Meta-Constraints and Meta-CSP reasoning.
  • Planning and navigation with multiple mobile robots.
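
To make the CSP formulation concrete, here is a small self-contained sketch of backtracking search over a toy problem in which mobile robots that share a workspace must be assigned different corridors (the variables, domains, and constraints are invented for illustration):

  def consistent(var, value, assignment, constraints):
      """Check the binary 'must take different values' constraints involving var."""
      for a, b in constraints:
          if var == a and assignment.get(b) == value:
              return False
          if var == b and assignment.get(a) == value:
              return False
      return True

  def backtrack(assignment, variables, domains, constraints):
      """Depth-first backtracking search for a CSP."""
      if len(assignment) == len(variables):
          return dict(assignment)
      var = next(v for v in variables if v not in assignment)
      for value in domains[var]:
          if consistent(var, value, assignment, constraints):
              assignment[var] = value
              result = backtrack(assignment, variables, domains, constraints)
              if result is not None:
                  return result
              del assignment[var]
      return None

  # Three robots, two corridors; robots that share a workspace need different corridors
  variables = ["robot1", "robot2", "robot3"]
  domains = {v: ["corridor_A", "corridor_B"] for v in variables}
  constraints = [("robot1", "robot2"), ("robot2", "robot3")]
  print(backtrack({}, variables, domains, constraints))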

Cognitive architectures

  • Role and requirements.
  • Cognitive architecture schemas.
  • Example cognitive architectures including Soar, ACT-R, Clarion, LIDA, and ISAC.
  • CRAM: Cognitive Robot Abstract Machine.
  • CRAM Plan Language (CPL).
  • KnowRob knowledge processing and reasoning.

Learning and development

  • Supervised, unsupervised, and reinforcement learning.
  • Hebbian learning.
  • Predictive sequence learning (PSL).
  • Cognitive development in humans and robots.
  • Value systems for developmental and cognitive robots.
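
As a small illustration of the Hebbian learning rule listed above, in which the weight between two units is strengthened in proportion to the product of their activations, here is a minimal numpy sketch (the inputs, learning rate, and number of iterations are arbitrary example values):

  import numpy as np

  rng = np.random.default_rng(0)
  eta = 0.01                          # learning rate
  w = rng.normal(scale=0.1, size=3)   # small random initial weights

  # Present random input patterns drawn from a fixed distribution
  for _ in range(200):
      x = rng.normal(size=3)          # pre-synaptic activations
      y = float(w @ x)                # post-synaptic activation (linear unit)
      w += eta * y * x                # Hebbian update: delta_w = eta * y * x

  print(w)   # note: plain Hebbian learning is unstable; Oja's rule adds normalisation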

Memory and Prospection

  • Declarative vs. procedural memory.
  • Semantic memory.
  • Episodic memory.

Internal simulation

  • Forward and inverse models.
  • Internal simulation hypothesis.
  • Internal simulation with PSL.
  • HAMMER cognitive architecture.
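
To give a concrete sense of the internal simulation hypothesis, the sketch below chains a forward model to predict the outcome of candidate action sequences without executing them, and selects the sequence whose predicted outcome is closest to a goal. The forward model, states, and actions here are hand-coded for illustration; in the approaches covered in this topic the model would be learned, e.g. with PSL:

  from itertools import product

  def forward_model(state, action):
      """Predict the next state (1-D position) given an action; in practice this
      mapping would be learned rather than hand-coded."""
      step = {"left": -1.0, "right": 1.0, "stay": 0.0}[action]
      return state + step

  def simulate(state, actions):
      """Internally simulate a sequence of actions by chaining the forward model."""
      for a in actions:
          state = forward_model(state, a)
      return state

  goal = 3.0
  start = 0.0
  candidates = product(["left", "right", "stay"], repeat=4)
  best = min(candidates, key=lambda seq: abs(simulate(start, seq) - goal))
  print(best)   # e.g. ('right', 'right', 'right', 'stay')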

Visual attention

  • Visual attention.
  • Spatial attention vs. selective attention.
  • Saliency functions.
  • Selective Tuning.
  • Overt attention.
  • Inhibition of return.
  • Habituation.
  • Top-down attention.
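
As a small illustration of the saliency functions listed above, the following sketch computes a crude bottom-up intensity saliency map from centre-surround differences at two scales using OpenCV, and reports the most salient location as a candidate focus of attention (the file name and blur sizes are arbitrary example values):

  import cv2
  import numpy as np

  img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

  # Centre-surround differences: |fine-scale image - coarse-scale image|
  fine    = cv2.GaussianBlur(img, (5, 5), 0)
  coarse1 = cv2.GaussianBlur(img, (31, 31), 0)
  coarse2 = cv2.GaussianBlur(img, (63, 63), 0)
  saliency = np.abs(fine - coarse1) + np.abs(fine - coarse2)

  # Normalise to [0, 255] and locate the most salient point
  saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX)
  _, _, _, max_loc = cv2.minMaxLoc(saliency)
  print("most salient location:", max_loc)
  cv2.imwrite("saliency.png", saliency.astype(np.uint8))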

Social interaction

  • Joint action.
  • Joint attention.
  • Shared intention.
  • Shared goals.
  • Perspective taking.
  • Theory of mind.
  • Action and intention recognition.
  • Learning from demonstration.
  • Humanoid robotics.

Faculty:

David Vernon

Delivery:

Face-to-face

Student assessment:

(To be confirmed)

  • Lab assignments: 50%
  • Mid-term exam: 20%
  • Final exam: 30%

Robots and sensors:

Orbbec Astra RGB-D sensor

Anki Cozmo mobile robot

Lynxmotion AL5D Robotic Arm with BotBoarduino interface

Software requirements:

(A complete software installation guide will be provided in due course.)

Homebrew: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Python: brew install python3

Python: alternative installation of Python 3.5.2 for Mac OS X or Windows. You may need to update Tcl/TK to version 8.5.18.0 (see Python documentation).

Anki Cozmo SDK
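
For orientation, a minimal Cozmo SDK program follows the pattern below (a sketch based on the SDK's standard hello-world example; the spoken text is arbitrary):

  import cozmo

  def cozmo_program(robot: cozmo.robot.Robot):
      # Ask Cozmo to speak; wait_for_completed() blocks until the action finishes
      robot.say_text("Hello from the Cognitive Robotics course").wait_for_completed()

  # Connects to the robot via the Cozmo app and runs the program
  cozmo.run_program(cozmo_program)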

OpenCV

Vienna University of Technology Software Tools

BLORT - The Blocks World Robotic Toolbox from Vienna University of Technology.

RGB-D segmentation library from Vienna University of Technology.

V4R Library -The Vision4Robotics library (RGB-D point cloud) from Vienna University of Technology.

Arduino sketch programs for Lynxmotion

Ubuntu

ROS Indigo

TurtleBot simulator

Turtlebot

Protégé OWL Editor

DFKI GmbH ROS stack for active perception

DFKI GmbH 3D SLAM and surface reconstruction

Java 7: sudo apt-get install openjdk-7-jdk

Meta-CSP

GIT: sudo apt-get install git

git clone https://github.com/FedericoPecora/meta-csp-framework.git -b project-template project-template

Eclipse Neon IDE

Imperial College London (ICL) Personal Robotics Lab Software Tools

HAMMER cognitive architecture based on the simulation theory of mind from ICL

Markerless Perspective Taking from ICL

openEASE web-based knowledge service

Recommended reading:

Argall, B. D., Chernova, S., Veloso, M., and Browning, B. (2009). "A survey of robot learning from demonstration", Robotics and Autonomous Systems, 57:469–483.

Beetz, M., Mösenlechner, L., and Tenorth, M. (2010). "CRAM - A Cognitive Robot Abstract Machine for Everyday Manipulation in Human Environments", IEEE/RSJ International Conference on Intelligent Robots and Systems, 1012-1017.

Billard, A., Calinon, S., Dillmann, R. and Schaal, S. (2008). "Robot programming by demonstration". In Springer Handbook of Robotics, pages 1371–1394.

Billing, E., Hellström, T., and Janlert, L-E. (2011). "Predictive Learning from Demonstration", in ICAART 2010, CCIS 129, Filipe, J., Fred, A., and Sharp, B. (Eds.), pp. 186-200.

Billing, E., Svensson, H., Lowe, R. and Ziemke, T. (2016). "Finding Your Way from the Bed to the Kitchen: Reenacting and Recombining Sensorimotor Episodes Learned from Human Demonstration", Frontiers in Robotics and AI, Vol. 3.

Borji, A. and Itti, L. (2013). "State-of-the-Art in Visual Attention Modeling", IEEE Transactions on Pattern Analysis and Machine intelligence, Vol. 35, No. 1, pp. 185-207.

Cangelosi, A. and Schlesinger, M. (2015). Developmental Robotics: From Babies to Robots. Cambridge, MA: MIT Press.

Chella, A., Kurup, U., Laird, J., Trafton, G., Vinokurov, J., Chandrasekaran, B. (2013). "The Challenge of Robotics for Cognitive Architectures", Proc. 12th International Conference on Cognitive Modelling.

Dechter, R. (2003). Constraint Processing, Morgan Kaufmann.

Demiris, Y. and Khadhouri, B. (2006). "Hierarchical attentive multiple models for execution and recognition (HAMMER)", Robotics and Autonomous Systems, 54:361–369.

Harmon, M. and Harmon, S. (1997). Reinforcement Learning: A Tutorial

Kragic, D. and Vincze, M. (2010). "Vision for Robotics", Foundations and Trends in Robotics, Vol 1, No 1, pp 1–78.

Lungarella, M., Metta, G., Pfeifer, R. and Sandini, G. (2003). "Developmental Robotics: A Survey", Connection Science, 17, pp. 151-190.

Mansouri, M. and Pecora, F. "More knowledge on the table: Planning with space, time and resources for robots", Proc. IEEE Int. Conf. on Robotics and Automation (ICRA).

Merrick, K. (2016). "Value systems for developmental cognitive robotics: a survey", Cognitive Systems Research, in press.

Paul, R. (1981). Robot Manipulators: Mathematics, Programming, and Control. MIT Press.

Russell, S. and Norvig, P. (2014). Artificial Intelligence: A Modern Approach, Pearson Education.

Sarabia, M., Ros, R. and Demiris, Y. (2011). "Towards an open-source social middleware for humanoid robots", in Proceedings of the IEEE-RAS International Conference on Humanoid Robots, pp. 670-675.

Scheutz, M., Harris, J., and Schermerhorn, P. (2013). "Systematic Integration of Cognitive and Robotic Architectures", Advances in Cognitive Systems, Vol. 2, pp. 277-296.

Sun, R. and Giles, C. L. (2001). "Sequence Learning: From Recognition and Prediction to Sequential Decision Making", IEEE Intelligent Systems and Their Applications, Vol. 16, No. 4, pp. 67-70.

Szeliski, R. (2010). Computer Vision: Algorithms and Applications, Springer.

Vernon, D. (1991). Machine Vision: Automated Visual Inspection and Robot Vision, Prentice-Hall.

Vernon, D. (2014). Artificial Cognitive Systems: A Primer, MIT Press.

Vernon, D., von Hofsten, C. and Fadiga, L. (2016). "Desiderata for Developmental Cognitive Architectures", Biologically Inspired Cognitive Architectures, in press.

Documentation:

Cozmo SDK API

Manchester OWL Tutorial

OpenCV Python Tutorial

Point Cloud Tutorial

Protégé Tutorial

Python Tutorial

ROS Tutorial

HAMMER Tutorial

OpenEase

Acknowledgments:

The syllabus for this course drew inspiration from several sources. These include the following.

  • Course 15-494/694 Cognitive Robotics given by Dave Touretzky at Carnegie Mellon University.
  • Course VO 4.0 376.054 Machine Vision and Cognitive Robotics given by Markus Vincze, Michael Zillich, and Daniel Wolf at Vienna University of Technology.
  • Course IT921F Artificial Cognitive Systems given by David Vernon at the University of Skövde.
  • Tutorial on 3D semantic perception at the Third Örebro University Winter School on Artificial Intelligence and Robotics given by Joachim Hertzberg and Thomas Wiemann, Osnabrück University, and Martin Günther, DFKI Robotics Innovation Center, Germany.
  • Tutorial on Constraint-based Reasoning at the Third Örebro University Winter School on Artificial Intelligence and Robotics given by Federico Pecora and Masoumeh Mansouri, University of Örebro, Sweden.