= System Architecture Guide =
== Introduction ==
This guide provides an overview of the DREAM system architecture, as described in Deliverable D3.1.
The DREAM software system comprises three main sub-systems, corresponding to work-packages WP4 (Sensing and Interpretation), WP5 (Child Behaviour Analysis) and WP6 (Robot Behaviour). Initially, these three sub-systems are implemented by three place-holder components, as follows.
* <code>sensoryInterpretation</code>
* <code>childBehaviourClassification</code>
* <code>cognitiveControl</code>
The functionality of each sub-system will be developed incrementally as the project progresses and as new components that implement part of the functionality encapsulated in the place-holder components are developed and integrated into the system.
During integration, white-box testing will be performed at system level by removing the driver and stub functions that simulate the output and input of data in the top-level system architecture, i.e. in one of the three components above, so that this source and sink functionality is provided instead by the component being integrated.
== Place-holder Component Functionality ==
The functionality of <code>sensoryInterpretation</code> is specified completely by the 25 perception primitives defined in Section 2 of Deliverable D1.3 (Child Behaviour Specification).
The functionality of <code>cognitiveControl</code> is specified partially by the seven action primitives defined in Section 2 of Deliverable D1.2 (Robot Behaviour Specification). It is only a partial specification because the basis for invoking each of these action primitives has not yet been defined (whereas, in the case of <code>sensoryInterpretation</code>, all of the primitives are continually invoked to monitor the status of the robot's environment).
The functionality of <code>childBehaviourClassification</code> is encapsulated by three primitives, as follows (the primitives are defined below).

* <code>getChildBehaviour(state)</code>
* <code>getChildMotivation(degree_of_engagement)</code>
* <code>getChildPerformance(degree_of_performance)</code>
== Primitives and Ports ==
The parameters of every primitive in the three sub-systems are exposed by two dedicated ports, one for input and one for output, with the arguments encapsulated in a YARP bottle, a simple but flexible way of sending and receiving messages in YARP (for an example, see the <code>protoComponent</code> port used by the <code>respond()</code> method in <code>protoComponentConfiguration.cpp</code>, as described in the DREAM wiki).
The general naming convention for the two ports is <code>/<primitive name>:i</code> for input and <code>/<primitive name>:o</code> for output.
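As a concrete illustration, the sketch below (not DREAM project code; the primitive and values are chosen for illustration) shows how the arguments of a primitive such as <code>getHead(head_x, y, z)</code> can be packed into and unpacked from a YARP bottle.

<pre>
// Minimal sketch: encapsulating a primitive's arguments in a YARP bottle.
// The coordinate values are illustrative.
#include <yarp/os/Bottle.h>
#include <cstdio>

int main() {
    yarp::os::Bottle b;

    // Pack the arguments of, e.g., getHead(head_x, y, z) for transmission
    b.addDouble(0.10);   // head_x
    b.addDouble(0.25);   // y
    b.addDouble(0.80);   // z

    // Unpack on the receiving side, in the same order
    double x = b.get(0).asDouble();
    double y = b.get(1).asDouble();
    double z = b.get(2).asDouble();

    std::printf("head at (%.2f, %.2f, %.2f)\n", x, y, z);
    return 0;
}
</pre>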
=== The <code>sensoryInterpretation</code> Component ===
The following are the primitives and associated input and output ports in the <code>sensoryInterpretation</code> component.
Note that not all primitives have input parameters. The components for those that do are stateful, i.e. once the associated argument values are set, they remain persistently in that state until reset by another input.
<pre>
checkMutualGaze()
/sensoryInterpretation/checkMutualGaze:o

getArmAngle(left_azimuth, elevation, right_azimuth, elevation)
/sensoryInterpretation/getArmAngle:o

getBody(body_x, y, z)
/sensoryInterpretation/getBody:o

getBodyPose(<joint_i>)
/sensoryInterpretation/getBodyPose:o

getEyeGaze(eye, x, y, z)
/sensoryInterpretation/getEyeGaze:i
/sensoryInterpretation/getEyeGaze:o

getEyes(eyeL_x, y, z, eyeR_x, y, z)
/sensoryInterpretation/getEyes:o

getFaces(<x, y, z>)
/sensoryInterpretation/getFaces:o

getGripLocation(object_x, y, z, grip_x, y, z)
/sensoryInterpretation/getGripLocation:i
/sensoryInterpretation/getGripLocation:o

getHands(<x, y, z>)
/sensoryInterpretation/getHands:o

getHead(head_x, y, z)
/sensoryInterpretation/getHead:o

getHeadGaze(<plane_x, y, z>, x, y, z)
/sensoryInterpretation/getHeadGaze:i
/sensoryInterpretation/getHeadGaze:o

getHeadGaze(x, y, z)
/sensoryInterpretation/getHeadGaze:o

getObjects(<x, y, z>)
/sensoryInterpretation/getObjects:o

getObjects(centre_x, y, z, radius, <x, y, z>)
/sensoryInterpretation/getObjects:i
/sensoryInterpretation/getObjects:o

getObjectTableDistance(object_x, y, z, vertical_distance)
/sensoryInterpretation/getObjectTableDistance:i
/sensoryInterpretation/getObjectTableDistance:o

getSoundDirection(threshold, azimuth, elevation)
/sensoryInterpretation/getSoundDirection:i
/sensoryInterpretation/getSoundDirection:o

identifyFace(x, y, z, face_id)
/sensoryInterpretation/identifyFace:i
/sensoryInterpretation/identifyFace:o

identifyFaceExpression(x, y, z, expression_id)
/sensoryInterpretation/identifyFaceExpression:i
/sensoryInterpretation/identifyFaceExpression:o

identifyObject(x, y, z, object_id)
/sensoryInterpretation/identifyObject:i
/sensoryInterpretation/identifyObject:o

identifyTrajectory(<x, y, z, t>, trajectory_descriptor)
/sensoryInterpretation/identifyTrajectory:i
/sensoryInterpretation/identifyTrajectory:o

identifyVoice(voice_descriptor)
/sensoryInterpretation/identifyVoice:o

recognizeSpeech(text)
/sensoryInterpretation/recognizeSpeech:o

trackFace(seed_x, y, z, time_interval, projected_x, y, z)
/sensoryInterpretation/trackFace:i
/sensoryInterpretation/trackFace:o

trackHand(seed_x, y, z, time_interval, projected_x, y, z)
/sensoryInterpretation/trackHand:i
/sensoryInterpretation/trackHand:o

trackObject(objectDescriptor, seed_x, y, z, time_interval, projected_x, y, z)
/sensoryInterpretation/trackObject:i
/sensoryInterpretation/trackObject:o
</pre>
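To illustrate the stateful behaviour noted above, the following sketch sets the <code>threshold</code> input of <code>getSoundDirection()</code> once and then reads successive direction estimates. The client-side port names (<code>/client/...</code>) are assumptions for the example; the <code>sensoryInterpretation</code> port names are those listed above.

<pre>
// Sketch: using a stateful primitive. The threshold is written once to the
// :i port and persists while (azimuth, elevation) estimates are read from
// the :o port. Assumes a YARP name server is running.
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Bottle.h>
#include <cstdio>

int main() {
    yarp::os::Network yarp;

    yarp::os::BufferedPort<yarp::os::Bottle> out, in;
    out.open("/client/getSoundDirection:o");   // feeds the primitive's :i port
    in.open("/client/getSoundDirection:i");    // reads the primitive's :o port

    yarp::os::Network::connect("/client/getSoundDirection:o",
                               "/sensoryInterpretation/getSoundDirection:i");
    yarp::os::Network::connect("/sensoryInterpretation/getSoundDirection:o",
                               "/client/getSoundDirection:i");

    // Set the threshold once; the primitive stays in this state until reset
    yarp::os::Bottle& cmd = out.prepare();
    cmd.clear();
    cmd.addDouble(0.5);   // threshold (illustrative value)
    out.write();

    // Read successive direction estimates
    for (int i = 0; i < 10; i++) {
        yarp::os::Bottle* reply = in.read();   // blocking read
        if (reply != NULL) {
            std::printf("azimuth %.1f, elevation %.1f\n",
                        reply->get(0).asDouble(), reply->get(1).asDouble());
        }
    }
    return 0;
}
</pre>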
=== The <code>childBehaviourClassification</code> Component ===
The following are the primitives and associated output ports in the <code>childBehaviourClassification</code> component.
<pre>
getChildBehaviour(state)
/childBehaviourClassification/getChildBehaviour:o

getChildMotivation(degree_of_engagement)
/childBehaviourClassification/getChildMotivation:o

getChildPerformance(degree_of_performance)
/childBehaviourClassification/getChildPerformance:o
</pre>
=== The <code>cognitiveControl</code> Component ===
The following are the primitives and associated input ports in the <code>cognitiveControl</code> component.
<pre>
grip()
/cognitiveControl/grip:i

moveHand(handDescriptor, x, y, z, roll)
/cognitiveControl/moveHand:i

moveHead(x, y, z)
/cognitiveControl/moveHead:i

moveSequence(sequenceDescriptor)
/cognitiveControl/moveSequence:i

moveTorso(x, y, z)
/cognitiveControl/moveTorso:i

release()
/cognitiveControl/release:i

say(text, tone)
/cognitiveControl/say:i
</pre>
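An action primitive is invoked by writing its arguments to the corresponding input port. The sketch below sends a <code>say(text, tone)</code> command; the client port name and argument values are illustrative.

<pre>
// Sketch: invoking an action primitive by writing its arguments to the
// corresponding cognitiveControl input port.
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Bottle.h>

int main() {
    yarp::os::Network yarp;

    yarp::os::BufferedPort<yarp::os::Bottle> out;
    out.open("/client/say:o");
    yarp::os::Network::connect("/client/say:o", "/cognitiveControl/say:i");

    // say(text, tone): pack both arguments into one bottle
    yarp::os::Bottle& b = out.prepare();
    b.clear();
    b.addString("well done");   // text
    b.addDouble(1.0);           // tone (illustrative value)
    out.write();

    return 0;
}
</pre>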
== Inter-connectivity between the three components ==
Any component that needs to access the information exposed on the ports associated with a primitive must have equivalent ports of its own, with the input/output designations reversed, so that the ports can be connected. Thus, for example, one would connect
<code>/sensoryInterpretation/identifyObject:i</code> to <code>/cognitiveControl/identifyObject:o</code>

and

<code>/sensoryInterpretation/identifyObject:o</code> to <code>/cognitiveControl/identifyObject:i</code>.
This would allow <code>cognitiveControl</code> to send the x, y, and z location of the object to be identified to <code>sensoryInterpretation</code> and then to receive the identification number of that object from <code>sensoryInterpretation</code> (see the definition of <code>identifyObject()</code> in Deliverable D1.3).
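The round trip just described might look as follows from the <code>cognitiveControl</code> side; this is a sketch under the assumption that <code>cognitiveControl</code> opens the counterpart port pair itself, and the coordinate values are illustrative.

<pre>
// Sketch: the identifyObject round trip, as seen from cognitiveControl.
// The local port pair mirrors the primitive's ports with the input/output
// designation reversed.
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Bottle.h>
#include <cstdio>

int main() {
    yarp::os::Network yarp;

    yarp::os::BufferedPort<yarp::os::Bottle> query, reply;
    query.open("/cognitiveControl/identifyObject:o");
    reply.open("/cognitiveControl/identifyObject:i");

    yarp::os::Network::connect("/cognitiveControl/identifyObject:o",
                               "/sensoryInterpretation/identifyObject:i");
    yarp::os::Network::connect("/sensoryInterpretation/identifyObject:o",
                               "/cognitiveControl/identifyObject:i");

    // Send the x, y, z location of the object to be identified
    yarp::os::Bottle& b = query.prepare();
    b.clear();
    b.addDouble(0.30);   // x
    b.addDouble(0.10);   // y
    b.addDouble(0.75);   // z
    query.write();

    // Receive the identification number of that object
    yarp::os::Bottle* r = reply.read();   // blocking read
    if (r != NULL) {
        std::printf("object_id = %d\n", r->get(0).asInt());
    }
    return 0;
}
</pre>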
Regarding the connectivity between the three components, the following principles apply (a wiring sketch follows the list).
* each <code>sensoryInterpretation</code> output port is connected to the counterpart input port in the <code>cognitiveControl</code> and <code>childBehaviourClassification</code> components;
* each <code>sensoryInterpretation</code> input port is connected to the counterpart output port in the <code>cognitiveControl</code> component (but not the <code>childBehaviourClassification</code> component);
* each <code>childBehaviourClassification</code> output port is connected to the counterpart input port in the <code>cognitiveControl</code> component;
* each <code>cognitiveControl</code> input port is, typically, not connected to any counterpart output port in either the <code>sensoryInterpretation</code> or <code>childBehaviourClassification</code> components, since these ports will typically be used only internally within the components that will constitute <code>cognitiveControl</code> as it is developed.
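The wiring sketch below illustrates the first three principles with one primitive each, using <code>yarp::os::Network::connect</code>. The counterpart port names in the consuming components follow the naming convention described earlier but are otherwise assumptions.

<pre>
// Sketch: top-level connections for one primitive per principle.
#include <yarp/os/Network.h>

int main() {
    yarp::os::Network yarp;

    // sensoryInterpretation output -> counterpart inputs in both consumers
    yarp::os::Network::connect("/sensoryInterpretation/getFaces:o",
                               "/cognitiveControl/getFaces:i");
    yarp::os::Network::connect("/sensoryInterpretation/getFaces:o",
                               "/childBehaviourClassification/getFaces:i");

    // sensoryInterpretation input <- counterpart output in cognitiveControl only
    yarp::os::Network::connect("/cognitiveControl/identifyObject:o",
                               "/sensoryInterpretation/identifyObject:i");

    // childBehaviourClassification output -> counterpart input in cognitiveControl
    yarp::os::Network::connect("/childBehaviourClassification/getChildMotivation:o",
                               "/cognitiveControl/getChildMotivation:i");

    return 0;
}
</pre>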
== Child Behaviour Analysis Primitives ==
=== getChildBehaviour(state) ===
This primitive classifies the child's behaviour on the basis of current percepts, producing an integer output representing the child’s state at that moment.
=== getChildMotivation(degree_of_engagement) ===
This primitive determines the degree of motivation and engagement on the basis of the temporal sequence of child behaviour states, quantifying the extent to which the child is motivated to participate in the tasks with the robot and, in particular, detecting when the child's attention is lost. The <code>degree_of_engagement</code> is a real number.
=== getChildPerformance(degree_of_performance) ===
This primitive determines the degree of performance of the child on the basis of a temporal sequence of child behaviour states, quantifying the child's performance in the therapeutic sessions. The <code>degree_of_performance</code> is a real number.
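A short sketch of reading these outputs, respecting the types defined above (<code>state</code> is an integer, the two degrees are real numbers); the client port names are illustrative.

<pre>
// Sketch: reading the child behaviour analysis outputs with the correct types.
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Bottle.h>
#include <cstdio>

int main() {
    yarp::os::Network yarp;

    yarp::os::BufferedPort<yarp::os::Bottle> state, engagement;
    state.open("/client/getChildBehaviour:i");
    engagement.open("/client/getChildMotivation:i");

    yarp::os::Network::connect("/childBehaviourClassification/getChildBehaviour:o",
                               "/client/getChildBehaviour:i");
    yarp::os::Network::connect("/childBehaviourClassification/getChildMotivation:o",
                               "/client/getChildMotivation:i");

    yarp::os::Bottle* s = state.read();        // blocking read
    yarp::os::Bottle* e = engagement.read();   // blocking read
    if (s != NULL && e != NULL) {
        std::printf("state = %d, degree_of_engagement = %.2f\n",
                    s->get(0).asInt(), e->get(0).asDouble());
    }
    return 0;
}
</pre>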