Long-term memory through image processing


The field of robotics aims at the development of cognitive robots
that possess a certain level of artificial intelligence. Scientists have been working
to build real-time robots that can perform different tasks and solve daily-life
problems efficiently. Here, a cognitive system refers to a system that
can acquire knowledge about its environment and perform rational
actions based on the collected information in real time. It regulates the
mental activities of the robot in order to ensure the smooth execution and
synchronization of cognitive cycles at run time. The robot localizes itself in the environment
and generates a map leading to its destination by recognizing and avoiding
obstacles along the path in real time. It stores those obstacles in long-term
memory through image-processing techniques for later use.

As these robots assist with our daily
housekeeping jobs, they are capable of cleaning and moving objects to different
places by performing various actions in real time. To simplify these jobs and
make them accessible, there are numerous middleware frameworks such as ROS,
MIRA and YARP. A cognitive architecture for an intelligent robotic system is required
to support fast perception, recognition and understanding of complex contexts, and
learning of unknown behaviors in real time (Burghart
& Mikut).
Every cognitive robotic architecture includes the Simultaneous Localization and
Mapping (SLAM) approach as a basic building block, which enables a robot to
localize itself in an unfamiliar environment and create a map so it can move accordingly,
avoiding obstacles to reach its selected landmark. This ability is what makes
SLAM so attractive for implementation in a cognitive architecture.



The simultaneous localization and
mapping approach was developed by Hugh Durrant-Whyte and John J. Leonard (Leonard, 1991), building on
work done by Smith, Self and Cheeseman (Smith, 1985). This technique
localizes a robot in an unknown environment and generates a map that helps the
robot navigate from the starting point to the destination, avoiding
obstacles in its path in real time. SLAM includes multiple parts:

Extraction of landmark

Data association with a particular landmark

State estimation with respect to that landmark

Update state

Update landmark 
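The steps listed above can be sketched as a single loop. The following is a minimal, hypothetical sketch in a toy one-dimensional world; the data structures and the nearest-match association rule are illustrative assumptions, not a real SLAM library:

```python
def slam_cycle(pose, landmark_map, odometry, observations, match_radius=0.5):
    """One SLAM cycle in a toy 1-D world: predict the pose from odometry,
    extract and associate landmarks, correct the pose, and store new
    landmarks for later re-observation."""
    pose += odometry                         # state estimation (dead reckoning)
    corrections, new_landmarks = [], []
    for obs_range in observations:           # landmark extraction: range readings
        obs_pos = pose + obs_range           # where this landmark appears to be
        # Data association: match against a previously mapped landmark.
        match = next((m for m in landmark_map
                      if abs(m - obs_pos) <= match_radius), None)
        if match is None:
            new_landmarks.append(obs_pos)    # update landmarks: remember it
        else:
            # Each re-observed landmark implies a corrected robot position.
            corrections.append(match - obs_range)
    if corrections:                          # update state from re-observations
        pose = sum(corrections) / len(corrections)
    landmark_map.extend(new_landmarks)
    return pose, landmark_map

# The robot starts at 0 with one mapped landmark at 10.0. Odometry over-reports
# the true motion of 2.0 as 2.2, but a range reading of 8.0 to the landmark
# lets the cycle correct the pose back to 2.0.
pose, landmark_map = slam_cycle(0.0, [10.0], odometry=2.2, observations=[8.0])
```

A real implementation would track uncertainty as well; that is the role the EKF plays in the process described below.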

The main purpose of SLAM is to
update the position of the robot with respect to its environment. Since the
odometry of the robot (which gives the robot's position) can be quite erroneous,
laser scans can be used to correct the position of the robot
by extracting features from the environment and re-observing them as the robot navigates.
The Extended Kalman Filter (EKF) is an important part of the SLAM process. It is
responsible for updating the estimate while the robot is changing its position. After
each position change, the next destination is commonly called a landmark. A
record is kept of an estimate of the uncertainty in the positions of the robot and of the
landmarks present in the environment. An outline of the SLAM process is given below.


[Figure: Outline of the SLAM process]

The SLAM process stated above
starts when the odometry changes: as the robot moves to a new
position, the uncertainty about the new position is updated in the EKF using
the odometry update, and then landmarks are extracted from the surroundings at the
new position. The robot then tries to associate the new landmarks with previously
observed landmarks, and its position is updated in the EKF using the re-observed
landmarks. Landmarks that have not been seen before are added to the EKF as new
observations in order to use them later (Zunino, February 2002).

An example of the SLAM process stated above:

[Figure: panels (a), (b), (c)]

Figure (a) demonstrates the
process of selecting landmarks. The robot is represented by the triangle, the stars
represent landmarks, and the lightning symbols represent the robot sensing the
locations of the landmarks. Figure (b) illustrates the movement of the robot
and the calculation of the distance from its previous location after selecting landmarks;
the robot's odometry measures the distance it has moved from its previous
position. In figure (c), the robot again measures the locations of the landmarks with
its sensors from the new position, but they do not match: the actual and calculated
distances differ.


[Figure: panels (d), (e)]

In figure (d), the robot uses its
sensors to find out where the landmarks actually are and thereby determine its actual
position (the dotted triangle represents where the robot thinks it is, and the second
triangle is its actual position). Since sensors are never perfect, the robot
cannot know its exact position, but sensor readings are more accurate than
relying on odometry alone. In figure (e), the dotted triangle represents
where the robot thinks it is, the dashed triangle is the position calculated from odometry, and
the triangle pointed to by the arrow is where it actually is (Blas & Riisgaard, 2005). In this way SLAM
localizes the robot in its environment while simultaneously mapping the path to the destination
in real time.
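The contrast drawn above between odometry drift and landmark-based correction can be shown with a toy simulation; the drift rate and landmark position below are made-up illustrative numbers, and the landmark reading is idealized as noise-free:

```python
def simulate(steps, drift_per_step, landmark, true_step=1.0):
    """Compare dead reckoning against landmark-corrected estimation while
    the robot takes `steps` moves of length true_step along a line."""
    true_pos = odo_pos = slam_pos = 0.0
    for _ in range(steps):
        true_pos += true_step
        odo_pos += true_step + drift_per_step   # odometry over-reports motion
        # SLAM-style correction: a range reading to the fixed landmark pins
        # the estimate back to the true position at every step.
        observed_range = landmark - true_pos
        slam_pos = landmark - observed_range
    return true_pos, odo_pos, slam_pos

# After 10 steps, dead reckoning has drifted by a full unit, while the
# landmark-corrected estimate still matches the true position.
true_pos, odo_pos, slam_pos = simulate(10, drift_per_step=0.1, landmark=100.0)
```

The odometry error grows with every step, whereas the landmark-corrected error stays bounded, which is exactly why re-observing landmarks is central to SLAM.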