The literature on robot navigation is huge, and summarizing it would be difficult, if not impossible, but I thought I'd provide a few examples of papers you can read on robots that utilize ant-like navigational mechanisms.
- Franz, M.O., Schölkopf, B., Mallot, H.A., & Bülthoff, H.H. (1998). Learning view graphs for robot navigation. Autonomous Robots, 5, 111-125.
Abstract: We present a purely vision-based scheme for learning a topological representation of an open environment. The system represents selected places by local views of the surrounding scene, and finds traversable paths between them. The set of recorded views and their connections are combined into a graph model of the environment. To navigate between views connected in the graph, we employ a homing strategy inspired by findings of insect ethology. In robot experiments, we demonstrate that complex visual exploration and navigation tasks can thus be performed without using metric information.
The robot's use of snapshots in this paper is much more complex than that of Cataglyphis, but the principle is roughly the same: it navigates from snapshot to snapshot, matching each stored view against its current view as it goes along, and uses those connections to build a map-like graph of its environment.
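To give a rough, concrete picture of the view-graph idea, here is a toy sketch in Python (my own illustration, not code from the paper; the function and variable names are made up): each node is a stored view, each edge marks a pair of views the robot has homed between, and planning a route is just a graph search followed by view-to-view homing.

```python
from collections import deque

def plan_route(view_graph, start_view, goal_view):
    """Breadth-first search over a topological view graph.

    view_graph: dict mapping a snapshot ID to the list of snapshot IDs
    the robot has learned it can reach from there by visual homing.
    Returns the sequence of snapshots to home between, or None if the
    two views aren't connected. Purely illustrative, not the paper's code.
    """
    came_from = {start_view: None}
    queue = deque([start_view])
    while queue:
        node = queue.popleft()
        if node == goal_view:
            break
        for neighbour in view_graph.get(node, []):
            if neighbour not in came_from:
                came_from[neighbour] = node
                queue.append(neighbour)
    if goal_view not in came_from:
        return None  # no known connection between the two views
    path, node = [], goal_view
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]
```

The point of the sketch is that no coordinates or distances appear anywhere; the "map" is purely topological, which is exactly what the abstract means by navigating "without using metric information."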
- Schmolke, A., & Mallot, H.A. (2002). Polarisation compass for robot navigation. In: D. Polani, J. Kim, & T. Martinetz (eds.), Fifth German Workshop on Artificial Life, pp. 163-167. Akad. Verl.-Ges. Aka, Berlin.
Abstract: Animals often use a combination of path integration and compass information to keep track of their actual position. Path integration, i.e. the continuous updating of distance and direction of the actual position relative to a start position based on intrinsic motion estimates, is easily obtained. However, errors in proprioceptive measuring accumulate. In contrast, the information about the compass direction can provide the heading estimation with a constant small error. In this paper, the performance of the path integration without external reference is compared with the performance using a polarization compass. As proprioceptive estimate, wheel revolutions were used in both cases. The experiments were carried out using a Khepera miniature robot.
This robot, like Cataglyphis, uses an E-vector compass to determine its heading. The paper compares this method to one that uses only proprioceptive information to determine heading, and finds that the E-vector compass fares better in many ways.
- And finally, the most ant-like robot of them all, because it was specifically designed to navigate like Cataglyphis: Lambrinos, D., Möller, R., Labhart, T., Pfeifer, R., & Wehner, R. (2000). A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems, 30, 39-64.
Abstract: The ability to navigate in a complex environment is crucial for both animals and robots. Many animals use a combination of different strategies to return to significant locations in their environment. For example, the desert ant Cataglyphis is able to explore its desert habitat for hundreds of meters while foraging and return back to its nest precisely and on a straight line. The three main strategies that Cataglyphis is using to accomplish this task are path integration, visual piloting and systematic search. In this study, we use a synthetic methodology to gain additional insights into the navigation behavior of Cataglyphis. Inspired by the insect's navigation system we have developed mechanisms for path integration and visual piloting that were successfully employed on the mobile robot Sahabot 2. On the one hand, the results obtained from these experiments provide support for the underlying biological models. On the other hand, by taking the parsimonious navigation strategies of insects as a guideline, computationally cheap navigation methods for mobile robots are derived from the insights gained in the experiments.
Sahabot 2 was designed to navigate like Cataglyphis (notice the last author is Wehner, whom I cited a lot in the ant post; he's doing the most and best work on Cataglyphis navigation these days). It uses an E-vector compass (POL compass, in the paper), and feeds it into a path-integration module that works like this:
The directional information obtained from the POL-compass was used in a path-integration mechanism to keep an estimate of the robot's position over time... The position of the robot was calculated as follows:
x(t + Δt) = x(t) + cos(θ(t)) v(t) Δt

y(t + Δt) = y(t) + sin(θ(t)) v(t) Δt

Where x(t + Δt), y(t + Δt), x(t) and y(t) are the x and y coordinates of the robot at time t + Δt and t, respectively, with Δt denoting the time step. The velocity of the robot v(t) was estimated from the wheel encoders of the robot. The wheel encoders of the Sahabot 2 are mounted on the axes of the two motors that drive the two front wheels (see figure 5) and give 6300 pulses per wheel revolution, which corresponds to 13 pulses per mm of distance traveled. θ(t) is the estimated orientation of the robot at time t, obtained... from the polarized-light compass [E-vector compass - Chris]... (p. 12 of the pdf file linked above)
In English, that means the path-integration module in Sahabot 2 worked by continuously combining the heading information from the E-vector compass with the distance traveled, which it computed from the "wheel encoders," devices that essentially count the number of times the wheels go around. So, like Cataglyphis, its path-integration system works by combining E-vector compass information with an odometer that, in ants, uses the number of steps, and in the robot, uses the number of wheel revolutions (since the robot doesn't have legs).
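If you want to see what that update looks like in code, here's a minimal Python sketch of the same idea. The 13-pulses-per-mm figure is taken from the quote above; everything else (the function name, the fixed time step) is just my illustration, not the Sahabot 2 software.

```python
import math

PULSES_PER_MM = 13.0  # from the quote: 6300 pulses per wheel revolution ~ 13 pulses/mm

def integrate_path(headings, encoder_pulses, dt=1.0):
    """Dead-reckon the robot's (x, y) position in mm.

    headings: compass heading theta(t) in radians at each time step
    (from the E-vector / POL compass); encoder_pulses: wheel-encoder
    pulse counts per time step (the odometer). A sketch of the quoted
    update rule, not the actual Sahabot 2 implementation.
    """
    x, y = 0.0, 0.0
    for theta, pulses in zip(headings, encoder_pulses):
        v = (pulses / PULSES_PER_MM) / dt   # speed estimate v(t) in mm per unit time
        x += math.cos(theta) * v * dt       # x(t + dt) = x(t) + cos(theta(t)) v(t) dt
        y += math.sin(theta) * v * dt       # y(t + dt) = y(t) + sin(theta(t)) v(t) dt
    return x, y
```

The resulting (x, y) is the accumulated home vector's endpoint: to return to the nest, the robot simply steers along the vector from (x, y) back to the origin.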
The robot also uses an image-matching model to navigate by landmarks, much like Cataglyphis. Just like the ants, it takes a snapshot of the landmarks around a location and then attempts to match its current view with that stored snapshot.
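To make "matching the current view against the stored snapshot" a bit more concrete, here is a deliberately simplified Python sketch: treat both views as one-dimensional panoramic intensity arrays and find the rotation that makes them line up best. The real Sahabot 2 uses the more sophisticated snapshot and landmark-vector models described in the paper; the array representation and the sum-of-squared-differences measure here are my own simplifications.

```python
import numpy as np

def best_rotation_match(snapshot, current_view):
    """Return the azimuth shift (in bins) that best aligns the current
    panoramic view with the stored snapshot, plus the residual mismatch.
    Both inputs are 1-D arrays with one intensity value per azimuth bin.
    A toy illustration only, not the algorithm from the paper.
    """
    diffs = [float(np.sum((snapshot - np.roll(current_view, shift)) ** 2))
             for shift in range(len(current_view))]
    best = int(np.argmin(diffs))
    return best, diffs[best]
```

A homing robot can then watch how the residual mismatch changes as it moves and steer in the direction that reduces it, ending up near the spot where the snapshot was originally taken.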