Robot Obstacle Avoidance Using Omnidirectional Vision


In the previous article on obstacle avoidance (see Obstacle Avoidance in the Real World), our robot used simple sonar and infrared sensors to detect the presence of obstacles, then adjusted its heading accordingly. As we saw, it takes quite a few such sensors to handle all the possible obstacle configurations the robot might encounter. And the reaction of the robot to the presence of an obstacle is always fairly simple: just turn left, right or back up.

We will now try a different approach using our omnidirectional vision system. The most obvious advantage of 360-degree vision is that you can see your entire surroundings in a single snapshot. We can think of each pixel in the image as a potential obstacle detector analogous to our sonar and infrared sensors, but now we have thousands of them instead of just five or six. Of course, these pixel values don't tell us the distance to objects the way the active range sensors do, but as we shall see, we can still develop an effective navigation algorithm.

The image below is a typical view through the omnidirectional vision setup. The camera's resolution is set to 320x240 pixels and we can see the reflection of the room in the spherical mirror, as well as the ceiling past the edges of the mirror.

Using the RoboRealm vision program, we can "unwrap" this image using the Polar module, then crop the artifacts to produce a rectangular panoramic image as shown below:




The resulting rectangular image is now 280x86 pixels, and we must remember that the left edge of the image represents the same place in space as the right edge. In other words, the topology of the rectangular strip is that of a loop, with straight ahead at the midpoint and straight behind at the left and right edges.
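
If you would rather do the unwrapping in code than in RoboRealm, OpenCV's warpPolar can produce much the same panorama. The sketch below is only a rough equivalent: the mirror center, radius, crop rows and output size are placeholders I am assuming for a 320x240 camera and would need to be measured for your own rig.

```python
import cv2

def unwrap_panorama(frame, center=(160, 120), radius=110, out_size=(280, 86)):
    """Rough stand-in for RoboRealm's Polar module plus crop: unwrap the
    mirror image around `center` out to `radius` pixels, then crop and
    resize to the panorama size used in this article. All the numbers
    here are placeholders that must be calibrated for your mirror."""
    # warpPolar puts radius along x and angle along y, so rotate afterwards
    # to get angle running horizontally across the panorama.
    polar = cv2.warpPolar(frame, (radius, 360), center, radius,
                          cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
    panorama = cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)
    # Crop the rows showing the camera's own reflection and the region
    # past the mirror's edge (exact rows depend on your setup).
    cropped = panorama[10:-10, :]
    return cv2.resize(cropped, out_size)
```
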

When looking at such an image, it is obvious to our own eyes where the floor is clear and where there are obstacles. But how do we extract that information from the image so our robot can see the same thing? The folks over at RoboRealm have an awesome tutorial on how to do just this. I would highly recommend checking it out at http://www.roborealm.com/tutorial/Obstacle_Avoidance/slide010.php. What follows is simply an elaboration of the methods described in that tutorial, applied to 360-degree images.
One method that works well is based on edge detection. The image below shows the result of applying RoboRealm's Prewitt edge filter to the original image:




Note how objects are now outlined by their edges, whereas the floor is generally a featureless black. In particular, we can see that even the relatively narrow support legs of the chair stand out nicely, something that is often difficult to "see" with infrared or sonar. There is also a false edge beneath the large ball on the left, which is due to the ball's shadow. We can eliminate such artifacts by using the Auto Threshold and Clean modules, which results in the following image:




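For those following along outside of RoboRealm, the same three steps (Prewitt edges, automatic threshold, clean-up of small specks) can be approximated in Python with OpenCV. The kernel and the minimum blob area below are guesses that would need tuning for your own images.

```python
import cv2
import numpy as np

def edge_map(panorama, min_area=20):
    """Approximate the Prewitt + Auto Threshold + Clean chain.
    `min_area` (pixels) is a guess for how small a speck, such as the
    ball's shadow edge, has to be before we throw it away."""
    gray = cv2.cvtColor(panorama, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Prewitt gradient (OpenCV has no built-in Prewitt, so build the kernels).
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
    gx = cv2.filter2D(gray, -1, kx)
    gy = cv2.filter2D(gray, -1, kx.T)
    mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

    # Automatic (Otsu) threshold to a binary edge map.
    _, edges = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # "Clean": drop connected components smaller than min_area pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    clean = np.zeros_like(edges)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            clean[labels == i] = 255
    return clean
```
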
Now here comes the key step. RoboRealm has a module called Side Fill that can be applied to any of the four edges of the image. In this case we will apply it to the bottom edge. The Side Fill module paints white pixels upward from the bottom edge until it runs into a white pixel already in the image. Because we are working with a thresholded edge map, this coincides with the first potential obstacle in that vertical slice of the image. The picture below shows the result:



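The Side Fill module is specific to RoboRealm, but the bottom-edge version is easy to reproduce yourself. Here is a sketch that simply walks each column from the bottom up to the first edge pixel:

```python
import numpy as np

def side_fill_bottom(edges):
    """Paint white upward from the bottom of each column until the first
    white (edge) pixel is hit, mimicking RoboRealm's bottom Side Fill.
    `edges` is the binary edge map; the result is the floor map."""
    h, w = edges.shape
    floor = np.zeros_like(edges)
    for x in range(w):
        hits = np.flatnonzero(edges[:, x] > 0)
        # Row of the lowest edge pixel in this column; if the column is
        # completely clear, fill it all the way to the top.
        start = hits[-1] + 1 if hits.size else 0
        floor[start:, x] = 255
    return floor
```
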
To eliminate the narrow spikes, we use the Erode module, then smooth the edges with the Smooth Hull module, resulting in our final image:



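There is no exact OpenCV counterpart to the Smooth Hull module, but an erosion followed by a morphological closing gives a broadly similar effect of knocking out narrow spikes and rounding off the outline. The kernel sizes here are guesses:

```python
import cv2
import numpy as np

def tidy_floor_map(floor):
    """Roughly mirror the Erode + Smooth Hull step: erode to remove
    narrow vertical spikes, then close to smooth the remaining outline.
    Kernel sizes are placeholders and need tuning for your image scale."""
    eroded = cv2.erode(floor, np.ones((3, 7), np.uint8))
    return cv2.morphologyEx(eroded, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
```
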
This image gives us a rough map of the clear areas on the floor. This becomes clearer if we superimpose it on our original image as follows:



Since the image is panoramic, each point along the horizontal axis represents a possible direction of travel for the robot, with straight ahead at the midpoint and straight back corresponding to the left and right edges. In this case, we can see that the safest direction to go is about 90 degrees off to the left.
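In other words, converting a panoramic column to a heading is just a linear mapping. Here is a quick sketch; the sign convention (negative meaning "to the left") is simply a choice I am assuming here:

```python
def column_to_heading(x, image_width=280):
    """Convert a panoramic column to a heading in degrees: 0 at the image
    midpoint (straight ahead), +/-180 at the edges (straight behind).
    Negative is taken to mean "to the left"; flip the sign if your drive
    code uses the opposite convention."""
    return (x - image_width / 2.0) * 360.0 / image_width

# e.g. column_to_heading(70) -> -90.0, i.e. about 90 degrees to the left
```
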
But how can we get a more precise heading from this image? In the original RoboRealm tutorial, the horizontal coordinate of the highest peak is used. However, in this case that would correspond to the narrow peak on the right, which is too narrow for our robot to pass through. Instead, we want a gap that is both deep and wide. Unfortunately, there isn't an easy way to get this information directly from RoboRealm. We need a way to label the points along the boundary of our blob, and then analyze these points to find the best gap. RoboRealm does help us get started with its Harris Corners module. This module looks for "corner" points in an image and can also be used to trace the boundary of our floor blob. Applying it to our floor map image yields the following set of Harris points:



The Harris Corners are represented by the green points in the image above and correspond to those points where the contour has a noticeable "kink". Some of these kinks are rather subtle, but we don't care so much about the kinks themselves as about the fact that the collection of points nicely traces the contour of our floor map. RoboRealm can return the coordinates of these points as an array to our controlling program, where we can analyze them further. The next picture shows the Harris points superimposed on our original image:



This image illustrates the power of visual obstacle avoidance over our previous approach using sonar and infrared sensors. From a single snapshot (or frame of a video) we are able to get a 360-degree view of our surroundings with obstacles nicely marked by Harris Corner points. And even though we don't know the distances to these points, we can usually assume that points higher up in the image are further away.
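If you are reproducing the pipeline outside RoboRealm, OpenCV's Harris-based corner detector can stand in for the Harris Corners module and give you the same kind of point array. The quality and spacing parameters below are rough guesses to be tuned:

```python
import cv2

def harris_points(floor_map, max_points=100):
    """Return corner points along the floor-map boundary, sorted left to
    right, playing the role of the array RoboRealm hands back to the
    controlling program. Parameter values are guesses."""
    corners = cv2.goodFeaturesToTrack(floor_map, maxCorners=max_points,
                                      qualityLevel=0.01, minDistance=5,
                                      useHarrisDetector=True, k=0.04)
    if corners is None:
        return []
    pts = [tuple(p) for p in corners.reshape(-1, 2)]   # (x, y) pixel coords
    return sorted(pts, key=lambda p: p[0])             # left to right by x
```
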
The final step involves using the array of Harris Corners to determine the best direction for our robot to move. The procedure described here looks for a space that is wide enough for our robot to pass and deep enough to allow for some significant forward progress. For example, the big space on the left of the picture would be a good candidate. The smaller space on the right would also allow some movement but we would quickly run up against some obstacles. To assess these spaces we proceed as follows.
Start at the left edge of the picture and move toward the right, checking each Harris Corner as we go. Set the left boundary of our candidate gap to our starting point. If the vertical coordinate of the current Harris Corner (measured up from the bottom of the image, so higher means deeper) is greater than some threshold, then continue on to the next point. If it falls below the threshold, assume we have found the right boundary of the current gap, store the width and average depth of that gap, and start again moving toward the right.
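Here is that search sketched in Python. It departs from the description above in one small way: a gap is opened at the first sufficiently deep corner rather than at the picture's left edge, which avoids recording empty gaps. The depth threshold is a placeholder value.

```python
def find_gaps(points, image_height=86, depth_threshold=30):
    """Scan Harris corners left to right and collect gaps that stay deeper
    than `depth_threshold`. `points` are (x, y) pixels with y measured
    down from the top, so depth = image_height - y. Returns a list of
    (left_x, right_x, width, mean_depth) tuples."""
    gaps, left_x, depths = [], None, []
    for x, y in points:
        depth = image_height - y
        if depth >= depth_threshold:          # still inside a deep region
            if left_x is None:
                left_x = x
            depths.append(depth)
        elif left_x is not None:              # shallow corner closes the gap
            gaps.append((left_x, x, x - left_x, sum(depths) / len(depths)))
            left_x, depths = None, []
    if left_x is not None:                    # gap running to the right edge
        x = points[-1][0]
        gaps.append((left_x, x, x - left_x, sum(depths) / len(depths)))
    return gaps
```

One reasonable way to pick a winner is then to discard gaps narrower than the robot's own width expressed in panorama pixels and take the deepest of the rest.
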
Applying this algorithm to the image above yields two gaps. The one on the left has its mid point at 95 degrees to the left, a width of 123 pixels, and an average depth of 48 pixels. The gap on the right is located at 114 degrees right, is 86 pixels wide and 28 pixels deep. In this case, the clear winner is the gap on the left. Our control algorithm rotates the robot 95 degrees left, then heads straight. As the robot moves across the floor, the analysis is repeated for each frame of the video at a rate of about 10 frames per second. If a set of Harris Corners directly ahead appears below our cutoff threshold toward the bottom of the frame, we make a course adjustment toward the middle of a new gap.
Even when the path ahead is clear, the apparent placement of obstacles and Harris points will shift as our robot moves forward. We therefore also program the robot to adjust its heading toward the middle of the current gap as it moves, so that it adapts to the changing shape of the gap from one position to the next.
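Putting the pieces together, the per-frame control loop might look something like the following. The drive() function, the robot-width constant and the steering gain are entirely hypothetical stand-ins for whatever motor interface and tuning your own robot uses, and the helper functions are the sketches from earlier in this article.

```python
import cv2

ROBOT_WIDTH_PX = 60            # hypothetical robot width in panorama pixels
PANORAMA_W, PANORAMA_H = 280, 86

def drive(forward, turn):
    """Placeholder: send speed commands to your own motor controller."""
    pass

cap = cv2.VideoCapture(0)                    # the omnidirectional camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Assumes the sketches above (unwrap_panorama, edge_map, side_fill_bottom,
    # tidy_floor_map, harris_points, find_gaps, column_to_heading) are in scope.
    panorama = unwrap_panorama(frame)
    floor = tidy_floor_map(side_fill_bottom(edge_map(panorama)))
    gaps = [g for g in find_gaps(harris_points(floor), PANORAMA_H)
            if g[2] >= ROBOT_WIDTH_PX]
    if not gaps:
        drive(0.0, 0.3)                      # nothing passable: rotate in place
        continue
    left_x, right_x, _, _ = max(gaps, key=lambda g: g[3])   # deepest wide gap
    heading = column_to_heading((left_x + right_x) / 2.0, PANORAMA_W)
    drive(0.2, -0.01 * heading)              # ease toward the middle of the gap
cap.release()
```
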
The following series of video clips demonstrates this algorithm in action. All of the robot's sonar and infrared sensors are turned off so that only vision is used for navigation. Let's see how well it does when confronted with a floor cluttered with various objects:
Notice how well the robot avoids even the thin chair legs, which are often very difficult to detect with fixed sonar and infrared sensors. (A panning range sensor is much better in this regard.)
The next video shows the view from the robot just after the edge filter is applied. Here you can see how nicely objects like chair legs stand out in the image:
The last video shows the normal view from the robot, including an overlay of the Harris points indicating the positions of obstacles:
In conclusion, we have seen that a simple homemade omnidirectional vision system can be used to highlight obstacles in the path of the robot. The robot can then take evasive action toward the spaces or gaps between obstacles. No other sensors are required, although combining the visual system with sonar and infrared sensors would yield an even more robust navigation algorithm.
Hardware List

In case some of you are interested in the hardware used in the robot featured in this tutorial, here is a list:
