Fig. 1: the guidance visualisation: on the left is the
source image, and on the right is the perspective
warped map of the ground ahead.
Snowtires is a vision-guided robot, so here's a rundown of how the vision system works. The basic idea is really simple: the robot veers away from any objects of a designated colour in front of it. In practice this requires several steps:
- Capture an image of the road ahead (left side of Fig. 1)
- Determine the horizon line in the image (top of green box)
- Determine the perspective transformation from the image to the ground plane.
- Transform the section of the image below the horizon so that the ground appears as though it were viewed orthogonally from above (Fig. 1, right side)
- Filter the transformed image to determine if there are any orange pixels (or whatever colour you're avoiding)
- Draw a selection of possible paths onto the image
- Compute the number of orange pixels that are within a set distance of each curve - this will be the curve's penalty.
- Choose the curve that passes over or near the fewest orange pixels
Next comes the transformation. This is really easy. OpenCV has the function GetPerspectiveTransform, which takes as inputs the corners of the trapezoid in the camera image and the corners of the corresponding rectangle in the top-down map, and returns a 3x3 matrix representing the perspective transform in homogeneous coordinates. Next, you feed the matrix and the image into WarpPerspective, and it deposits the warped image into the map. I draw the trapezoid just to be sure all my math is lining up.
To filter the colour, first convert the map to the HSV (Hue, Saturation, Value) colour space. The hue represents "colour" in the sense of where on the rim of the colour wheel that colour would be (0 is red, 15 is orange, 30 is yellow, and so on), so we can filter out any pixels that are not within a threshold of the hue we want. The saturation represents the colour's intensity: the cones have extremely high saturation, whereas the saturation of the pavement is very low, so filter out any pixels with low saturation. Finally, value represents the grayscale illuminance of the colour; I filter out any pixels with really low value.
That leaves only the bright orange pixels. I create a series of curves, and apply a penalty to each curve dependent on the number of orange pixels within a certain distance (usually the image width/8) of the curve. The curve with the fewest orange pixels within that range "wins", and its curvature is used to determine the steering output. Note that in the figure, due to my ineptitude, curves with high penalties are green and curves with low penalties are red. Sigh.
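The scoring step can be sketched in a few lines of NumPy. The candidate curvature values and the quadratic arc model below are illustrative assumptions, not the robot's actual path generator; only the width/8 distance band comes from the description above:

```python
import numpy as np

def best_curvature(mask, curvatures=(-0.004, -0.002, 0.0, 0.002, 0.004)):
    """Pick the candidate curvature whose arc passes near the fewest
    obstacle pixels. `mask` is the binary top-down map (nonzero = orange).
    """
    h, w = mask.shape
    band = w / 8                       # penalty distance from the text
    ys, xs = np.nonzero(mask)          # obstacle pixel coordinates
    best_k, best_penalty = 0.0, None
    for k in curvatures:
        # Arc starts at the bottom centre of the map; lateral offset
        # grows quadratically with distance ahead (small-angle model).
        d = h - 1 - ys                 # distance ahead of the robot
        curve_x = w / 2 + k * d * d    # arc's x position at each row
        penalty = int(np.count_nonzero(np.abs(xs - curve_x) < band))
        if best_penalty is None or penalty < best_penalty:
            best_k, best_penalty = k, penalty
    return best_k, best_penalty
```

An obstacle block straight ahead penalizes the straight (zero-curvature) arc, so one of the curved candidates wins and the robot steers around it.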