Vision System

A major aspect of the control of the robot is the vision system, as major strategic decisions will be made based on its feedback. The system will be composed of an off-the-shelf web camera working in tandem with the image processing capabilities of the Microsoft Robotics Studio software, and will consist of three main parts: blob detection, color segmentation, and overall analysis.

The first job of the vision system is blob detection, which will be the main sensory input for the “Look for Balls” process. Blob detection inside Microsoft Robotics Studio is quite advanced, and does not require extensive image processing knowledge to use. The first step in blob detection is calibration.

Calibration of the blob detection system is key to overall performance, and is handled very well by Microsoft Robotics Studio. It includes a built-in tool named “Blob Tracker Calibration”, which allows the user to select the color of the blob from a live video feed. This effectively makes the color selection field-programmable, allowing calibration on-site to avoid issues with lighting and other chaotic environmental conditions. The goal is to have the user set the blob detection color when the robot is initialized, so as to always ensure optimal tracking conditions. This, along with the uniform indoor lighting of the testing environment, ensures stability in blob detection. Testing was carried out on-site using the built-in blob detection, and it proved to be very accurate.
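The calibration idea above can be sketched as follows. This is a hypothetical illustration of on-site color calibration, not the MRDS “Blob Tracker Calibration” implementation: the user picks a pixel from a live frame, and the sampled color plus a tolerance defines the tracked color. The function names `calibrate` and `matches` are assumptions for this sketch.

```python
def calibrate(frame, x, y, tolerance=30):
    """Sample the pixel at (x, y) and return a (target, tolerance) pair.

    `frame` is assumed to be a row-major grid of (r, g, b) tuples.
    """
    target = frame[y][x]
    return target, tolerance

def matches(pixel, target, tolerance):
    """True if every channel of `pixel` is within `tolerance` of `target`."""
    return all(abs(p - t) <= tolerance for p, t in zip(pixel, target))

# Example: a 2x2 frame; the user clicks the orange ball pixel at (0, 0).
frame = [[(250, 120, 30), (10, 10, 10)],
         [(245, 125, 28), (200, 200, 200)]]
target, tol = calibrate(frame, 0, 0)
print(matches((245, 125, 28), target, tol))  # similar orange -> True
print(matches((10, 10, 10), target, tol))    # dark background -> False
```

Because the target color is sampled at runtime rather than hard-coded, recalibrating under new lighting is just a matter of clicking a ball in the live feed again.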

The blob detection algorithm is built for speed. The system performs image analysis on a live feed from a webcam, so analysis speed depends on the speed of the system running the software; testing showed an approximate rate of 20 fps on the hardware selected for this project. The algorithm analyzes the colors in the image and creates a convex hull around the colors that are to be tracked. A video of the blob detection in action appears below.
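The threshold-then-hull step described above can be sketched in a few lines. This is a minimal illustration of the technique, not the MRDS implementation: all pixels close to the calibrated color are collected, then wrapped in a convex hull (Andrew's monotone chain).

```python
def close(pixel, target, tol=30):
    """True if every channel of `pixel` is within `tol` of `target`."""
    return all(abs(p - t) <= tol for p, t in zip(pixel, target))

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def detect_blob(frame, target, tol=30):
    """Return the convex hull of all pixels matching the tracked color."""
    pts = [(x, y)
           for y, row in enumerate(frame)
           for x, pixel in enumerate(row)
           if close(pixel, target, tol)]
    return convex_hull(pts)

# Example: a 4x4 black frame with a small orange "ball" in the middle.
frame = [[(0, 0, 0)] * 4 for _ in range(4)]
for x, y in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    frame[y][x] = (250, 120, 30)
print(detect_blob(frame, (250, 120, 30)))  # [(1, 1), (2, 1), (2, 2), (1, 2)]
```

Scanning every pixel per frame like this is O(width × height), which is why frame rate depends directly on the speed of the host system, as noted above.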

The final stage of the vision system is to decide which blob or single ball to pick up. Color segmentation is the process of segmenting an image into distinct clusters of the same color. This is important to the vision system, as it gives the robot a general idea of how the balls are distributed and lets it decide how to proceed based on analysis of that information.
Color segmentation will be done through Microsoft Robotics Studio’s built-in functions. Much like blob tracking, Microsoft Robotics Studio provides a calibration tool for color segmentation that is also field-programmable. The color segmentation tool then runs on the live webcam feed. It should be noted that color segmentation is far more computationally intensive than blob tracking, and will therefore only be run once a blob of balls is found, to prevent unnecessary stress on the system.

The color segmentation tool returns an array of the segments found on screen, containing the x and y coordinates of the centroid of each segment. Based on this array, analysis can be performed to determine the best plan of attack for picking up the largest number of balls. The first factor in this decision is distance.
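The segment array the text describes can be sketched as below. Here `segments` is assumed to be a list of pixel-coordinate lists, one per detected color cluster; each cluster is reduced to the (x, y) centroid of its pixels, mirroring the array the segmentation tool returns.

```python
def centroids(segments):
    """Return the (x, y) centroid of each segment (a list of pixel coords)."""
    result = []
    for seg in segments:
        n = len(seg)
        cx = sum(x for x, _ in seg) / n
        cy = sum(y for _, y in seg) / n
        result.append((cx, cy))
    return result

segments = [[(0, 0), (2, 0), (0, 2), (2, 2)],   # one cluster of ball pixels
            [(10, 10), (12, 10)]]               # a second, smaller cluster
print(centroids(segments))  # [(1.0, 1.0), (11.0, 10.0)]
```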

Based on the two fixed variables y (the distance from the ground to the camera) and Φ (the angle of the camera from the ground), the Y position of a ball in the image can be analyzed to determine its distance from the robot. Initial calibration will be needed to map the ball's placement in the camera view to an exact distance. It should be noted that, due to the angled camera, the higher the value of h1 (from the diagram), the farther the ball is from the robot; this distance grows rapidly and nonlinearly, as the camera can see far toward the horizon.
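The geometry above can be sketched with a simple pinhole-camera model. This is an illustration under assumed parameters (the vertical field of view, the per-row angle model, and all names here are assumptions, not measured values); real use would rely on the initial calibration the text mentions.

```python
import math

def ground_distance(row, camera_height, tilt_deg, vfov_deg=40.0, image_rows=480):
    """Estimate ground distance (same units as camera_height) for a pixel row.

    Row 0 is the top of the image; larger rows are closer to the robot.
    tilt_deg is the camera's angle below horizontal (Φ in the text).
    """
    deg_per_row = vfov_deg / image_rows
    # Angle below horizontal of the ray through this image row.
    angle_deg = tilt_deg + (row - image_rows / 2) * deg_per_row
    if angle_deg <= 0:
        return float('inf')  # at or above the horizon: ray never hits the ground
    return camera_height / math.tan(math.radians(angle_deg))

# A ball higher in the image (smaller row) is farther away, and the estimate
# grows rapidly as the ray approaches the horizon:
print(ground_distance(400, camera_height=0.5, tilt_deg=30))  # near the robot
print(ground_distance(100, camera_height=0.5, tilt_deg=30))  # much farther
```

The `1/tan` relationship is what makes rows near the horizon map to very large distances, matching the observation above that distance grows rapidly with h1.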
Once the distance to each ball segment is determined, the algorithm will analyze the optimal decision on where to proceed. While a ball's distance from the robot is important, its distance to the next ball is also important and will be weighed into the algorithm. Testing will be carried out to determine the optimal weighting of distance-to-ball versus distance-to-next-ball.
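One way that weighting could look is sketched below: each candidate is scored by its distance from the robot plus a weighted term for the distance to its nearest neighbor, so tight clusters of balls can outrank a lone nearby ball. The weight `w` is the tunable value the testing above would determine; the scoring form and all names are hypothetical.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_target(robot, balls, w=0.5):
    """Pick the ball minimizing distance-to-robot + w * distance-to-nearest-other."""
    def score(ball):
        others = [b for b in balls if b != ball]
        nearest = min(dist(ball, b) for b in others) if others else 0.0
        return dist(robot, ball) + w * nearest
    return min(balls, key=score)

robot = (0.0, 0.0)
balls = [(1.0, 0.0),              # a lone ball close to the robot
         (5.0, 0.0), (5.5, 0.0)]  # a tight pair farther away
print(best_target(robot, balls, w=0.5))  # -> (1.0, 0.0): nearness dominates
print(best_target(robot, balls, w=2.0))  # -> (5.0, 0.0): clustering dominates
```

Sweeping `w` over test runs like this is one concrete way to carry out the weighting experiments described above.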
Finally, the segmentation algorithm will also be used to track the approximate number of balls stored on the robot. Since the robot is aware when it is approaching and picking up a ball, a count can be maintained, allowing the robot to determine when the maximum of 15 balls is reached and it should proceed to ball drop-off.
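The on-board count might be as simple as the sketch below; the class and method names are illustrative, with the pick-up event assumed to come from the approach behavior described above.

```python
class BallCounter:
    """Tracks balls stored on the robot, up to the 15-ball maximum."""
    CAPACITY = 15

    def __init__(self):
        self.count = 0

    def picked_up(self):
        """Called each time the robot detects it has picked up a ball."""
        self.count += 1

    def full(self):
        """True once the robot should proceed to ball drop-off."""
        return self.count >= self.CAPACITY

counter = BallCounter()
for _ in range(15):
    counter.picked_up()
print(counter.full())  # True: time to head to drop-off
```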
