It's all about finding the pixels that fit the properties you are looking for.
Using OpenCV and Python makes the process a lot easier.
You can download OpenCV for Python here: http://code.google.com/p/ctypes-opencv/
It's open source, as you might have guessed.
Connecting to your camera is as simple as:
# Create a window
cvNamedWindow("MyWindow")
# Connect to the camera
capture = cvCreateCameraCapture(0)
# Grab the current frame from the camera
frame = cvQueryFrame(capture)
# Show the current image in MyWindow
cvShowImage("MyWindow", frame)
# Wait a bit and check if any key has been pressed
key = cvWaitKey(10)
To do some simple color-tracking, create a mask containing all the pixels within a given range of colors:
# Create an 8-bit, 1-channel image with the same size as the frame
color_mask = cvCreateImage(cvGetSize(frame), 8, 1)
# Specify the minimum / maximum colors to look for:
min_color = (180, 20, 200)
max_color = (255, 255, 255)
# Find the pixels within the color-range, and put the output in the color_mask
cvInRangeS(frame, cvScalar(*min_color), cvScalar(*max_color), color_mask)
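If you're wondering what cvInRangeS actually does, it's just a per-pixel range check. Here's a pure-Python sketch of the same test (no OpenCV involved — `in_range` is a made-up helper for illustration):

```python
def in_range(pixel, min_color, max_color):
    """Return 255 if every channel of the pixel lies within the
    inclusive [min, max] range, else 0 -- the same per-pixel test
    cvInRangeS applies across the whole frame."""
    inside = all(lo <= value <= hi
                 for value, lo, hi in zip(pixel, min_color, max_color))
    return 255 if inside else 0

min_color = (180, 20, 200)
max_color = (255, 255, 255)

print(in_range((200, 100, 220), min_color, max_color))  # → 255, all channels in range
print(in_range((100, 100, 220), min_color, max_color))  # → 0, first channel too low
```

The mask ends up white (255) wherever the frame matched the color range, and black (0) everywhere else.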
OpenCV works in the BGR color-space by default. HSV or Lab might be a better choice for color-tracking, since they separate the color itself from its brightness.
To convert the frame to HSV, use:
cvCvtColor(frame, frame, CV_BGR2HSV)
Just remember to convert the color-space back to BGR before displaying the image:
cvCvtColor(frame, frame, CV_HSV2BGR)
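To see why HSV helps, here's a sketch using the standard-library colorsys module instead of OpenCV (the `bgr_to_opencv_hsv` helper is my own for illustration; note that OpenCV stores hue as degrees/2, i.e. 0–179, in 8-bit images):

```python
import colorsys

def bgr_to_opencv_hsv(b, g, r):
    """Convert an 8-bit BGR pixel to OpenCV-style HSV.
    colorsys works on RGB floats in 0..1; OpenCV's 8-bit HSV stores
    hue as 0..179 (degrees / 2) and saturation/value as 0..255."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return int(round(h * 360 / 2)) % 180, int(round(s * 255)), int(round(v * 255))

# A bright and a dark red differ a lot in BGR,
# but share the same hue channel:
print(bgr_to_opencv_hsv(0, 0, 255))  # → (0, 255, 255), pure red
print(bgr_to_opencv_hsv(0, 0, 128))  # → (0, 255, 128), darker red, same hue
```

A hue-based range is therefore much more robust to lighting changes than a raw BGR range.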
Next, split the mask into separate connected parts by finding the contours:
storage = cvCreateMemStorage(0)
c_count, contours = cvFindContours(color_mask, storage, method=CV_CHAIN_APPROX_NONE)
# Go through each contour
for contour in contours.hrange():
    # Do some filtering
From here it's just a matter of filtering out the contours you don't want. Some might be too large or too small, or maybe you only want convex shapes:
# Get the size of the contour
size = abs(cvContourArea(contour))
# Is convex
is_convex = cvCheckContourConvexity(contour)
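Putting the filters together might look like the sketch below. The `keep_contour` helper and its thresholds are made up for illustration; `size` and `is_convex` stand in for what cvContourArea and cvCheckContourConvexity return:

```python
def keep_contour(size, is_convex, min_size=50, max_size=5000):
    """Keep contours whose area falls within [min_size, max_size]
    and which are convex. Thresholds are illustrative -- tune them
    for your camera resolution and the object you're tracking."""
    return min_size <= size <= max_size and is_convex

# (size, is_convex) pairs as they might come out of OpenCV:
candidates = [(10, True), (300, True), (300, False), (9000, True)]
kept = [c for c in candidates if keep_contour(*c)]
print(kept)  # → [(300, True)], only the mid-sized convex contour survives
```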
When you've got the contours you want, find the center-coordinates:
# Find the bounding-box of the contour
bbox = cvBoundingRect(contour, 0)
# Calculate the x and y coordinate of center
x, y = bbox.x+bbox.width*0.5, bbox.y+bbox.height*0.5
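The bounding-box center is quick, but it drifts on asymmetric shapes. A more accurate option is the contour's true centroid; here's a pure-Python sketch of the standard shoelace centroid formula over the contour's polygon points (for polygons this is equivalent to the m10/m00, m01/m00 centroid you'd get from image moments):

```python
def polygon_centroid(points):
    """Centroid of a simple polygon via the shoelace formula.
    points: list of (x, y) vertices, e.g. a contour's points."""
    area2 = 0.0   # twice the signed area
    cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    cx /= 3.0 * area2
    cy /= 3.0 * area2
    return cx, cy

print(polygon_centroid([(0, 0), (4, 0), (4, 4), (0, 4)]))  # → (2.0, 2.0)
```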
Now, that's theory... In the real world you've got lighting, shadows, noise, and so on. But I hope you've got the basic idea of how to do color-tracking by now.
For a more complex tracker, you might want to look at motion segmentation, background statistics, dilate, erode, flood-fill, polygon approximation...
It's all in OpenCV.
Here's a demo video of the color-tracker in action. I'm using Qt for the GUI, and a pickle socket that sends the computed data to Maya in the form of a dictionary.
I use a Maya plane that represents my screen in 3D space, so it's easy to adjust the relationship between screen-space and 3D-space.
The Maya plane can have any number of reference objects that are constrained to the 3D model. The tracked objects will automatically be constrained to a reference object when they get close enough.
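The pickle-socket part can be sketched with nothing but the standard library. The function names and message format below are my own illustration, not the actual code behind the demo; length-prefixing each pickled dict is one simple way to frame messages on a stream socket:

```python
import pickle
import socket
import struct

def send_tracking_data(sock, data):
    """Pickle a dict and send it length-prefixed, so the receiver
    knows where one message ends and the next begins."""
    payload = pickle.dumps(data)
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_tracking_data(sock):
    """Read one length-prefixed pickled message and unpickle it."""
    (length,) = struct.unpack("!I", sock.recv(4))
    payload = b""
    while len(payload) < length:
        payload += sock.recv(length - len(payload))
    return pickle.loads(payload)

# Demo with a local socket pair standing in for the
# tracker -> Maya connection:
a, b = socket.socketpair()
send_tracking_data(a, {"tracker_0": (2.0, 3.5)})
print(recv_tracking_data(b))  # → {'tracker_0': (2.0, 3.5)}
a.close(); b.close()
```

On the Maya side you'd unpickle each dictionary and move the corresponding reference objects; only ever unpickle data from a sender you trust, since pickle can execute arbitrary code.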