We took the lawnmowers out for a run a couple of days ago to collect some test data for analysis.
We attached a camera (a Panasonic NV-GS11) to each of the lawnmowers using a ‘magic arm’ and then drove them around the patch of grass just outside the IMC (next to DCS).
At first glance the images are interesting: there’s far more motion blur than I had anticipated, though this is largely down to the NV-GS11 being a consumer camera rather than one designed for machine vision. We do intend to eventually use a proper machine vision camera, which will give us control over shutter speeds, but deciding what we need in terms of performance requires some prerequisite work.
For an idea of the images:
I’m hoping to adapt an implementation of the Kanade-Lucas-Tomasi feature tracker to provide tracking between frames which should allow the extraction of movement data.
If anyone’s looking to play around with this kind of stuff, there’s a great public-domain KLT implementation at http://www.ces.clemson.edu/~stb/klt/ . You feed it an image, it selects N features to track, and then it lets you find out their movement from frame to frame. Quite useful.
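To give a feel for what the tracker is doing under the hood, here’s a minimal NumPy sketch of a single Lucas-Kanade tracking step for one feature. This is not the Clemson library’s API (that’s C, with `KLTSelectGoodFeatures` / `KLTTrackFeatures`); the function name, window size, and overall structure here are my own illustrative assumptions. It just solves the standard 2x2 least-squares system over a small window to estimate how far a patch moved between two frames.

```python
import numpy as np

def lucas_kanade_step(prev, curr, x, y, win=7):
    """Estimate the (dx, dy) motion of the feature at column x, row y
    between two grayscale frames, by solving the Lucas-Kanade
    least-squares system over a (2*win+1)^2 window.
    (Illustrative sketch only -- no pyramids, no iteration, and the
    feature must sit well inside the image.)"""
    prev = prev.astype(float)
    curr = curr.astype(float)

    # Spatial gradients of the previous frame (central differences).
    Ix = (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1)) / 2.0
    Iy = (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0)) / 2.0
    # Temporal gradient between the two frames.
    It = curr - prev

    rows = slice(y - win, y + win + 1)
    cols = slice(x - win, x + win + 1)
    ix = Ix[rows, cols].ravel()
    iy = Iy[rows, cols].ravel()
    it = It[rows, cols].ravel()

    # Solve  [sum Ix^2   sum IxIy] [dx]   [sum IxIt]
    #        [sum IxIy   sum Iy^2] [dy] = -[sum IyIt]
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy
```

The full tracker adds image pyramids (to cope with large motions), iterates this step to convergence, and drops features whose window no longer matches well, but the least-squares step above is the core of it. Given our motion-blurred footage, the "drops features" part will probably be doing a lot of work.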