January 18, 2007
We took the lawnmowers out for a run a couple of days ago to collect some test data for analysis.
We attached a camera (a Panasonic NV-GS11) to each of the lawnmowers using a ‘magic arm’ and then drove them around the patch of grass just outside the IMC (next to DCS).
At first glance the images are interesting: there’s far more motion blur than I had anticipated, though this is largely down to the NV-GS11 being a consumer camera rather than one designed for machine vision. We eventually intend to use a proper machine vision camera, which will give us control over shutter speeds, but deciding what we need in terms of performance requires some prerequisite work.
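As a back-of-the-envelope illustration of the shutter-speed problem, the blur in pixels is just ground speed times exposure time divided by ground resolution. A minimal sketch, with all three figures assumed rather than measured from our rig:

```python
# Back-of-the-envelope motion blur estimate. All three figures below are
# assumptions for illustration, not measurements from our setup.
speed = 1.0          # assumed mower ground speed, m/s
exposure = 1.0 / 50  # assumed shutter time for a PAL consumer camcorder, s
ground_res = 0.002   # assumed ground footprint of one pixel, m/pixel

blur_px = speed * exposure / ground_res
print(f"Estimated motion blur: {blur_px:.1f} pixels")  # 10.0 at these figures
```

Ten pixels of smear is more than enough to wreck feature tracking, which is why shutter control matters.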
For an idea of the images: [sample frames omitted]
I’m hoping to adapt an implementation of the Kanade-Lucas-Tomasi feature tracker to track features between frames, which should allow the extraction of movement data.
If anyone’s looking to play around with this kind of stuff, there’s a great public-domain KLT implementation at http://www.ces.clemson.edu/~stb/klt/. You essentially feed in an image, it selects N features to track, and then it lets you find out their movement frame-to-frame. Quite useful.
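The Clemson library is C; as a rough Python illustration of the same select-then-track workflow, here’s a minimal sketch using OpenCV’s pyramidal Lucas-Kanade tracker instead (the frame filenames are hypothetical):

```python
import cv2

# Select N good features in one frame, then track them into the next;
# the same workflow the Clemson KLT library offers, here via OpenCV's
# pyramidal Lucas-Kanade implementation.
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filenames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Pick up to 100 corner-like features worth tracking.
features = cv2.goodFeaturesToTrack(prev, maxCorners=100,
                                   qualityLevel=0.01, minDistance=10)

# Track each feature into the next frame; status flags mark lost features.
tracked, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, features, None)

# Frame-to-frame displacements of the features that survived.
ok = status.flatten() == 1
displacements = (tracked[ok] - features[ok]).reshape(-1, 2)
print("Mean motion (dx, dy):", displacements.mean(axis=0))
```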
January 04, 2007
Seems like my plan to blog my PhD progress stumbled a bit. Here’s an attempt to restart it.
We’ve now got two mowers in the project, both of which we hope to turn into autonomous robots.
The first is the Ransomes Spider, a remote-controlled mower:
The second, and the newest addition, is the Jacobsen E-Plex II, a ride-on mower.
We’re hoping to mount Sick PLS devices on them. These are proximity laser scanners: you hook up to them via RS232 (serial) and get two sets of 180 values every second, one value per degree across their 180° field of view, each representing the distance at that angle in their 2D scanning plane.
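Getting at the data should just be a matter of opening the serial port and parsing the scanner’s telegrams. A minimal pyserial sketch; the port name, baud rate, and framing below are placeholders to be checked against the Sick documentation:

```python
import serial  # pyserial

# Pull raw bytes off the PLS's RS232 link. The port name and baud rate are
# placeholder values; the real settings and telegram layout need to be
# checked against the Sick documentation.
with serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1.0) as port:
    raw = port.read(512)
    print(f"received {len(raw)} bytes")
    # Parsing would then mean: find the telegram start marker, read the
    # length field, and unpack the 180 distance values from the payload.
```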
Well, that’s all for now. My first task this week is to familiarise myself with the E-Plex II and then figure out how to mount the Sick PLS scanners.
November 07, 2006
I guess my promise of writing things up on the 1st of November didn’t really materialise, but I thought I’d take a stab at it now.
My PhD’s in the Warwick Automation Research group; we’re on the second floor of the International Manufacturing Centre, overlooking Computer Science and Maths.
One of our projects, and the area I’m involved in, is the design of autonomous robots.
At the moment we’ve got one of these, the remote-controlled Ransomes Spider:
and we’re getting a ride-on, purely electric lawnmower (the E-Plex II) courtesy of Ransomes in a fortnight’s time.
The aim is to turn both into autonomous robot platforms that can then be adapted to perform specific tasks like keeping a golf course cut and helping farmers with pasture management.
To start with, we’ve got four Sick PLS sensors (though one’s refusing to work properly):
They are proximity laser scanners. Nifty devices, though neither small nor light. They scan the 180 degrees in front of them and return sets of 180 values (one for each degree) twice a second over RS232, each value representing the distance at that angle. They only scan in a single 2D plane, but that gives enough information for a reasonable last-ditch safety system preventing the robot from colliding with obstacles.
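Since each scan is just 180 polar samples, turning one into a stop/go decision only takes a few lines. A sketch, assuming the distances arrive as a plain list of metres, one per degree; the corridor width and stop range are invented figures:

```python
import math

def scan_to_points(distances_m):
    """Convert one 180-value scan (one reading per degree, 0..179) into
    x-y points in the scanner's plane, assuming 0 degrees is to the
    scanner's right and 90 degrees is straight ahead."""
    return [(r * math.cos(math.radians(deg)), r * math.sin(math.radians(deg)))
            for deg, r in enumerate(distances_m)]

def obstacle_too_close(distances_m, half_width_m=0.6, stop_range_m=1.5):
    """Last-ditch safety check: is anything inside a corridor directly
    ahead of the robot? The corridor dimensions are made-up figures."""
    return any(abs(x) <= half_width_m and 0.0 < y <= stop_range_m
               for x, y in scan_to_points(distances_m))
```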
We are toying with the idea of using a single Sick PLS sensor to map out the ground ahead of the robot. This could be used to aid the eventual vision system.
The first problem I’m focussing on is that of getting accurate positioning information for the robot. While GPS can give us a rough position (assuming we can see enough satellites), we can’t rely on having differential GPS correction signals available, so we need to incorporate other methods.
One possible method is to attach magnetic strips and sensors to the Spider’s wheels and drive chain to get odometry information that can ‘fill in the gaps’ between GPS fixes (or be fused with the GPS readings via a Kalman filter) to better calculate the robot’s position.
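To make the ‘fill in the gaps’ idea concrete, here’s a toy one-dimensional Kalman filter sketch: odometry drives the predict step and an occasional noisy GPS fix drives the update step. All the noise figures are invented for illustration:

```python
import random

# Toy 1-D position filter: dead-reckon with odometry, correct with GPS.
x, P = 0.0, 1.0    # position estimate (m) and its variance
Q, R = 0.05, 4.0   # invented odometry noise per step and GPS noise

def predict(x, P, odom_delta):
    """Dead-reckon forward using wheel odometry; uncertainty grows."""
    return x + odom_delta, P + Q

def update(x, P, gps_pos):
    """Blend in a GPS fix; the gain K weighs GPS against dead reckoning."""
    K = P / (P + R)
    return x + K * (gps_pos - x), (1 - K) * P

for step in range(1, 11):
    x, P = predict(x, P, odom_delta=0.5)     # 0.5 m per step from odometry
    if step % 5 == 0:                        # a GPS fix every fifth step
        x, P = update(x, P, gps_pos=0.5 * step + random.gauss(0, 2))

print(f"position ~ {x:.2f} m, variance {P:.2f}")
```

The real thing would be at least two-dimensional and include heading, but the structure is the same.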
Another possibility is borrowing an idea from the world of computer mice. Modern optical mice are pretty neat little devices; Wikipedia have a little section on optical mice. Essentially, they contain tiny cameras (normally only 15-20 pixels square) running at very high frame-rates (1000fps+) and compare successive pictures from the camera to figure out how far, and in which direction, you’ve moved.
I’m currently investigating doing just that, but with a high-speed camera mounted perpendicular to the ground, off to the side of the robot. This has the advantage that it, as a sensor, could be moved from one platform to the other with ease. At the moment, I’m looking into methods such as optical flow in order to work out movement.
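One candidate method is phase correlation, which recovers the translation between two frames; essentially the optical-mouse trick at a larger scale. A minimal numpy sketch, assuming the inter-frame motion is close to a pure translation:

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the (dy, dx) pixel translation from prev to curr via
    phase correlation: the same trick an optical mouse plays, just on
    bigger images. Assumes near-pure translation between frames."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-9          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative (wrapped) shifts.
    if dy > prev.shape[0] // 2:
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return dy, dx
```

Multiplying the recovered pixel shift by the camera’s ground resolution would then give the distance moved between frames.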
I went to Abbey Fields this afternoon to take lots of photos of grass. That got some odd looks.
I’ll let people know how experiments are going at the end of the week.
October 22, 2006
Despite trying to tweak every setting in BlogBuilder to avoid doing so, I’ve actually managed to start writing my first blog entry this year.
I’ve deleted most of the older posts relating to my undergraduate course, as they were a rather jumbled bunch and didn’t make much sense, and thought I’d start afresh.
First off, introductions. My name is Sadiq Jaffer; I studied for an MEng in Computer Systems Engineering at Warwick between 2002 and 2006 and am now starting a PhD under Dr. Ken Young in the Warwick Manufacturing Group.
The PhD is focused on agricultural automation through the use of autonomous robots, which in itself is a fairly large area, and I imagine it’s going to be a little while before I can pin down a small area for my thesis. Given my background, I’d imagine it will involve vision.
I’ve decided to start blogging about my thoughts and progress throughout my PhD in the vague hope that it will keep things on track and on schedule.
In my spare time, I run an online business selling geeky t-shirts, develop computer games and breed tropical fish.