I’ve started coding a mock-up of the algorithm in Java (simply because it’s so easy in Java!) and have implemented the portion where the target colour is chosen. The idea is that during the robot’s startup phase, the person stands directly in front of the robot wearing a plain T-shirt whose colour doesn’t appear in the background. This initial colour is the one the robot will use as part of its blob recognition to track the person.
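To give a feel for that startup step, here’s a minimal Java sketch of how the target colour might be sampled from the centre of the frame. The class and method names, window size, and the use of r/g chromaticity are my own placeholders for illustration, not the actual mock-up code:

```java
import java.awt.image.BufferedImage;

/**
 * Hypothetical sketch of the startup colour-sampling step: average the
 * chromaticity of a small window at the centre of the frame, where the
 * person is assumed to be standing.
 */
public class TargetColourPicker {

    /** Returns the normalised (intensity-free) r/g chromaticity of the target. */
    public static double[] pickTargetColour(BufferedImage frame) {
        int cx = frame.getWidth() / 2;
        int cy = frame.getHeight() / 2;
        int half = 10; // 21x21 sample window (assumed size)
        int x0 = Math.max(0, cx - half), x1 = Math.min(frame.getWidth() - 1, cx + half);
        int y0 = Math.max(0, cy - half), y1 = Math.min(frame.getHeight() - 1, cy + half);
        double rSum = 0, gSum = 0;
        int count = 0;
        for (int y = y0; y <= y1; y++) {
            for (int x = x0; x <= x1; x++) {
                int rgb = frame.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                double sum = r + g + b + 1e-9;  // avoid divide-by-zero on black
                rSum += r / sum;                // dividing by the sum cancels intensity
                gSum += g / sum;
                count++;
            }
        }
        return new double[] { rSum / count, gSum / count };
    }
}
```

Only two of the three normalised channels need to be stored, since they sum to one.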
Currently the code takes in a picture of someone standing centrally in the image. Using our models: Chris, Anthony and me, and my wondrous phone camera skills, the program analyses each pixel in the image systematically and turns the pixel white if there’s a hit, and black if it’s a miss. This is done with some basic statistical comparisons, but not before every colour is normalised so that it completely discards any information about the intensity of the colours. This means that even if the lighting is very dark, the program can still recognise what colour a pixel is. An averaging filter is then applied to the binary image to calculate the position of the person; this discards any isolated pixels in the resultant image, eliminating noise. Below are three of the pictures I’ve used and the binary image produced after applying the program. The intersection of the green lines indicates the centroid of the main blob, i.e. the calculated position of the person.
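Before the pictures, here’s a rough Java sketch of how those two stages might fit together: a per-pixel chromaticity comparison against the target colour, followed by a neighbourhood filter that drops isolated hits before the centroid is computed. The tolerance value and the 3x3 neighbourhood threshold are assumptions of mine, not the thresholds in the actual mock-up:

```java
import java.awt.image.BufferedImage;

/** Hypothetical sketch of the classification, filtering and centroid steps. */
public class BlobTracker {

    /** Mark pixels whose normalised colour is within `tolerance` of the target. */
    public static boolean[][] classify(BufferedImage frame, double[] target, double tolerance) {
        boolean[][] hit = new boolean[frame.getHeight()][frame.getWidth()];
        for (int y = 0; y < frame.getHeight(); y++) {
            for (int x = 0; x < frame.getWidth(); x++) {
                int rgb = frame.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                double sum = r + g + b + 1e-9;
                double dr = r / sum - target[0];
                double dg = g / sum - target[1];
                // "hit" (white) if the intensity-free colour is close enough to the target
                hit[y][x] = Math.sqrt(dr * dr + dg * dg) < tolerance;
            }
        }
        return hit;
    }

    /**
     * Averaging step: a hit pixel survives only if enough of its 3x3
     * neighbourhood is also hit, so isolated noise pixels are discarded.
     * Returns the centroid {x, y} of the surviving pixels, or null if none survive.
     */
    public static int[] filteredCentroid(boolean[][] hit) {
        int h = hit.length, w = hit[0].length;
        long sx = 0, sy = 0, n = 0;
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int neighbours = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if (hit[y + dy][x + dx]) neighbours++;
                if (hit[y][x] && neighbours >= 5) { // assumed density threshold
                    sx += x; sy += y; n++;
                }
            }
        }
        return n == 0 ? null : new int[] { (int) (sx / n), (int) (sy / n) };
    }
}
```

The centroid this returns is what the green lines in the pictures below mark.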