
Detect attention, please!

Posted on: 09-11-2009 | By: rhondasw | In: OpenCV


Audience measurement systems are becoming more and more popular nowadays. They are used in active advertising, for gathering statistics, and so on. One of the key features of these smart systems is attention detection. For advertisers, for instance, it is very important to know how much attention a commercial attracts. In this article, I will describe the attention detector module used in our Audience Measurement system.

We started our work by trying to understand what kind of attention we want to detect. On the one hand, it seems very easy to say whether a person is paying attention or not. On the other hand, it is very difficult to formalize what attention actually is. Some articles suggest detecting attention based on eye information. But what if the person is wearing sunglasses? Another criterion is to use nose information: where the nose is pointing. For our business case, nose information is not enough either; moreover, the nose can also be hidden.

That is why our attention detection is based on head pose information. We collected face images which, in our opinion, do and do not show attention. To be honest, most of the images with attention were frontal faces, and vice versa. These two sets define the learning task.

To teach a machine to detect attention, we need a machine learning algorithm. We already have the Viola-Jones one, and if it can distinguish face from non-face, why not use the same approach to distinguish attention from non-attention? We have the learning samples. So, using AdaBoost, we chose 100 Haar-like features; with them, each image is converted into a 100-dimensional vector, which we classified with C4.5. The self-test was very good: 97% accuracy. But when we started testing on real video, we got a bad result: 60% accuracy. The problems began when lighting conditions changed or the face shifted by a few pixels, even though we used normalization like in the OpenCV Viola-Jones algorithm. The point is that face and non-face images are very different, but faces with and without attention are very similar.
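As a rough illustration only (not the actual code from our system), the Python sketch below shows the idea: Haar-like feature values are computed from an integral image, stacked into a feature vector, and fed to a decision tree, with sklearn's DecisionTreeClassifier standing in for C4.5. The rectangle windows, the 24x24 patch size and the toy training data are assumptions made just for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def integral_image(img):
    """Summed-area table, so any rectangle sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle whose top-left corner is (x, y)."""
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect(ii, x, y, w, h):
    """Simple two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

def face_to_vector(face_24x24, features):
    """Convert a normalized 24x24 face patch into a feature vector."""
    ii = integral_image(face_24x24.astype(np.float64))
    return np.array([haar_two_rect(ii, *f) for f in features])

# In the real pipeline these would be the 100 (x, y, w, h) windows picked by AdaBoost;
# here just a toy list of three windows.
features = [(0, 0, 12, 12), (12, 0, 12, 12), (6, 6, 12, 12)]

# Toy "training set": random 24x24 patches stand in for normalized face crops;
# label 1 = attention, 0 = no attention.
patches = np.random.rand(20, 24, 24)
X_train = np.array([face_to_vector(p, features) for p in patches])
y_train = np.random.randint(0, 2, 20)

# DecisionTreeClassifier plays the role of C4.5 here.
clf = DecisionTreeClassifier().fit(X_train, y_train)
```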

Thus, we needed a lighting-invariant method that is not so sensitive to XY-shifting. We developed our own template-matching method. First, using PCA, we obtain templates of the face. With these templates, each face is converted into an N-dimensional vector, which is classified with an SVM. The accuracy of our attention system is about 90%. You can see it working in our Audience Measurement system.
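Again, only as a hedged sketch of the second approach: sklearn's PCA and SVC below stand in for our custom template-matching implementation. The number of components N, the patch size and the toy data are assumptions for the sake of the example; the article does not specify them.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

N_COMPONENTS = 20  # assumed value of N, chosen only for illustration

# faces: flattened, size-normalized face crops; labels: 1 = attention, 0 = no attention.
faces = np.random.rand(200, 24 * 24)      # toy stand-in for the training crops
labels = np.random.randint(0, 2, 200)

# PCA learns the face "templates" (principal components); each face becomes the
# N-dimensional vector of its projections onto those templates, which the SVM classifies.
model = make_pipeline(PCA(n_components=N_COMPONENTS), SVC(kernel="rbf"))
model.fit(faces, labels)

new_face = np.random.rand(1, 24 * 24)      # a detected, normalized face crop
has_attention = bool(model.predict(new_face)[0])
```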
