Posted on : 19-01-2011 | By : Yuri Vashchenko | In : Uncategorized
A modern video analytics system must, depending on business and customer requirements, work in widely varying situations and conditions. A complex, noisy background with many different objects and textures, changing lighting, shadows, low light, weather (for outdoor installations) such as rain, snow, and fog, motion blur, camera movement, camera sensor quality, camera resolution, focus issues, in-camera optimizations, color temperature, and many other factors make developing good object recognition software a challenging, almost impossible task. In addition, the system is usually required to work in real time, which makes the task even more difficult.
So, even with today's high-performance hardware, developers have to find a balance between algorithm quality and speed (performance). Fixing a small quality issue sometimes causes a significant performance degradation.
To keep this under control, consistent unit testing should be performed with every algorithm change. To do this, Rhonda Software uses the unit test approach described below:
- A set of metrics is prepared. These metric definitions describe the “ground rules”, i.e. how actual logs from the system under test are interpreted, and what is considered correct and what is not. For example, the following definition may be used for the people counting metric:
PEOPLE COUNTING
<visitors_number>: total number of visitors in the test frame range = Correct + Missing + Unexpected + False
Real visitors (all visitors, found or not, except False) = Correct + Missing + Unexpected
Correct: log visitors associated with test visitors, even if more than one log visitor is associated with the same test visitor (but only if the test visitor had the Hard detection status between the frames where the previous and next log visitors were correlated).
Missing: a test visitor not found in the log.
Unexpected: a log visitor associated with an already-associated test visitor, with no Hard visitor detection status between these log visitors.
False: a log visitor not associated with any test visitor.
- For each metric, a set of KPIs (key performance indicators) is defined. For example, the following KPIs may be defined for the people counting metric:
# of Total visitors
# of Real visitors
# of False visitors
# of Missing visitors
False visitor rate (percent)
Missing visitor rate (percent)
Counting error rate (percent)
- Some KPIs may have a goal. For example, we may want the Counting error rate to be less than 3%.
- A set of test videos is created. Typically, there are dozens of videos prepared for the project to cover as many different situations/conditions as possible.
- A project-specific marker tool is used to create a special “markup”, i.e. meta information describing the objects in each input video. Usually it is an XML file with the same name as the input video file. A specially trained engineer uses this tool to open a video file, step through selected frames, and mark the objects on them. For instance, for demography detection software, an operator may mark every person found in a frame, specify the coordinates of their faces, and add attributes to each face such as gender, ethnicity, or age category. Some objects may be marked as “hard examples”. This indicates that the object is hard to detect or recognize for various reasons, such as motion blur or partial occlusion by another object. In other words, a hard example is an object that could potentially be detected or recognized by the software, but is unlikely to be, so it is acceptable if the software misses it. Markup files are stored together with the test video files on a dedicated server for easy access.
- The software under test is executed in a special logging mode. This mode tells the software to log everything it detects or recognizes to a special (usually XML) log file. The specially prepared set of video files is given to the software version as input, and a corresponding XML log is created for each input video file.
- Another project-specific tool, the comparer, is then used to compute the KPI values. The tool compares the prepared markup (see the markup step above) with the actual values from the log (see the logging step above) and generates the set of KPI values.
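As an illustration, a markup file for the demography example above might look like the fragment below. The element and attribute names here are hypothetical, since each project defines its own markup schema:

```xml
<!-- Hypothetical markup schema for one annotated frame; real projects
     define their own element and attribute names. -->
<markup video="lobby_cam_01.avi">
  <frame number="120">
    <person id="1" x="312" y="88" width="64" height="64"
            gender="female" age="adult"/>
    <!-- hard="true" marks a "hard example", e.g. due to motion blur -->
    <person id="2" x="520" y="140" width="48" height="48"
            gender="male" age="senior" hard="true"/>
  </frame>
</markup>
```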
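The KPI arithmetic the comparer performs for the people counting metric can be sketched as follows. This is a minimal sketch, not Rhonda's actual tool: the class name and fields are illustrative, and the formula for Counting error rate is an assumption (the relative difference between what the software counted and the number of real visitors), since the post does not define it.

```python
from dataclasses import dataclass


@dataclass
class PeopleCountingResult:
    """Per-video category counts produced by comparing markup with the log."""
    correct: int      # log visitors correctly associated with test visitors
    missing: int      # test visitors not found in the log
    unexpected: int   # re-associations without a Hard status in between
    false_count: int  # log visitors not associated with any test visitor

    @property
    def total_visitors(self) -> int:
        # <visitors_number> = Correct + Missing + Unexpected + False
        return self.correct + self.missing + self.unexpected + self.false_count

    @property
    def real_visitors(self) -> int:
        # Real visitors = Correct + Missing + Unexpected
        return self.correct + self.missing + self.unexpected

    @property
    def false_rate(self) -> float:
        # False visitor rate, in percent of real visitors
        return 100.0 * self.false_count / self.real_visitors

    @property
    def missing_rate(self) -> float:
        # Missing visitor rate, in percent of real visitors
        return 100.0 * self.missing / self.real_visitors

    @property
    def counting_error_rate(self) -> float:
        # Assumed definition: relative difference between the software's
        # count (Correct + Unexpected + False) and the real visitor count.
        counted = self.correct + self.unexpected + self.false_count
        return 100.0 * abs(counted - self.real_visitors) / self.real_visitors
```

For example, a run with 90 correct, 5 missing, 3 unexpected, and 2 false visitors gives 98 real visitors and, under the assumed formula, a counting error rate of about 3.1%, so the 3% goal from the example above would not be met.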
Metrics, KPIs, and markup are created once per project/video. Testing and metrics calculation can then be performed for every release to see how the release changed in terms of quality and speed. This allows quality and performance degradations introduced in a release to be detected and fixed quickly. In addition, with periodic testing, project management can see how the system evolves over time in terms of quality and performance.
While creating the test videos and their markup requires manual work, most of the other unit testing activities can be performed automatically, which keeps quality and performance under control with little effort. All of the above allows us to develop state-of-the-art, highly competitive software.
The video above explains the process of marking up a video file.