A closer look at anti-malware tests and the
sometimes unreliable nature of the testing process.
Imagine that one of your children is a high school
student due to take university or college entrance exams. You know the
exams will take place at some point during the year, you have an idea of what they may
cover, and you are patiently awaiting notification of the details. Despite
not knowing exactly what is in the exam, your diligent student stays home and
studies hard in preparation.
At the very last moment your student hears,
unofficially, that the exam has been scheduled, but no details are available.
Stressed and frustrated, your student nevertheless manages to sit the exam.
Afterwards, you find out that some
students were sent notifications and details of the exam's content, and that they
were also allowed to take certain equipment with them to assist their
participation. Those same students were then given the opportunity
to see their results and to negotiate the scoring of the questions they got wrong,
improving their final scores.
I suspect you would be upset, maybe angry,
and would likely object to the unfair process that your child endured. Would it
be fair if a university or college then used the exam results to make a
decision on whether to offer your child a place?
To people looking in from outside, the anti-malware
industry and its ability to detect cybercriminals' attempts to harm systems
or render them useless can appear to be a dark art.
That's the very reason why testing the efficacy of products is important: so
that you don't need to be an expert to understand whether a product works well
or not.
However, tests are only as good as the
competence and ethics of the tester, and while most testers behave ethically,
some do not. If a test has a questionable methodology,
includes vendors with a special relationship with the tester,
allows some vendors to optimize their products for the test, or is simply run badly, then the
results are called into question.
The security industry has grappled with this
very issue for some time: the formation of the Anti-Malware Testing Standards
Organization (AMTSO) in 2008 was intended to bring the two sides, testers and
vendors, together and create a forum for dialogue. In summary, the purpose and
charter of AMTSO are to provide such a forum, to create standards and best
practices, to provide education on testing, and to create tools and resources that
aid standards-based testing.
AMTSO is in the process of creating
standards: in December it published a draft that was agreed by the membership,
which is made up of testers, vendors and academics. It seems reasonable to expect
a tester that is a member of AMTSO to conduct tests adhering to the draft standards;
after all, they are part of the organization that created them. Indeed, some testers
have already conducted tests based on the draft, with success.
Any test, even one run without formally following
the standards, should be conducted under fair and unbiased conditions. For
example, if some vendors are given the opportunity to configure their product
to optimize the final result, or are granted other privileged access during the
test, then all vendors should be afforded the same advantages. If the
playing field is not level, then it should be made clear who had the benefits
and, more importantly, who did not.
There are further questionable practices:
what if a vendor pays to be tested just before a group test? Should this be
noted in the test result that follows? Imagine a scenario where a test
methodology is published and a vendor pays to be tested against it to see what
result it might achieve. By the time the actual test is run, the vendor may have
optimized detection to suit the test; but does this reflect the result a purchaser
of the product could expect to see in normal use? Probably not.
After a test has been conducted there is
typically a period during which vendors are given the opportunity to validate
the results: that is, to decide whether they agree with what their product missed or
wrongly detected (known as a false positive). In my experience, some testers
use this later stage to monetize their testing – if you want to validate the
results, you need to pay – while other testers allow only certain vendors
to validate their test results. Segmenting vendors so that only some
are allowed to validate results produces test results that cannot be used to
compare products fairly or accurately.
When a Chief Security Officer (CSO) picks up
a report showing the efficacy of anti-malware products, it is only natural to
be drawn straight to the graph that displays the percentage of malware
detected. And when a vendor's marketing team uses the test results, the graph
is often all it includes. If the tester has buried, deep in the report, details
of the inconsistent terms under which different vendors participated,
those details are unlikely ever to be read or considered alongside the final
results.
If a test report is going to be used to
make a crucial decision about which protection to select, then it's critical that
the methodology, commercial relationships and ethics behind the test are taken
into account. If these cannot be gleaned from the report itself, then
contacting the tester for clarification is a must.
It’s important that a test takes place on a
level playing field and that all the vendors taking part are afforded the same
conditions, opportunities and validation options. If they are not, then the
results are biased in favor of the vendors that were afforded privileged
conditions, and such results belong in the circular grey filing cabinet under my
desk.