Interesting, thanks. Now, that covers the URL scan, but the file scan is the one I’m more interested in, and not just for this particular case.

I want to understand why the file scanners are not showing a clear trend.

For new malware, low detection percentages might be expected, just as with a false positive (FP). But for malware that is not so new, either the webmaster fixes the problem (and the percentage drops back down), or the trend should be for all engines to detect it.

For an FP, the long-run trend would also be toward low percentages, since I assume not every scan engine would flag the index.html files at the same time, and the FP would eventually be corrected.

If the webmaster fails to fix the problem (perhaps not even knowing the site was hacked), then in the long run nearly all engines should end up reporting the malware.

The same trojan is found in the four index.html files (at the four addresses above), and this has been the situation for months, yet not all engines flag it.

So detection ratios hovering between, say, 10/44 and 30/44 over long periods of time (months) is what strikes me as strange.
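If it helps to put numbers behind that impression, here is a minimal sketch of how one could poll VirusTotal's public v3 file-report endpoint on a schedule and log how the ratio evolves. This is just an illustration: the API key and hashes are placeholders, and the denominator VirusTotal displays on the site may differ slightly from the sum computed here.

```python
# Minimal sketch: log the current detection ratio for a set of file hashes
# via the VirusTotal v3 API. Requires the `requests` package.
import requests

API_KEY = "YOUR_VT_API_KEY"            # placeholder: your VirusTotal API key
HASHES = [
    "sha256-of-index-html-1",          # placeholder hashes for the four
    "sha256-of-index-html-2",          # index.html files in question
]

for h in HASHES:
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{h}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    # last_analysis_stats holds per-verdict engine counts:
    # malicious, suspicious, undetected, harmless, timeout, ...
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    detected = stats["malicious"] + stats["suspicious"]
    total = sum(stats.values())        # approximate total engine count
    print(f"{h}: {detected}/{total} engines flag it")
```

Running something like this daily (e.g. from cron) would give a time series of the ratio, which would make it easier to see whether the count is genuinely stuck in that 10/44 to 30/44 band or just oscillating as individual engines add and drop the signature.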