Do you know the generic methods antiviruses use to detect specific malware based on heuristics? E.g. what happens if the malicious file is not in the database of clean/malicious MD5s?
The first generic heuristic is to run a file inside a sandbox environment: if the file “walks” and “quacks” like malware in a virtual environment (VE), it possibly is malware.
But malware can behave differently when scanned and may try to evade detection. That is why “our” scanning is so valuable.
The second method is file analysis: watch the file’s behavior, and if it behaves like malware, it probably is or could be malware.
The third way is generic signature detection: we check whether the file matches a malware family or subfamily classification. When it does, it might be a new variant of that strain of malware.
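As a rough illustration, generic family matching can be thought of as a pattern search over the file’s bytes. A minimal Python sketch (the byte patterns and family names below are invented for the example; real engines use far richer signatures with wildcards, offsets, and emulation):

```python
import re

# Hypothetical byte patterns "characteristic" of a malware family,
# invented for this example only.
FAMILY_SIGNATURES = {
    "FakeAV.Gen":  rb"\x55\x8b\xec.{0,16}FakeAlert",
    "Dropper.Gen": rb"CreateRemoteThread.{0,64}WriteProcessMemory",
}

def classify(sample: bytes):
    """Return the generic family signatures the sample matches."""
    return [fam for fam, pat in FAMILY_SIGNATURES.items()
            if re.search(pat, sample, re.DOTALL)]

sample = b"\x55\x8b\xec\x90\x90FakeAlert payload..."
print(classify(sample))  # → ['FakeAV.Gen']
```

A hit on a family pattern like this, without an exact hash match, is what suggests “a new variant of that strain.”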
The other side of the generic detection coin is false positives (FPs).
Make an analysis with Anubis and you will get a prediction of what you may have scanned, based on the criteria given here.
There are other characteristics that help this process: source (domain, IP, AS), software signature, protection (packers or crypters used), and exploits used.
In this case it is the self-signed driver that is missed… Read my posting here: http://forum.avast.com/index.php?topic=104551.0
So a malicious file that executes a certain script at random cannot be detected?
If it is in a sandbox, couldn’t you design one executable to read data inside the sandbox, and thus make another executable determine whether it is in that specific sandbox? For example, suppose SandboxA held the data X39E, SandboxB held the data F942, and the data from the current sandbox were extracted into a variable called data…
[code=Lua]
-- Assuming data, SandboxA, and SandboxB are predefined...
if data == SandboxA then
  -- deobfuscate & generate 'safe' exe payload intended for SandboxA
elseif data == SandboxB then
  -- deobfuscate & generate 'safe' exe payload intended for SandboxB
else
  -- no sandbox detected; use malicious payload
end
[/code]
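From the defender’s side, the simplest counter to that kind of fingerprinting is to make sure the sandbox never exposes a stable identifier a sample could hard-code. A minimal Python sketch (illustrative only; `fresh_sandbox_id` is an invented helper, not a real product function):

```python
import secrets

def fresh_sandbox_id() -> str:
    """Return a random identifier for this analysis run, so a sample
    comparing against hard-coded values like X39E or F942 never gets
    a reliable match."""
    return secrets.token_hex(4)

# Five analysis runs yield five (almost certainly) distinct fingerprints.
ids = {fresh_sandbox_id() for _ in range(5)}
print(sorted(ids))
```

With randomized artifacts, the if/elseif lookup table above always falls through to one branch, and the sample can no longer tailor a “safe” payload per sandbox.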
Good conclusion. And that is why we should flag all randomly generated code as suspicious until excluded, and take the safest option of also flagging all obfuscated scripts unless excluded. Normally we deal with malware campaigns, and there certain patterns stand out and make detection easier.
In the case of website malcode there are more factors that weigh into the balance: what malware history we have from that source earlier, what software is exploitable, what attack vector was being used, and what kind of malcode was added or spread by malcreants.
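Flagging obfuscated scripts can start from something as simple as a Shannon-entropy check on the bytes. A toy Python sketch (the 5.0 bits-per-byte threshold is an assumption chosen for the example, not a real product value):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; obfuscated/packed data scores high."""
    if not data:
        return 0.0
    total = len(data)
    return -sum(c / total * math.log2(c / total)
                for c in Counter(data).values())

# Hypothetical threshold: plain scripts usually sit well below this,
# while encrypted/obfuscated blobs approach 8 bits per byte.
SUSPICION_THRESHOLD = 5.0

def looks_obfuscated(script: bytes) -> bool:
    return shannon_entropy(script) > SUSPICION_THRESHOLD

plain = b"document.write('hello world');" * 20
blob = os.urandom(600)  # stands in for an obfuscated payload
print(looks_obfuscated(plain), looks_obfuscated(blob))  # → False True
```

A real engine would combine this with the reputation factors listed above (source history, attack vector, exploitable software) rather than rely on entropy alone, which is why exclusion lists are still needed.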
You are right there, because that one creates a fake Recycle Bin folder, and that gained it some vulnerability-gap time.
See the analysis done here: http://www.threatexpert.com/report.aspx?md5=097067998ba0b976f1d7577e2abd6d3b
But you have to wait a bit for the others to catch up, as avast and some others are detecting it now; see the latest results here: https://www.virustotal.com/file/d0024c0e21e5e6f0493ed9efdc5dca2c0ef6457239f78622e41c3f1825c96d25/analysis/ (12/42 detections)
Yes, going from FUD status (that means fully undetected on the scanner services the cybercriminals worked with) to detection by 1 AV takes some time, and the time from 1 AV detecting to over 5 detecting takes some 4 to 5 days as a rule. Besides, a lot of avast detection comes outside normal file detection, via for instance the avast webshield or network shield detection, behavioral shield, cloud scanning, etc.
Hi,
Rogue antiviruses are ALWAYS packed, usually with UPX or other static packers (e.g. Mystic Compressor), but most of the time they use custom packers. So antiviruses’ hashes fail greatly, in an epic way (I am serious). I guess pattern recognition could work, though the chances are too low. The next way to stop it is via the malware’s presence in memory, but by then it would actually be way too late.
I don’t feel like discussing anything about sandboxes and virtual machines, since those variants have been in the wild for more than 5 years (generally speaking about rogue AVs). The concept is that the viruses install themselves; you don’t install them, therefore you don’t sandbox them.
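The point about hash databases failing against custom packers is easy to demonstrate: every repack changes the file’s hash, so a clean/malicious MD5 list never gets a hit on a freshly packed build. A short Python illustration (the byte strings are dummies, not real samples):

```python
import hashlib

original = b"MZ\x90\x00" + b"ROGUE_AV_BODY" * 10
# A custom packer rewrites far more than this, but even flipping a
# single byte is enough to change the MD5 completely.
repacked = bytearray(original)
repacked[10] ^= 0x01

print(hashlib.md5(original).hexdigest())
print(hashlib.md5(bytes(repacked)).hexdigest())
# the two digests are completely different, so a hash lookup misses
# every newly packed variant of the same rogue AV
```

This is exactly why per-build custom packing makes hash-based detection “fail in an epic way,” and why family patterns or behavior have to pick up the slack.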
Thanks for your insights on this issue, as it helps our discussion in this thread. Very interesting points, so if detection should not come at a point where as you state “it is way too late”, what is the best way to tackle the problem. Isn’t it so that this malware comes from known sources, as much as they may be migrating all over the net, and the blocking should be there where the malcreants hold shop, so to say? What are your ideas about this? What is the most practical way?
I think you misinterpreted both my point and !Donovan’s remarks about the sandbox. What is meant is the VM that is used for analysis after the malware has been detected, the so-called dissection of the malware sample. We did not mean that detection should come at the moment unknown malcode lands in a sandbox environment. No way; no one in his right mind would practice such a thing.
Hi Pol, better proactive defense would be the ideal solution, and also increased performance of the hash calculator. It seems that I underestimated this variant; from a quick search I found out that this one is protected by a rootkit (a variant of Necurs), so you can’t close the process. It doesn’t disable Task Manager, regedit, etc., but you can’t close it. It’s like trying to kill an anti-virus from user mode, which is impossible.
Detecting rogue AVs is not easy. If you remember the old variants from some years ago, they just threw up boxes saying that you are infected; they didn’t do anything we could consider “malicious”. Now they have become more aggressive, with multiple system modifications and an extra layer of protection. Something I was thinking of is that the anti-virus process could be marked as critical by the system, so whenever the process is closed the computer shuts itself down. In addition, if it is marked as critical, viruses won’t be able to touch it. Doing so is not that easy and it’s an undocumented method, so we can’t be sure.
P.S.: Yes, I probably misunderstood your statements.