Vlk,
Hornus, thanks for the nice reading... again.
You’re quite welcome. I hope you enjoyed reading it as much as I did writing it. Software engineering is my passion, but unlike most SWEs, I enjoy the technical writing aspects as much as the development aspects. Most that I know hate it with such fervor that they would gladly sacrifice a testicle if it would get them out of writing documentation.
Having been involved in virtually all aspects of the product life cycle, I have an extreme appreciation for the job you guys do.
I don't quite agree - in many cases it is really advantageous to grab a bigger piece of memory at once, and then use a custom sub-allocator to partition it into actual memory blocks. That's because the system allocator is of course as generic as possible, providing poor performance in certain situations where a simple custom allocator would in fact provide a big performance boost.
You’re absolutely right here. By custom tuning the memory management algorithm for the application, fragmentation and the number of OS calls can be greatly reduced and provide other efficiencies.
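To illustrate for anyone following along, here's a minimal sketch of the kind of custom sub-allocator Vlk is describing: grab one big block from the system allocator up front, then hand out fixed-size chunks from an intrusive free list in O(1) time. The class name and layout are my own invention for the example, nothing more:

// Minimal fixed-size pool sub-allocator sketch: one big malloc() up front,
// then O(1) sub-allocation from a free list threaded through the blocks.
#include <cstddef>
#include <cstdlib>
#include <new>

class FixedPool {
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : blockSize_(blockSize < sizeof(void*) ? sizeof(void*) : blockSize)
    {
        // One large allocation instead of blockCount small ones:
        // far fewer heap/OS calls and no per-block fragmentation.
        buffer_ = static_cast<char*>(std::malloc(blockSize_ * blockCount));
        if (!buffer_) throw std::bad_alloc();

        // Thread every block onto an intrusive free list.
        freeList_ = nullptr;
        for (std::size_t i = 0; i < blockCount; ++i) {
            void* block = buffer_ + i * blockSize_;
            *static_cast<void**>(block) = freeList_;
            freeList_ = block;
        }
    }

    ~FixedPool() { std::free(buffer_); }

    void* allocate() {                 // O(1): pop the free-list head
        if (!freeList_) return nullptr;
        void* block = freeList_;
        freeList_ = *static_cast<void**>(block);
        return block;
    }

    void deallocate(void* block) {     // O(1): push back onto the list
        *static_cast<void**>(block) = freeList_;
        freeList_ = block;
    }

private:
    std::size_t blockSize_;
    char* buffer_;
    void* freeList_;
};

Because every allocate() and deallocate() is just a pointer swap inside the one big block, the generic heap (and the OS) stay completely out of the loop on the hot path.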
When I referenced the good old DOS days, I was thinking of the situation where basically only one application could run at a time, with the exception of TSRs. Since the OS, the device drivers, the extended and/or expanded memory managers (which were basically OS extensions), and most of the TSRs got what they needed up front during bootup, any of the 640K memory space available after the program loaded could, and should, be freely used to allocate memory blocks and load and unload overlays, without any other considerations.
As for the Windows 3.1 environment (and I should have written 3.x), I was thinking of Real Mode and Standard Mode, which didn’t support virtual memory, and the early days of Enhanced Mode, which did. The average software engineers were either too ingrained with past experiences, or had only a rudimentary understanding of the concepts, and couldn’t use it effectively to the benefit of the whole computing environment. The better ones designed applications to grab as much memory as the anticipated needs dictated, knowing that other applications couldn’t be relied on to get and release memory on an as-needed basis. (It was better to make the other guy sweat an out-of-memory condition.) And of course, they were trying to boost their own programs’ efficiency by eliminating the overhead of acquiring and releasing memory, especially the context switching involved every time an OS call was made that required switching the processor to and from Protected Mode.
Look at the behavior of many server-based apps - they allocate huge memory blocks at startup, and it's (usually) really for a good reason. The extreme example of this is the MS SQL Server which (by default) almost always takes about 85% of free RAM at once on startup (even if your machine had 4G of RAM, SQL would grab about 3.5G at startup).
I wasn’t taking into account servers and server applications, which have a totally different set of requirements and rules of engagement, as it were, and I’m glad you pointed that out. A server running MS SQL is an excellent example. Any server or cluster operating as a database server is, for all intents and purposes, running a dedicated task; file and print servers, web servers, and mail servers, not so much.
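For the record, that startup pattern looks roughly like this in Win32 terms. Just a sketch, with the 85% figure borrowed from your SQL Server example and assuming a 64-bit process so the reservation actually fits in the address space:

// Sketch of the "grab it all at startup" server pattern: reserve a large
// share of physical RAM once, then commit pages from it on demand.
#include <windows.h>
#include <cstdio>

int main() {
    MEMORYSTATUSEX status = { sizeof(status) };
    GlobalMemoryStatusEx(&status);

    // Reserve (but do not yet commit) ~85% of physical RAM as one
    // contiguous region of virtual address space. The 85% is a policy
    // choice taken from the example above, not a magic number.
    SIZE_T reserveBytes = static_cast<SIZE_T>(status.ullTotalPhys * 85 / 100);
    void* region = VirtualAlloc(NULL, reserveBytes, MEM_RESERVE, PAGE_NOACCESS);
    if (!region) return 1;

    // Commit only what is actually needed, when it is needed. Committing
    // is what consumes pagefile backing; reserving only consumes address space.
    void* firstChunk = VirtualAlloc(region, 1 << 20, MEM_COMMIT, PAGE_READWRITE);
    std::printf("reserved %zu bytes, committed 1 MB at %p\n",
                static_cast<size_t>(reserveBytes), firstChunk);

    VirtualFree(region, 0, MEM_RELEASE);
    return 0;
}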
As such, and seeing first-hand the excellent engineering in the home edition, I know that Alwil considers it necessary to go to the considerable trouble and expense of developing separate products for networks and servers to best meet the unique requirements of its customers’ many different environments. Kudos for taking that route instead of producing a jack-of-all-trades and master-of-none.
Well in fact the EXE images are implemented via memory-mapped files, not ordinary buffer reads. Thus no buffers need to be preallocated (but virtual address space needs to be reserved [not committed] of course).
I didn’t know this. You’ve given me something tasty to dig into. I’m always grateful when someone can clarify an issue for me or correct a misunderstanding.
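In the meantime, here's my rough understanding of what you're describing, sketched with an ordinary read-only mapping (I gather the actual loader uses a SEC_IMAGE section for executables, but the on-demand paging behavior is the same idea). The path is purely illustrative:

// Sketch: map a file into the process via a memory-mapped section instead
// of reading it into heap buffers, so pages fault in from disk on demand.
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE file = CreateFileA("C:\\Windows\\notepad.exe", GENERIC_READ,
                              FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!mapping) { CloseHandle(file); return 1; }

    // MapViewOfFile consumes address space for the whole file, but no page
    // is actually read from disk until it is first touched.
    const BYTE* view = static_cast<const BYTE*>(
        MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0));
    if (view) {
        std::printf("first two bytes: %c%c\n", view[0], view[1]); // "MZ"
        UnmapViewOfFile(view);
    }
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}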
Anyway, I wanted to say that we've changed some allocation patterns in the upcoming avast update - check out the Mem Usage column as soon as you get the update...
That’s good news for all of us. Many thanx to you and the rest of the A-team for continually striving to improve an already excellent product, especially with much of it based on user feedback.
Regards,
Hornus