Igor,

Hmmm, you are probably right. I was basing my comment on some Perl I wrote a while back that processed files in a directory; I found that if I didn’t write anything to the screen, the glob-and-process loop was really fast. However, that certainly wasn’t a disk-wide process, and now that I think about it, I never actually checked the processor usage. One wonders, though.
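A quick way to test that hunch (a Python sketch standing in for the original Perl, purely illustrative — the throwaway directory and timing harness are my own):

```python
import os
import sys
import tempfile
import time

def walk_files(root, echo=False):
    """Walk a directory tree and return the file count.

    With echo=True every path is written to the screen; in a tight
    loop that terminal I/O, not the traversal itself, is often the
    dominant cost -- which would explain the fast silent run.
    """
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            count += 1
            if echo:
                sys.stdout.write(os.path.join(dirpath, name) + "\n")
    return count

# Self-contained check on a throwaway directory of empty files.
with tempfile.TemporaryDirectory() as root:
    for i in range(1000):
        open(os.path.join(root, f"f{i}.txt"), "w").close()
    start = time.perf_counter()
    n = walk_files(root)  # silent pass
    print(f"{n} files in {time.perf_counter() - start:.4f}s")
```

Comparing the elapsed time against a `walk_files(root, echo=True)` run would show whether the screen writes or the traversal dominate.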

Yeah, you are right about content changes: as you point out, they should be irrelevant during the period of the file-count scan, and if you used that scan to give you an “estimate” of the total number and used that as a measuring stick, that should work.
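That two-pass idea can be sketched pretty compactly (Python again; the `process` and `report` callbacks are hypothetical placeholders, not anything from your code):

```python
import os

def count_files(root):
    """First pass: count files to get a total for the estimate."""
    total = 0
    for _dirpath, _dirnames, filenames in os.walk(root):
        total += len(filenames)
    return total

def scan_with_progress(root, process, report):
    """Second pass: process each file, reporting done/total.

    Files created or deleted between the two passes make the total
    slightly off, but as noted above, over the duration of the scan
    that should be irrelevant -- it's only a measuring stick.
    """
    total = count_files(root)
    done = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            process(os.path.join(dirpath, name))
            done += 1
            report(done, total)
```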

I was thinking about the disk space thing last night… The result of doing that ought to work, especially if you used it as a progress meter without any sort of percentages or “value” attached. In other words, just advance the progress meter strictly by the amount of used disk space checked. This of course means it might progress really slowly, then jump forward when it hits a bunch of directories with nothing but txt files, and absolutely crawl (relatively speaking, of course) when it runs through a directory of archives…
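The bytes-based variant is only a small change to the same skeleton (again a sketch, with hypothetical `process`/`report` callbacks; it sums file sizes rather than querying true used disk space, which is close enough for a meter):

```python
import os

def scan_by_bytes(root, process, report):
    """Advance the meter by bytes examined rather than files counted.

    First pass sums the file sizes under root; the second pass calls
    report(bytes_done, bytes_total) after each file. A directory of
    small txt files makes the bar leap ahead, while one big archive
    makes it crawl -- exactly the uneven behavior described above.
    """
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    done = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            process(path)
            done += size
            report(done, total)
```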

Fundamentally, the same procedure ought to work on folder-based and individual-file scans, although the amount of code required to produce a bar that only lasts a millisecond or two on individual files and small folders might not be worth the effort…

Ho humm! Interesting to think about though!
Thanks.
Scott…