Mirsad Todorovac wrote:
Why without? AFAIK, PROT_EXEC is (roughly speaking) the software side of hardware NX.
Could be, on some platforms. AFAIR, there was a platform (Linux? my memory is dim here) which did not implement PROT_EXEC protection. AFAIK, Linux kernels generally did not until the 2.6 versions.
According to Wikipedia, since 2.6.8.
But you must have known this, and this is a backend issue I suppose.
Yes, it's an OS and backend issue.
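To make the PROT_EXEC point concrete, here is a minimal C sketch (the function name alloc_noexec_page is hypothetical; this assumes a POSIX system with mmap):

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map one anonymous page readable and writable but without PROT_EXEC:
 * the userland side of hardware NX.  On kernels that honour PROT_EXEC
 * (Linux since 2.6.8, on NX-capable hardware), jumping into this page
 * faults, while reading and writing it still works. */
void *alloc_noexec_page(size_t *size_out) {
    size_t pg = (size_t)sysconf(_SC_PAGESIZE);
    void *p = mmap(NULL, pg, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    if (size_out)
        *size_out = pg;
    return p;
}
```

On older kernels or non-NX hardware the mapping succeeds all the same, but the missing PROT_EXEC bit is silently ignored, which is exactly the backend/OS dependence discussed above.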
I am putting it on my TO-DO list. It is a very interesting issue in general, for all operating systems that use paging virtual memory. (OTOH, coming back to releasing unused holes: unused pages will probably be swapped out and not reloaded again, since they are not used. The real problem is stochastic allocation/deallocation of relatively small fragments of memory. For example, if the average size of records is from 512 to 1023 bytes, allocations will mostly come from the 2^10 size class of the heap, but after allocations/deallocations settle into an asymptotically stable state, there will be a roughly normal distribution of used and unused areas on each physical memory page. This means that the physical pages allocated for the heap may eventually be about double the program's actual memory needs. I may look for literature; right now I am speaking from memory of lectures about Unix processes.)
I'm really no expert here, but what assumptions are these simulations based on? Since deallocated fragments are reused, the maximum allocated space for a given chunk size is the maximum number of active allocations at any time (rounded up to the page size). Why should they be twice the program's memory needs?
Perhaps this model somehow assumes random selection of newly allocated fragments? Otherwise it seems obvious to me that the distribution is not the same for all pages, i.e. the lower pages (first tried for reuse) should generally be more densely populated, since deallocated fragments on them are reused very soon.
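One way to test this reasoning instead of arguing from memory is a toy simulation. The sketch below (heap_overhead is a hypothetical helper; it assumes 4 KiB pages, and the exact ratio depends entirely on the C library's allocator) randomly allocates and frees fragments of 512 to 1023 bytes, then compares the pages actually touched by the survivors against their packed size; a ratio near 2 would support the doubling estimate, a ratio near 1 would support the dense-reuse argument:

```c
#include <stdint.h>
#include <stdlib.h>

#define SLOTS 2048
#define PAGE  4096u          /* assumed page size */

static int cmp_uptr(const void *a, const void *b) {
    uintptr_t x = *(const uintptr_t *)a, y = *(const uintptr_t *)b;
    return (x > y) - (x < y);
}

/* Toy model of the stochastic alloc/free pattern discussed above.
 * Returns (bytes of pages holding at least one live fragment) divided
 * by (live payload bytes); 1.0 would mean perfectly dense packing. */
double heap_overhead(unsigned seed, int steps) {
    static void  *ptr[SLOTS];
    static size_t len[SLOTS];
    srand(seed);
    for (int i = 0; i < steps; i++) {
        int s = rand() % SLOTS;          /* toggle a random slot */
        if (ptr[s]) { free(ptr[s]); ptr[s] = NULL; }
        else { len[s] = 512 + rand() % 512; ptr[s] = malloc(len[s]); }
    }
    if (!ptr[0]) { len[0] = 512; ptr[0] = malloc(len[0]); } /* >= 1 live */
    /* count distinct pages touched by the surviving fragments */
    uintptr_t pages[2 * SLOTS];          /* each fragment spans <= 2 pages */
    size_t npages = 0, live = 0;
    for (int s = 0; s < SLOTS; s++) {
        if (!ptr[s]) continue;
        live += len[s];
        uintptr_t first = (uintptr_t)ptr[s] / PAGE;
        uintptr_t last  = ((uintptr_t)ptr[s] + len[s] - 1) / PAGE;
        for (uintptr_t pg = first; pg <= last; pg++)
            pages[npages++] = pg;
    }
    qsort(pages, npages, sizeof pages[0], cmp_uptr);
    size_t uniq = 0;
    for (size_t i = 0; i < npages; i++)
        if (i == 0 || pages[i] != pages[i - 1]) uniq++;
    for (int s = 0; s < SLOTS; s++)      /* clean up */
        if (ptr[s]) { free(ptr[s]); ptr[s] = NULL; }
    return live ? (double)(uniq * PAGE) / (double)live : 0.0;
}
```

Note the caveat in the lead-in: a real allocator's fit strategy (first-fit, size bins, etc.) is exactly the variable under debate here, so this only measures one allocator's behaviour, not a law.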
I see. I realize adding security measures drastically impacts performance (such as making all pointers "volatile" variables which cannot go to registers),
I'm not sure this is necessary, this seems too drastic.
but having an important system brought to its knees by an undetected buffer overrun in an application will hurt me more, both as a system administrator and as a software developer, than a 20% decrease in program speed. IMHO.
OTOH, I'm not sure the less drastic measures will cost only 20%. As registers are basically "L0 cache", ignoring them for all pointers might suffer an enormous penalty for pointer-intensive code. But again, we're talking about hot air, without any real data.
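For concreteness, the "all pointers volatile" proposal being debated amounts to the difference between these two loops (a minimal sketch; sum_plain and sum_volatile are hypothetical names):

```c
#include <stddef.h>

/* Plain traversal: the compiler is free to keep everything
 * in registers and optimise the loop aggressively. */
long sum_plain(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Same loop, but the pointer variable itself is volatile: every use
 * of p must be re-read from memory instead of a register, which is
 * the cost being discussed above. */
long sum_volatile(const int *a, size_t n) {
    const int *volatile p = a;
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += p[i];
    return s;
}
```

Both compute the same result; only the generated code differs, so any real penalty figure would have to come from benchmarking, not from the 20% guess above.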
I understand your concerns as an admin, but really I don't think such drastic measures are the solution in the long run. They are no real replacement for proper engineering techniques.
But if you're ready to go that far, you might consider running the whole thing in a sandbox or virtual machine with very limited privileges. (Of course, this shifts the security problem to the sandbox/VM code, but that's one program instead of N programs.)
IMHO, the canary ought to be checked on free(). This would catch a number of errors, since probably the most common cause of a buffer overrun is an off-by-one error in a loop.
This would serve as a debugging aid (similar to efence), not as attack prevention, as an attack can occur before free().
True. However, several attack scenarios rely on smashing allocator list pointers and overwriting arbitrary locations in memory. A canary could prevent that, if checked prior to evaluating the pointers that follow it. :-)
So you're back to checking every access, not only free() ...
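To pin down what "check the canary on free()" would mean, here is a minimal sketch (canary_malloc/canary_free are hypothetical names; a real implementation would live inside the allocator and abort rather than return a flag):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define CANARY 0xDEADC0DEDEADC0DEull

/* Allocate size bytes, plus a hidden size header in front and a
 * trailing canary word after the user region. */
void *canary_malloc(size_t size) {
    unsigned char *p = malloc(sizeof(size_t) + size + sizeof(uint64_t));
    if (!p)
        return NULL;
    memcpy(p, &size, sizeof size);               /* remember the size */
    uint64_t c = CANARY;
    memcpy(p + sizeof(size_t) + size, &c, sizeof c);
    return p + sizeof(size_t);
}

/* Free the block; returns 1 if the canary was intact,
 * 0 if something wrote past the end of the buffer. */
int canary_free(void *user) {
    unsigned char *p = (unsigned char *)user - sizeof(size_t);
    size_t size;
    memcpy(&size, p, sizeof size);
    uint64_t c;
    memcpy(&c, (unsigned char *)user + size, sizeof c);
    free(p);
    return c == CANARY;
}
```

As noted above, this only detects an overrun after the fact: an attack that corrupts and exploits the heap before free() is ever called slips through, which is why the discussion keeps coming back to checking every access.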
And, BTW, this is also an area of its own, with its experts. This does not mean we should not care here, but one should really first study existing work and the state of the art. If the implementation of some techniques requires compiler support, we can discuss them here, but this is not really the place to design new techniques (which most likely will have been discussed by the experts already).
I guess you want to tell me I need to do more homework before raising similar issues, so I will try to do that next time. However, it is hard to become an expert in a month, so I was relying on your experience ;-)
And also that there are probably not too many real security experts here. I know a bit about security, but developing new concepts (as you seem to plan) might better be done with the experts.
Adriaan van Os wrote:
Mirsad Todorovac wrote:
I see. I realize adding security measures drastically impacts performance (such as making all pointers "volatile" variables which cannot go to registers), but having an important system brought to its knees by an undetected buffer overrun in an application will hurt me more, both as a system administrator and as a software developer, than a 20% decrease in program speed. IMHO.
Back to reality. You are not obliged to build buffer overruns into your software, are you?
;-)
Of course, programmers do make mistakes.
I don't know which particular overrun this was, but chances are it would have been prevented by range checking (at least many of the common overruns would have been). IOW, as long as many programmers don't even use rather simple available techniques (either by choosing environments that don't offer them, or by stepping around them intentionally), any more invasive measures will probably have little effect either. This doesn't mean one shouldn't consider them, but for practical purposes, in the short to medium term, they will likely not make any noticeable difference. Which again means: if your main concern is the security of an actual, currently running system, you'll probably have better luck with more conventional security measures.
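The range checking meant here can be sketched in C as an explicit guard (checked_get is a hypothetical helper; compilers with range checking enabled, such as Pascal compilers with the appropriate switch, emit this kind of test automatically for every indexed access):

```c
#include <stdio.h>
#include <stdlib.h>

/* Bounds-checked array access: fail loudly and immediately
 * instead of silently overrunning the buffer. */
int checked_get(const int *a, size_t len, size_t i) {
    if (i >= len) {
        fprintf(stderr, "range check error: index %zu >= length %zu\n",
                i, len);
        abort();
    }
    return a[i];
}
```

The point of the paragraph above stands: this costs one comparison per access and would have stopped many of the classic overruns, yet it is routinely disabled or unavailable in the environments where those overruns happen.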
Frank