- 12 January 2006, 1 commit
Committed by Randy.Dunlap
- Move capable() from sched.h to capability.h;
- Use <linux/capability.h> where capable() is used (in include/, block/, ipc/, kernel/, a few drivers/, mm/, security/, & sound/; many more drivers/ to go)

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
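A minimal sketch of the include pattern this change establishes: callers of capable() pull in <linux/capability.h> directly rather than relying on sched.h. The wrapper function and its use of CAP_SYS_PACCT are illustrative assumptions, not code from the patch.

```c
#include <linux/capability.h>	/* capable() is declared here after this change */
#include <linux/errno.h>

/* Hypothetical permission check: only a privileged caller (checked here
 * with CAP_SYS_PACCT, the process-accounting capability) may proceed. */
static int example_acct_permission(void)
{
	if (!capable(CAP_SYS_PACCT))
		return -EPERM;
	return 0;
}
```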
- 07 January 2006, 1 commit
Committed by Martin Schwidefsky
There are some more places where the use of cputime_t (and its associated macros) instead of a plain integer type is necessary for the virtual cputime accounting on s390. Affected are the s390-specific appldata code and BSD process accounting.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
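A minimal sketch of the pattern, assuming the generic cputime_t helpers of that era (cputime_zero, cputime_add); the accumulator and function names are hypothetical and only illustrate doing the arithmetic through the macros instead of plain integer addition:

```c
#include <asm/cputime.h>	/* cputime_t, cputime_zero, cputime_add, ... */

/* Hypothetical accumulator: CPU time is summed with the cputime_t macros
 * so that architectures with a non-jiffies cputime format (s390's virtual
 * cputime accounting) keep working unchanged. */
static cputime_t example_total_cputime = cputime_zero;

static void example_charge_cputime(cputime_t utime, cputime_t stime)
{
	example_total_cputime = cputime_add(example_total_cputime,
					    cputime_add(utime, stime));
}
```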
- 08 November 2005, 1 commit
Committed by Al Viro
The way we currently deal with quota and process accounting that might keep a vfsmount busy at umount time is inherently broken; we try to turn them off just in case (not quite correctly, at that) and

a) pray umount doesn't fail (otherwise they'll stay turned off)
b) pray nobody does anything funny just as we turn quota off

Moreover, LSM provides hooks for doing the same sort of broken logic.

The proper way to deal with that is to introduce a second kind of reference to vfsmount. Semantics:

- when the last normal reference is dropped, all special ones are converted to normal ones and, if there had been any, cleanup is done.
- a normal reference can be cloned into a special one
- a special reference can be converted to a normal one; that's a no-op if we'd already passed the point of no return (i.e. mntput() had converted special references to normal and started cleanup).

The way it works: e.g. starting process accounting converts the vfsmount reference pinned by the opened file into a special one and turns it back to normal when it gets shut down; acct_auto_close() is done when no normal references are left. That way it does *not* obstruct umount(2), and accounting silently gets turned off when the last normal reference to the vfsmount is gone. Which is exactly what we want...

The same should be done by any LSM module that holds internal references to a vfsmount and wants to shut them down on umount - it should make them special, and security_sb_umount_close() will be called exactly when the last normal reference to the vfsmount is gone.

Quota handling is even simpler - we don't use normal file IO anymore, so there's no need to hold vfsmounts at all. DQUOT_OFF() is done from deactivate_super(), where it really belongs.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
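A simplified, single-threaded sketch of the two-kind reference semantics described above. The type and function names are hypothetical (the real code operates on struct vfsmount under proper locking); the sketch only mirrors the three rules listed in the message.

```c
#include <stdbool.h>

/* Hypothetical reference holder used only for illustration. */
struct two_kind_ref {
	int normal;		/* ordinary references; these keep umount busy */
	int special;		/* special references (e.g. the acct file)     */
	bool past_no_return;	/* cleanup has already started                 */
	void (*cleanup)(void);	/* the acct_auto_close()-style shutdown step   */
};

/* Clone a normal reference into a special one. */
static void get_special(struct two_kind_ref *r)
{
	r->special++;
}

/* Convert a special reference back to a normal one; a no-op once we are
 * past the point of no return (cleanup already converted and started). */
static void special_to_normal(struct two_kind_ref *r)
{
	if (r->past_no_return)
		return;
	r->special--;
	r->normal++;
}

/* Drop a normal reference.  When the last one goes away, all special
 * references become normal and, if there were any, cleanup runs. */
static void put_normal(struct two_kind_ref *r)
{
	if (--r->normal > 0)
		return;
	r->past_no_return = true;
	if (r->special) {
		r->normal += r->special;
		r->special = 0;
		r->cleanup();
	}
}
```

The key property is the one the message calls out: only normal references can obstruct umount(2); special ones ride along and are flushed out exactly when the last normal reference disappears.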
- 30 October 2005, 1 commit
Committed by Hugh Dickins
I was lazy when we added anon_rss, and chose to change as few places as possible. So currently each anonymous page has to be counted twice, in rss and in anon_rss, which won't be so good if those are atomic counts in some configurations.

Change that around: keep file_rss and anon_rss separately, and add them together (with the get_mm_rss macro) when the total is needed - reading two atomics is much cheaper than updating two atomics. And update anon_rss upfront, typically in memory.c, not tucked away in page_add_anon_rmap.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
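A tiny self-contained sketch of the counting scheme; the struct and helper are hypothetical simplifications, not the kernel's mm_struct or the real get_mm_rss macro.

```c
/* Hypothetical, simplified per-mm counters used only for illustration. */
struct example_mm {
	unsigned long file_rss;	/* pages backed by files */
	unsigned long anon_rss;	/* anonymous pages       */
};

/* Summing two counters on the rare read is cheaper than keeping a third,
 * always-up-to-date "rss" counter that every page fault must also update. */
static inline unsigned long example_get_mm_rss(const struct example_mm *mm)
{
	return mm->file_rss + mm->anon_rss;
}
```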
- 11 September 2005, 1 commit
Committed by Randy Dunlap
For kernel/acct.c:
- fix typos
- add kerneldoc for non-static functions

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
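For reference, the kernel-doc comment layout being added looks like this; the declaration below is a hypothetical stand-in used only to carry the example:

```c
/**
 * example_acct_flush - write buffered accounting records to the acct file
 * @force: flush even if the internal buffer is not yet full
 *
 * The layout above is the kernel-doc format: a one-line summary after the
 * function name, one line per "@parameter:", then a free-form body.  The
 * function itself is hypothetical; only the comment structure is the point.
 *
 * Returns 0 on success or a negative errno on failure.
 */
int example_acct_flush(int force);
```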
- 08 September 2005, 1 commit
Committed by Peter Staubach
There is a problem in the accounting subsystem: the kernel cannot correctly handle files larger than 2GB. The output file containing the process accounting data can grow very large if the system is large enough and active enough. If the 2GB limit is reached, the system simply stops storing process accounting data.

Another annoying problem is that once the system reaches this 2GB limit, every process which exits will receive a signal, SIGXFSZ. This signal is generated because an attempt was made to write beyond the limit for the file descriptor. It makes it look like every process has exited due to a signal, when in fact, they have not.

The solution is to add the O_LARGEFILE flag to the list of flags used to open the accounting file. The rest of the accounting support is already largefile safe.

The changes were tested by constructing a large file (just short of 2GB), enabling accounting, and then running enough commands to cause the accounting data generated to increase the size of the file to 2GB. Without the changes, the file grows to 2GB and the last command run in the test script appears to exit due to a signal when it has not. With the changes, things work as expected and quietly.

There are some user-level changes required so that tools can deal with large files, but those are being handled separately.

Signed-off-by: Peter Staubach <staubach@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
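A minimal sketch of the fix's shape: the accounting file is opened with O_LARGEFILE so writes past 2GB neither fail nor raise SIGXFSZ. The helper name and the exact flag combination are assumptions for illustration, not a quote of kernel/acct.c.

```c
#include <linux/fs.h>		/* filp_open(), struct file */
#include <linux/fcntl.h>	/* O_WRONLY, O_APPEND, O_LARGEFILE */

/* Hypothetical helper: open the accounting file for appending with the
 * 2GB limit lifted by O_LARGEFILE.  Callers check the result with
 * IS_ERR()/PTR_ERR(). */
static struct file *example_open_acct_file(const char *name)
{
	return filp_open(name, O_WRONLY | O_APPEND | O_LARGEFILE, 0);
}
```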
- 17 April 2005, 1 commit
Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it.

Let it rip!