1. 09 April 2009: 9 commits
  2. 07 April 2009: 27 commits
    • M
      kprobes: support per-kprobe disabling · de5bd88d
      Committed by Masami Hiramatsu
      Add disable_kprobe() and enable_kprobe() to disable/enable kprobes
      temporarily.
      
      disable_kprobe() asynchronously disables the probe handlers of the
      specified kprobe, so its handlers may still be invoked for a short
      while after the call.  enable_kprobe() re-enables the specified kprobe.
      
      aggr_pre_handler and aggr_post_handler check for disabled probes.
      aggr_break_handler and aggr_fault_handler, on the other hand, do not,
      because they are invoked while the pre or post handlers are executing
      and usually assist with error handling.  (A minimal usage sketch
      follows this entry.)
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      de5bd88d
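      A minimal, illustrative kernel-module sketch of the new API (not part
      of the commit itself): register a kprobe, then toggle it with
      disable_kprobe()/enable_kprobe().  The probed symbol "do_fork" and the
      module boilerplate are assumptions chosen for the example.

        #include <linux/module.h>
        #include <linux/kprobes.h>

        static int demo_pre(struct kprobe *p, struct pt_regs *regs)
        {
                pr_info("hit %s\n", p->symbol_name);
                return 0;
        }

        static struct kprobe kp = {
                .symbol_name = "do_fork",
                .pre_handler = demo_pre,
        };

        static int __init demo_init(void)
        {
                int ret = register_kprobe(&kp);

                if (ret)
                        return ret;

                disable_kprobe(&kp);    /* handlers may still fire briefly */
                enable_kprobe(&kp);     /* re-arm the probe */
                return 0;
        }

        static void __exit demo_exit(void)
        {
                unregister_kprobe(&kp);
        }

        module_init(demo_init);
        module_exit(demo_exit);
        MODULE_LICENSE("GPL");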
    • M
      kprobes: rename kprobe_enabled to kprobes_all_disarmed · e579abeb
      Committed by Masami Hiramatsu
      Rename kprobe_enabled to kprobes_all_disarmed and invert its logic, to
      avoid naming confusion with the new per-probe disabling.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e579abeb
    • M
      kprobes: move EXPORT_SYMBOL_GPL just after function definitions · 99081ab5
      Committed by Masami Hiramatsu
      Clean up the placement of EXPORT_SYMBOL_GPL in kernel/kprobes.c
      according to checkpatch.pl.  (The preferred pattern is sketched after
      this entry.)
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      99081ab5
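      A tiny illustration (not taken from the diff) of the placement
      checkpatch.pl asks for, using a made-up function name:

        #include <linux/module.h>

        /* preferred: the export macro directly follows the definition it
         * exports, instead of being grouped at the end of the file */
        int example_api(void)
        {
                return 0;
        }
        EXPORT_SYMBOL_GPL(example_api);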
    • M
      kprobes: cleanup aggr_kprobe related code · b918e5e6
      Committed by Masami Hiramatsu
      Currently, kprobes can disable all probes at once, but cannot disable
      them individually (that is, disable a kprobe without unregistering it,
      since unregistering has to wait for scheduler synchronization).  These
      patches introduce APIs for on-the-fly per-probe disabling and
      re-enabling by disarming/re-arming the probe's breakpoint instruction.
      
      This patch:
      
      Rename old_p to ap in add_new_kprobe() for readability, copy the flags
      member in add_aggr_kprobe(), and simplify the code flow of
      register_aggr_kprobe().
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b918e5e6
    • P
      mm: add /proc controls for pdflush threads · fafd688e
      Committed by Peter W Morreale
      Add /proc entries to give the admin the ability to control the minimum and
      maximum number of pdflush threads.  This allows finer control of pdflush
      on both large and small machines.
      
      The rationale is simply that one size does not fit all.  Admins on
      large and/or small systems may want to tune the min/max pdflush thread
      count to best suit their needs.  Right now the min/max is hardcoded to
      2/8.  While that is probably a fair estimate for smaller machines,
      large machines with many CPUs and many filesystems/block devices may
      benefit from more threads working on different block devices.
      
      Even if the background flushing algorithm is radically changed, it is
      still likely that multiple threads will be involved, and admins would
      still want finer control over the min/max without having to recompile
      the kernel.
      
      The patch adds '/proc/sys/vm/nr_pdflush_threads_min' and
      '/proc/sys/vm/nr_pdflush_threads_max' with r/w permissions.
      
      The minimum value for nr_pdflush_threads_min is 1 and the maximum value is
      the current value of nr_pdflush_threads_max.  This minimum is required
      since additional thread creation is performed in a pdflush thread itself.
      
      The minimum value for nr_pdflush_threads_max is the current value of
      nr_pdflush_threads_min, and the maximum value is 1000.
      
      Documentation/sysctl/vm.txt is also updated.  (A small userspace
      sketch of reading and adjusting these knobs follows this entry.)
      
      [akpm@linux-foundation.org: fix comment, fix whitespace, use __read_mostly]
      Signed-off-by: Peter W Morreale <pmorreale@novell.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fafd688e
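      A small userspace sketch (not part of the patch) that reads the two
      new knobs and raises the minimum; the value 4 is an arbitrary example
      and writing requires root:

        #include <stdio.h>

        static long read_knob(const char *path)
        {
                long val = -1;
                FILE *f = fopen(path, "r");

                if (f) {
                        if (fscanf(f, "%ld", &val) != 1)
                                val = -1;
                        fclose(f);
                }
                return val;
        }

        int main(void)
        {
                printf("pdflush threads: min=%ld max=%ld\n",
                       read_knob("/proc/sys/vm/nr_pdflush_threads_min"),
                       read_knob("/proc/sys/vm/nr_pdflush_threads_max"));

                /* keep at least 4 pdflush threads around */
                FILE *f = fopen("/proc/sys/vm/nr_pdflush_threads_min", "w");
                if (f) {
                        fprintf(f, "4\n");
                        fclose(f);
                }
                return 0;
        }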
    • Z
      ftrace: Correct a text align for event format output · 1bbe2a83
      Committed by Zhaolei
      If we cat debugfs/tracing/events/ftrace/bprint/format, we'll see:
      name: bprint
      ID: 6
      format:
      	field:unsigned char common_type;	offset:0;	size:1;
      	field:unsigned char common_flags;	offset:1;	size:1;
      	field:unsigned char common_preempt_count;	offset:2;	size:1;
      	field:int common_pid;	offset:4;	size:4;
      	field:int common_tgid;	offset:8;	size:4;
      
      	field:unsigned long ip;	offset:12;	size:4;
      	field:char * fmt;	offset:16;	size:4;
      	field: char buf;	offset:20;	size:0;
      
      print fmt: "%08lx (%d) fmt:%p %s"
      
      Note the inconsistent blank before "char buf"; this patch aligns it
      with the other fields.
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      LKML-Reference: <49D5E3EE.70201@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1bbe2a83
    • N
      Update /debug/tracing/README · bc2b6871
      Committed by Nikanth Karthikesan
      Some of the tracers have been renamed, but the in-kernel run-time
      README file was not updated to match.  Update it.
      Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
      LKML-Reference: <200903231158.32151.knikanth@suse.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bc2b6871
    • F
      tracing/ftrace: alloc the started cpumask for the trace file · b0dfa978
      Committed by Frederic Weisbecker
      Impact: fix a crash when reading (cat) the trace file
      
      Currently we use a cpumask to remember, for each cpu, whether a trace
      has occurred there.  It lets us notify the user that a cpu has just
      produced its first trace.
      
      But on latest -tip we have the following crash once we cat the trace
      file:
      
      IP: [<c0270c4a>] print_trace_fmt+0x45/0xe7
      *pde = 00000000
      Oops: 0000 [#1] PREEMPT SMP
      last sysfs file: /sys/class/net/eth0/carrier
      Pid: 3897, comm: cat Not tainted (2.6.29-tip-02825-g0f22972-dirty #81)
      EIP: 0060:[<c0270c4a>] EFLAGS: 00010297 CPU: 0
      EIP is at print_trace_fmt+0x45/0xe7
      EAX: 00000000 EBX: 00000000 ECX: c12d9e98 EDX: ccdb7010
      ESI: d31f4000 EDI: 00322401 EBP: d31f3f10 ESP: d31f3efc
      DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
      Process cat (pid: 3897, ti=d31f2000 task=d3b3cf20 task.ti=d31f2000)
      Stack:
      d31f4080 ccdb7010 d31f4000 d691fe70 ccdb7010 d31f3f24 c0270e5c d31f4000
      d691fe70 d31f4000 d31f3f34 c02718e8 c12d9e98 d691fe70 d31f3f70 c02bfc33
      00001000 09130000 d3b46e00 d691fe98 00000000 00000079 00000001 00000000
      Call Trace:
      [<c0270e5c>] ? print_trace_line+0x170/0x17c
      [<c02718e8>] ? s_show+0xa7/0xbd
      [<c02bfc33>] ? seq_read+0x24a/0x327
      [<c02bf9e9>] ? seq_read+0x0/0x327
      [<c02ab18b>] ? vfs_read+0x86/0xe1
      [<c02ab289>] ? sys_read+0x40/0x65
      [<c0202d8f>] ? sysenter_do_call+0x12/0x3c
      Code: 00 00 00 89 45 ec f7 c7 00 20 00 00 89 55 f0 74 4e f6 86 98 10 00 00 02 74 45 8b 86 8c 10 00 00 8b 9e a8 10 00 00 e8 52 f3 ff ff <0f> a3 03 19 c0 85 c0 75 2b 8b 86 8c 10 00 00 8b 9e a8 10 00 00
      EIP: [<c0270c4a>] print_trace_fmt+0x45/0xe7 SS:ESP 0068:d31f3efc
      CR2: 0000000000000000
      ---[ end trace aa9cf38e5ebed9dd ]---
      
      This is because we allocate the iter->started cpumask in
      tracing_pipe_open but not in tracing_open.
      
      It had not been noticed until now because ring buffer overruns are
      needed to trigger the detection of a cpu buffer starting.
      
      Also, we need a check so the message is not printed for the first
      trace in the file.  (A minimal allocation sketch follows this entry.)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1238619188-6109-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b0dfa978
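      A minimal allocation sketch (illustrative only; the structure below
      stands in for struct trace_iterator and its "started" member named in
      the message):

        #include <linux/cpumask.h>
        #include <linux/errno.h>
        #include <linux/gfp.h>

        struct iter_example {
                cpumask_var_t started;  /* which cpus have traced already */
        };

        static int example_open(struct iter_example *iter)
        {
                if (!alloc_cpumask_var(&iter->started, GFP_KERNEL))
                        return -ENOMEM;
                cpumask_clear(iter->started);   /* no cpu has traced yet */
                return 0;
        }

        static void example_release(struct iter_example *iter)
        {
                free_cpumask_var(iter->started);
        }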
    • Z
      ftrace: Add check of sched_stopped for probe_sched_wakeup · 8bcae09b
      Committed by Zhaolei
      The wakeup tracing in sched_switch does not stop when a user disables
      tracing, because probe_sched_wakeup() is missing the check that
      prevents the wakeup from being traced.  (A sketch of such a guard
      follows this entry.)
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      LKML-Reference: <49D1C543.3010307@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8bcae09b
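      A hedged sketch of the kind of guard the title describes (not the
      literal diff; the probe signature here is simplified and the flag name
      sched_stopped is taken from the commit title):

        #include <linux/sched.h>

        static int sched_stopped;       /* set when the user stops tracing */

        static void probe_sched_wakeup_example(struct task_struct *wakee)
        {
                if (sched_stopped)
                        return;         /* do not record wakeups while stopped */

                /* ... record the wakeup event for task "wakee" ... */
        }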
    • F
      tracing/ftrace: fix missing include string.h · 5f0c6c03
      Committed by Frederic Weisbecker
      Building a kernel with tracing can produce the following error on
      tip/master:
      
      kernel/trace/trace.c:1249: error: implicit declaration of function 'vbin_printf'
      
      We are missing an include of string.h.
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1238160130-7437-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5f0c6c03
    • L
      tracing: fix incorrect return type of ns2usecs() · cf8e3474
      Committed by Lai Jiangshan
      Impact: fix a time-output bug on 32-bit systems
      
      ns2usecs() returns 'long', which is incorrect.
      
      (In i386)
      ...
                <idle>-0     [000]   521.442100: _spin_lock <-tick_do_update_jiffies64
                <idle>-0     [000]   521.442101: do_timer <-tick_do_update_jiffies64
                <idle>-0     [000]   521.442102: update_wall_time <-do_timer
                <idle>-0     [000]   521.442102: update_xtime_cache <-update_wall_time
      ....
      (Apart from the elided lines, it always prints a time of less than
      2200 seconds.)  This is because 'long' is 32 bits on i386 ((1<<31)
      microseconds is about 2200 seconds).
      
      ...
                <idle>-0     [001] 4154502640.134759: rcu_bh_qsctr_inc <-__do_softirq
                <idle>-0     [001] 4154502640.134760: _local_bh_enable <-__do_softirq
                <idle>-0     [001] 4154502640.134761: idle_cpu <-irq_exit
      ...
      (a very large value)  This is because 'long' is a signed 32-bit type
      on i386.
      
      Changes in v2: return 'unsigned long long' instead of 'cycle_t'.
      (A sketch of the resulting helper follows this entry.)
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      LKML-Reference: <49D05D10.4030009@cn.fujitsu.com>
      Reported-by: Li Zefan <lizf@cn.fujitsu.com>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cf8e3474
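      A hedged sketch of the resulting helper (the exact body is an
      approximation, not the diff): the point is the 64-bit return type, so
      microsecond timestamps no longer get truncated to a signed 32-bit
      'long' on i386.

        #include <linux/types.h>
        #include <asm/div64.h>

        static unsigned long long ns2usecs_example(u64 nsec)
        {
                nsec += 500;            /* round to the nearest microsecond */
                do_div(nsec, 1000);     /* 64-bit division, safe on 32-bit */
                return nsec;
        }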
    • S
      tracing: remove CALLER_ADDR2 from wakeup tracer · 301fd748
      Committed by Steven Rostedt
      Maneesh Soni was getting a crash when running the wakeup tracer.
      We debugged it down to the recording of the function with the
      CALLER_ADDR2 macro.  This is used to get the location of the caller
      to schedule.
      
      But the problem arises when schedule is called from assembly.  In the
      case Maneesh hit, retint_careful would call schedule, but retint_careful
      does not set up a proper frame pointer. CALLER_ADDR2 is defined as
      __builtin_return_address(2). This produces the following assembly in
      the wakeup tracer code.
      
         mov    0x0(%rbp),%rcx  <--- get the frame pointer of the caller
         mov    %r14d,%r8d
         mov    0xf2de8e(%rip),%rdi
      
         mov    0x8(%rcx),%rsi  <-- this is __builtin_return_address(1)
         mov    0x28(%rdi,%rax,8),%rbx
      
         mov    (%rcx),%rax  <-- get the frame pointer of the caller's caller
         mov    %r12,%rcx
         mov    0x8(%rax),%rdx <-- this is __builtin_return_address(2)
      
      When reading 0x8(%rax), Maneesh's machine would take a fault.
      The reason is that retint_careful did not set up the return address
      and the content of %rax here was zero.
      
      To verify this, I sent Maneesh a patch to create a frame pointer
      in retint_careful. He ran the test again but this time he would take
      the same type of fault from sysret_careful. The retint_careful was no
      longer an issue, but there are other callers that still have issues.
      
      Instead of adding frame pointers for all callers of schedule (possibly
      in all archs), it is much safer to simply not use CALLER_ADDR2.  This
      loses the knowledge of what called schedule, but the function tracer
      will help there if needed.  (The macro in question is sketched after
      this entry.)
      Reported-by: Maneesh Soni <maneesh@in.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      301fd748
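      For reference, the macros involved (paraphrased from their usual
      definition under CONFIG_FRAME_POINTER, not quoted from this patch):

        /* __builtin_return_address(n) walks saved frame pointers for n > 0,
         * so it faults when a caller (like the assembly paths above) never
         * set one up. */
        #define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
        #define CALLER_ADDR1 ((unsigned long)__builtin_return_address(1))
        #define CALLER_ADDR2 ((unsigned long)__builtin_return_address(2))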
    • P
      perf_counter: minimize context time updates · bce379bf
      Committed by Peter Zijlstra
      Push the update_context_time() calls up the stack so that we get fewer
      invocations and thereby less noisy output:
      
      before:
      
       # ./perfstat -e 1:0 -e 1:1 -e 1:1 -e 1:1 -l ls > /dev/null
      
       Performance counter stats for 'ls':
      
            10.163691  cpu clock ticks      (msecs)  (scaled from 98.94%)
            10.215360  task clock ticks     (msecs)  (scaled from 98.18%)
            10.185549  task clock ticks     (msecs)  (scaled from 98.53%)
            10.183581  task clock ticks     (msecs)  (scaled from 98.71%)
      
       Wall-clock time elapsed:    11.912858 msecs
      
      after:
      
       # ./perfstat -e 1:0 -e 1:1 -e 1:1 -e 1:1 -l ls > /dev/null
      
       Performance counter stats for 'ls':
      
             9.316630  cpu clock ticks      (msecs)
             9.280789  task clock ticks     (msecs)
             9.280789  task clock ticks     (msecs)
             9.280789  task clock ticks     (msecs)
      
       Wall-clock time elapsed:     9.574872 msecs
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.618876874@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      bce379bf
    • P
      perf_counter: remove rq->lock usage · 849691a6
      Committed by Peter Zijlstra
      Now that all the task runtime clock users are gone, remove the ugly
      rq->lock usage from perf counters, which solves the nasty deadlock
      seen when a software task clock counter was read from an NMI overflow
      context.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.531137582@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      849691a6
    • P
      perf_counter: rework the task clock software counter · a39d6f25
      Committed by Peter Zijlstra
      Rework the task clock software counter to use the context time instead
      of the task runtime clock; this removes the last such user.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.445450972@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a39d6f25
    • P
      perf_counter: rework context time · 4af4998b
      Committed by Peter Zijlstra
      Since perf_counter_context is switched along with tasks, we can
      maintain the context time without using the task runtime clock.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.353552838@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4af4998b
    • P
      perf_counter: change event definition · 4c9e2542
      Committed by Peter Zijlstra
      Currently the definition of an event is slightly ambiguous. We have
      wakeup events, for poll() and SIGIO, which are either generated
      when a record crosses a page boundary (hw_events.wakeup_events == 0),
      or every wakeup_events new records.
      
      Now a record can be either a counter overflow record, or a number of
      different things, like the mmap PROT_EXEC region notifications.
      
      Then there is the PERF_COUNTER_IOC_REFRESH event limit, which only
      considers counter overflows.
      
      This patch changes the wakeup_events and SIGIO notification to only
      consider overflow events.  Furthermore, it changes the SIGIO
      notification to report SIGHUP when the event limit is reached and the
      counter is about to be disabled.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.266679874@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4c9e2542
    • P
      perf_counter: counter overflow limit · 79f14641
      Committed by Peter Zijlstra
      Provide a means to auto-disable the counter after 'n' overflow events.
      
      Create the counter with hw_event.disabled = 1, and then issue an
      ioctl(fd, PERF_COUNTER_IOC_REFRESH, n); to set the limit and enable
      the counter.  (A brief userspace sketch follows this entry.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.083139737@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      79f14641
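      A hedged userspace sketch of the usage pattern described above.  Only
      hw_event.disabled and PERF_COUNTER_IOC_REFRESH come from the commit
      message; the header name below is the perf_counter-era one (later
      renamed) and is an assumption about the build environment.

        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <linux/perf_counter.h>  /* assumed: defines PERF_COUNTER_IOC_REFRESH */

        /* fd is a counter created with hw_event.disabled = 1 */
        static void arm_for_n_overflows(int fd, int n)
        {
                /* sets the overflow limit to n and enables the counter;
                 * after n overflow events the kernel auto-disables it */
                if (ioctl(fd, PERF_COUNTER_IOC_REFRESH, n) < 0)
                        perror("PERF_COUNTER_IOC_REFRESH");
        }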
    • P
      perf_counter: PERF_RECORD_TIME · 339f7c90
      Committed by Peter Zijlstra
      By popular request, provide means to log a timestamp along with the
      counter overflow event.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.024173282@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      339f7c90
    • P
      perf_counter: fix the mlock accounting · ebb3c4c4
      Committed by Peter Zijlstra
      Reading through the code I saw that I forgot to finish the mlock
      accounting.  Do so now.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094517.899767331@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ebb3c4c4
    • P
      perf_counter: theres more to overflow than writing events · f6c7d5fe
      Committed by Peter Zijlstra
      Prepare for more generic overflow handling. The new perf_counter_overflow()
      method will handle the generic bits of the counter overflow and can
      return a non-zero value, in which case the counter should be (soft)
      disabled, so that it won't count until it is properly disabled.
      
      XXX: do powerpc and swcounter
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094517.812109629@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f6c7d5fe
    • P
      perf_counter: generalize pending infrastructure · 671dec5d
      Committed by Peter Zijlstra
      Prepare the pending infrastructure to do more than wakeups.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094517.634732847@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      671dec5d
    • P
      perf_counter: SIGIO support · 3c446b3d
      Committed by Peter Zijlstra
      Provide support for fcntl() I/O availability signals.  (A short
      userspace sketch follows this entry.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094517.579788800@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3c446b3d
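      A short userspace sketch of the standard fcntl() setup that this
      support enables (plain POSIX calls, not code from the patch; fd is
      assumed to be a counter file descriptor):

        #include <fcntl.h>
        #include <signal.h>
        #include <unistd.h>

        static void on_sigio(int sig)
        {
                (void)sig;      /* counter data is ready to be consumed */
        }

        static int enable_sigio(int fd)
        {
                signal(SIGIO, on_sigio);
                if (fcntl(fd, F_SETOWN, getpid()) < 0)  /* who receives SIGIO */
                        return -1;
                return fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC);
        }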
    • P
      perf_counter: add more context information · 9c03d88e
      Committed by Peter Zijlstra
      Change the callchain context entries to u16, so as to gain some space.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094517.457320003@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9c03d88e
    • R
      Revert "module: remove the SHF_ALLOC flag on the __versions section." · 2e45e777
      Committed by Rusty Russell
      This reverts commit 9cb610d8.
      
      This was an impressively stupid patch.  Firstly, we reset the SHF_ALLOC
      flag lower down in the same function, so the patch was useless.  Even
      better, find_sec() ignores sections with SHF_ALLOC not set, so
      it breaks CONFIG_MODVERSIONS=y with CONFIG_MODULE_FORCE_LOAD=n, which
      refuses to load the module since it can't find the __versions section.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      2e45e777
    • O
      exit_notify: kill the wrong capable(CAP_KILL) check · 432870da
      Committed by Oleg Nesterov
      The CAP_KILL check in exit_notify() looks just wrong, kill it.
      
      Whatever logic we use to reset ->exit_signal, a malicious user can
      bypass it by exec'ing a setuid application before exiting.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Serge Hallyn <serue@us.ibm.com>
      Acked-by: Roland McGrath <roland@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      432870da
    • L
      kernel/sysctl.c: avoid annoying warnings · cd5f9a4c
      Committed by Linus Torvalds
      Some of the limit constants are used only under certain complex
      configuration dependencies, yet it is not worth making these simple
      variables depend on those configuration details.  Just mark them as
      potentially unused, and avoid the warning.  (A one-line sketch follows
      this entry.)
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cd5f9a4c
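      An illustrative one-liner (the variable name is made up; the attribute
      is the usual way to express "possibly unused" in kernel code and is
      assumed rather than quoted from the diff):

        #include <linux/compiler.h>

        /* silences "defined but not used" when the constant is only
         * referenced under certain config options */
        static int __maybe_unused example_limit = 100;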
  3. 06 April 2009: 4 commits