1. 14 May 2010, 1 commit
  2. 11 May 2010, 1 commit
    • C
      sched, wait: Use wrapper functions · a93d2f17
      Changli Gao authored
      epoll should not touch flags in wait_queue_t. This patch introduces a new
      function, __add_wait_queue_exclusive(), for users who use a wait queue as
      a LIFO queue.
      
      __add_wait_queue_tail_exclusive() is introduced as a replacement for
      add_wait_queue_exclusive_locked(). remove_wait_queue_locked() is removed,
      as it duplicates __remove_wait_queue(), is disliked by users, and has
      fewer users.
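
      A minimal sketch of the two helpers, based on the description above (the
      names are from the commit; the bodies are an assumed shape): exclusive
      waiters added at the head of the list are woken in LIFO order.

      	/* Wake-one semantics, added at the head: LIFO wake order. */
      	static inline void __add_wait_queue_exclusive(wait_queue_head_t *q,
      						      wait_queue_t *wait)
      	{
      		wait->flags |= WQ_FLAG_EXCLUSIVE;
      		__add_wait_queue(q, wait);
      	}

      	/* Wake-one semantics with the usual FIFO (tail) ordering. */
      	static inline void __add_wait_queue_tail_exclusive(wait_queue_head_t *q,
      							   wait_queue_t *wait)
      	{
      		wait->flags |= WQ_FLAG_EXCLUSIVE;
      		__add_wait_queue_tail(q, wait);
      	}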
      Signed-off-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Paul Menage <menage@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Cc: <containers@lists.linux-foundation.org>
      LKML-Reference: <1273214006-2979-1-git-send-email-xiaosuo@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 10 May 2010, 7 commits
  4. 08 May 2010, 1 commit
    • T
      cpu_stop: add dummy implementation for UP · bbf1bb3e
      Tejun Heo authored
      When !CONFIG_SMP, the cpu_stop functions weren't defined at all, which
      could lead to build failures if UP code uses the cpu_stop facility.  Add
      a dummy cpu_stop implementation for UP.  The waiting variants execute
      the work function directly with preemption disabled, and
      stop_one_cpu_nowait() schedules a workqueue work item.
      
      The Makefile and ifdefs around the stop_machine implementation are
      updated to accommodate the CONFIG_SMP && !CONFIG_STOP_MACHINE case.
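
      A sketch of the UP stubs described above (shapes assumed from the
      description; cpu_stop_fn_t comes from the cpu_stop patch below):

      	/* UP variant of the work descriptor (assumed layout). */
      	struct cpu_stop_work {
      		struct work_struct	work;
      		cpu_stop_fn_t		fn;
      		void			*arg;
      	};

      	/* Waiting variant: run the callback here, preemption off. */
      	static inline int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn,
      				       void *arg)
      	{
      		int ret = -ENOENT;

      		preempt_disable();
      		if (cpu == smp_processor_id())
      			ret = fn(arg);
      		preempt_enable();
      		return ret;
      	}

      	/* Assumed helper: runs the deferred callback, preemption off. */
      	static void stop_one_cpu_nowait_workfn(struct work_struct *work)
      	{
      		struct cpu_stop_work *stwork =
      			container_of(work, struct cpu_stop_work, work);

      		preempt_disable();
      		stwork->fn(stwork->arg);
      		preempt_enable();
      	}

      	/* Nowait variant: defer the callback to a workqueue. */
      	static inline void stop_one_cpu_nowait(unsigned int cpu,
      					       cpu_stop_fn_t fn, void *arg,
      					       struct cpu_stop_work *work_buf)
      	{
      		if (cpu != smp_processor_id())
      			return;
      		work_buf->fn = fn;
      		work_buf->arg = arg;
      		INIT_WORK(&work_buf->work, stop_one_cpu_nowait_workfn);
      		schedule_work(&work_buf->work);
      	}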
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Ingo Molnar <mingo@elte.hu>
  5. 07 May 2010, 7 commits
    • P
      sched: Remove rq argument to the tracepoints · 27a9da65
      Peter Zijlstra authored
      struct rq isn't visible outside of sched.o, so it is nearly useless to
      expose the pointer; there are also no users of it, so remove it.
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1272997616.1642.207.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • P
      rcu: need barrier() in UP synchronize_sched_expedited() · fc390cde
      Paul E. McKenney authored
      If synchronize_sched_expedited() is ever to be called from within
      kernel/sched.c in a !SMP PREEMPT kernel, the !SMP implementation needs
      a barrier().
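
      A sketch of the fixed !SMP stub (the function name is from the commit;
      the #ifdef placement is assumed):

      	#ifndef CONFIG_SMP
      	/* UP: there are no other cpus to synchronize with, but the
      	 * compiler must still not reorder accesses across the call. */
      	void synchronize_sched_expedited(void)
      	{
      		barrier();
      	}
      	#endif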
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • P
      sched: correctly place paranoia memory barriers in synchronize_sched_expedited() · cc631fb7
      Paul E. McKenney authored
      The memory barriers must be in the SMP case, not in the !SMP case.
      Also add a barrier after the atomic_inc() in order to ensure that
      other CPUs see post-synchronize_sched_expedited() actions as following
      the expedited grace period.
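
      A sketch of the placement described above (the counter name is from this
      patch series; the exact barrier primitive is an assumption, shown here as
      a plain smp_mb()):

      	void synchronize_sched_expedited(void)
      	{
      		/* ... force a context switch on every online cpu ... */
      		atomic_inc(&synchronize_sched_expedited_count);
      		smp_mb();	/* post-GP actions must be seen after the GP */
      		put_online_cpus();
      	}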
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • T
      sched: kill paranoia check in synchronize_sched_expedited() · 94458d5e
      Tejun Heo authored
      The paranoid check which verifies that the cpu_stop callback is
      actually called on all online cpus is completely superfluous.  It's
      guaranteed by the cpu_stop facility, and if it didn't work as advertised
      other things would go horribly wrong and trying to recover using
      synchronize_sched() wouldn't be very meaningful.
      
      Kill the paranoid check.  Removal of this feature is done as a
      separate step so that it can serve as a bisection point if something
      actually goes wrong.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: Josh Triplett <josh@freedesktop.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
    • T
      sched: replace migration_thread with cpu_stop · 969c7921
      Tejun Heo authored
      Currently migration_thread is serving three purposes - migration
      pusher, context to execute active_load_balance() and forced context
      switcher for expedited RCU synchronize_sched.  All three roles are
      hardcoded into migration_thread() and determining which job is
      scheduled is slightly messy.
      
      This patch kills migration_thread and replaces all three uses with
      cpu_stop.  The three different roles of migration_thread() are
      split into three separate cpu_stop callbacks -
      migration_cpu_stop(), active_load_balance_cpu_stop() and
      synchronize_sched_expedited_cpu_stop() - and each use case now simply
      asks cpu_stop to execute the callback as necessary.
      
      synchronize_sched_expedited() was implemented with private
      preallocated resources and custom multi-cpu queueing and waiting
      logic, both of which are now provided by cpu_stop.
      synchronize_sched_expedited_count is made atomic and all other shared
      resources, along with the mutex, are dropped.
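
      A simplified sketch of the cpu_stop-based implementation (names are from
      the commit; the retry logic against concurrent callers is elided):

      	static atomic_t synchronize_sched_expedited_count = ATOMIC_INIT(0);

      	/* Runs on every cpu at maximum priority, forcing each cpu
      	 * through a context switch. */
      	static int synchronize_sched_expedited_cpu_stop(void *data)
      	{
      		smp_mb();	/* order prior accesses before the switch */
      		return 0;
      	}

      	void synchronize_sched_expedited(void)
      	{
      		get_online_cpus();
      		stop_cpus(cpu_online_mask,
      			  synchronize_sched_expedited_cpu_stop, NULL);
      		atomic_inc(&synchronize_sched_expedited_count);
      		put_online_cpus();
      	}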
      
      synchronize_sched_expedited() also implemented a check to detect cases
      where not all the callbacks got executed on their assigned cpus and
      to fall back to synchronize_sched().  If called with cpu hotplug blocked,
      cpu_stop already guarantees that, and the condition cannot happen;
      otherwise, stop_machine() would break.  However, this patch preserves
      the paranoid check, using a cpumask to record on which cpus the stopper
      ran, so that it can serve as a bisection point if something actually
      goes wrong there.
      
      Because the internal execution state is no longer visible,
      rcu_expedited_torture_stats() is removed.
      
      This patch also renames the cpu_stop threads from "stopper/%d" to
      "migration/%d".  The names of these threads ultimately don't matter,
      and there's no reason to make unnecessary userland-visible changes.
      
      With this patch applied, stop_machine() and sched now share the same
      resources.  stop_machine() is faster without wasting any resources and
      sched migration users are much cleaner.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: Josh Triplett <josh@freedesktop.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
    • T
      stop_machine: reimplement using cpu_stop · 3fc1f1e2
      Tejun Heo authored
      Reimplement stop_machine using cpu_stop.  As cpu stoppers are
      guaranteed to be available for all online cpus,
      stop_machine_create/destroy() are no longer necessary and are removed.
      
      With resource management and synchronization handled by cpu_stop, the
      new implementation is much simpler.  Asking cpu_stop to execute
      the stop_cpu() state machine on all online cpus with cpu hotplug
      disabled is enough.
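
      A simplified sketch of the new shape (based on the description; the
      struct contents and the stop_machine_cpu_stop() callback are elided):

      	int __stop_machine(int (*fn)(void *), void *data,
      			   const struct cpumask *cpus)
      	{
      		struct stop_machine_data smdata = {
      			.fn = fn, .data = data,
      			.num_threads = num_online_cpus(),
      			.active_cpus = cpus,
      		};

      		/* Run the stop_cpu() state machine on all online cpus. */
      		set_state(&smdata, STOPMACHINE_PREPARE);
      		return stop_cpus(cpu_online_mask, stop_machine_cpu_stop,
      				 &smdata);
      	}

      	int stop_machine(int (*fn)(void *), void *data,
      			 const struct cpumask *cpus)
      	{
      		int ret;

      		/* No cpus may come or go while the machine is stopped. */
      		get_online_cpus();
      		ret = __stop_machine(fn, data, cpus);
      		put_online_cpus();
      		return ret;
      	}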
      
      stop_machine itself doesn't need to manage any global resources
      anymore, so all per-instance information is rolled into struct
      stop_machine_data and the mutex and all static data variables are
      removed.
      
      The previous implementation created and destroyed RT workqueues as
      necessary, which made stop_machine() calls highly expensive on very
      large machines.  According to Dimitri Sivanich, avoiding the dynamic
      creation/destruction makes booting more than twice as fast on very
      large machines.  cpu_stop resources are preallocated for all online
      cpus and should have the same effect.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
    • T
      cpu_stop: implement stop_cpu[s]() · 1142d810
      Tejun Heo authored
      Implement a simplistic per-cpu maximum-priority cpu monopolization
      mechanism.  A non-sleeping callback can be scheduled to run on one or
      multiple cpus with maximum priority, monopolizing those cpus.  This is
      primarily to replace and unify the RT workqueue usage in stop_machine and
      the scheduler's migration_thread, which currently serves multiple
      purposes.
      
      Four functions are provided - stop_one_cpu(), stop_one_cpu_nowait(),
      stop_cpus() and try_stop_cpus().
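
      A sketch of the resulting API (the four declarations follow the commit;
      the struct layout is partly assumed):

      	typedef int (*cpu_stop_fn_t)(void *arg);

      	struct cpu_stop_work {
      		struct list_head	list;	/* stopper's work list */
      		cpu_stop_fn_t		fn;
      		void			*arg;
      		struct cpu_stop_done	*done;
      	};

      	/* Run fn(arg) on one cpu and wait for completion. */
      	int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg);

      	/* Queue fn(arg) on one cpu without waiting; the caller
      	 * provides the work buffer. */
      	void stop_one_cpu_nowait(unsigned int cpu, cpu_stop_fn_t fn,
      				 void *arg, struct cpu_stop_work *work_buf);

      	/* Run fn(arg) on all cpus in @cpumask; try_stop_cpus() returns
      	 * -EAGAIN instead of waiting if the stoppers are busy. */
      	int stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn,
      		      void *arg);
      	int try_stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn,
      			  void *arg);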
      
      This is to allow clean sharing of resources among stop_cpu and all the
      migration thread users.  One stopper thread per cpu is created which
      is currently named "stopper/CPU".  This will eventually replace the
      migration thread and take on its name.
      
      * This facility was originally named cpuhog and lived in separate
        files, but Peter Zijlstra nacked the name, so it was renamed to
        cpu_stop and moved into stop_machine.c.
      
      * Better reporting of preemption leak as per Peter's suggestion.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
  6. 06 May 2010, 1 commit
  7. 05 May 2010, 1 commit
  8. 28 Apr 2010, 5 commits
    • S
      tracing: Fix sleep time function profiling · 37e44bc5
      Steven Rostedt authored
      When sleep_time is off, the function profiler ignores the time that a task
      is scheduled out. When the task is scheduled out, a timestamp is taken.
      When the task is scheduled back in, the timestamp is compared to the
      current time and the saved calltimes are adjusted accordingly.
      
      But when stopping the function profiler, the sched switch hook that
      does this adjustment was stopped before shutting down the tracer.
      This allowed some tasks to not get their timestamps set when they
      scheduled out. When the function profiler started again, this would
      skew the times of the scheduler functions.
      
      This patch moves the stopping of the sched switch hook to after the
      function profiler is stopped. It also ignores calltimes of zero, which
      may happen on start up.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • C
      tracing: Show sample std dev in function profiling · e330b3bc
      Chase Douglas authored
      When combined with function graph tracing, the ftrace function profiler
      also prints the average run time of functions. While this gives us some
      good information, it doesn't tell us anything about the variance of the
      run times of the function. This change prints out the s^2 sample
      standard deviation alongside the average.
      
      This change adds one entry to the profile record structure. This
      increases the memory footprint of the function profiler by 1/3 on a
      32-bit system, and by 1/5 on a 64-bit system when function graphing is
      enabled, though the memory is only allocated when the profiler is turned
      on. During profiling, one extra line of code adds the squared
      calltime to the new record entry, so this should not adversely affect
      performance.
      
      Note that the square of the sample standard deviation is printed because
      there is no sqrt implementation for unsigned long long in the kernel.
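
      For reference, a sketch of deriving s^2 from running sums in integer-only
      arithmetic (the helper name and shape are illustrative, not the commit's
      code):

      	/* s^2 = (sum_of_squares - n * mean^2) / (n - 1) */
      	static unsigned long long
      	sample_variance(unsigned long long sum,
      			unsigned long long sum_sq, unsigned int n)
      	{
      		unsigned long long mean = sum, s2;

      		if (n <= 1)
      			return 0;
      		do_div(mean, n);		/* mean = sum / n */
      		s2 = sum_sq - (unsigned long long)n * mean * mean;
      		do_div(s2, n - 1);		/* sample, not population */
      		return s2;			/* no sqrt: report s^2 */
      	}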
      Signed-off-by: Chase Douglas <chase.douglas@canonical.com>
      LKML-Reference: <1272304925-2436-1-git-send-email-chase.douglas@canonical.com>
      
      [ fixed comment about ns^2 -> us^2 conversion ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • S
      ring-buffer: Make benchmark handle missed events · a838b2e6
      Steven Rostedt authored
      With the addition of the "missed events" flag that is stored in the
      commit field of the ring buffer page, the ring_buffer_benchmark
      was not updated to handle this. If events are missed, the
      missed-events flag is set in the ring buffer page; the benchmark
      then counts that flag as part of the size of the page and hits the BUG()
      when it tries to read beyond the page.
      
      The solution is simply to have the ring buffer benchmark mask off
      the extra bits.
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • D
      ring-buffer: Make non-consuming read less expensive with lots of cpus. · 72c9ddfd
      David Miller authored
      When performing a non-consuming read, a synchronize_sched() is
      performed once for every cpu which is actively tracing.
      
      This is very expensive, and can make it take several seconds to open
      up the 'trace' file with lots of cpus.
      
      Only one synchronize_sched() call is actually necessary.  What is
      desired is for all cpus to see the disabling state change.  So we
      transform the existing sequence:
      
      	for_each_cpu() {
      		ring_buffer_read_start();
      	}
      
      where each ring_buffer_read_start() call performs a synchronize_sched(),
      into the following:
      
      	for_each_cpu() {
      		ring_buffer_read_prepare();
      	}
      	ring_buffer_read_prepare_sync();
      	for_each_cpu() {
      		ring_buffer_read_start();
      	}
      
      wherein only the single ring_buffer_read_prepare_sync() call needs to
      do the synchronize_sched().
      
      The first phase, via ring_buffer_read_prepare(), allocates the 'iter'
      memory and increments ->record_disabled.
      
      In the second phase, ring_buffer_read_prepare_sync() makes sure this
      ->record_disabled state is fully visible to all cpus.
      
      In the third and final phase, the ring_buffer_read_start() calls reset
      the 'iter' objects allocated in the first phase, since we now know that
      none of the cpus are adding trace entries any more.
      
      This makes opening the 'trace' file nearly instantaneous on a
      sparc64 Niagara2 box with 128 cpus tracing.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      LKML-Reference: <20100420.154711.11246950.davem@davemloft.net>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • J
      tracing: Add graph output support for irqsoff tracer · 62b915f1
      Jiri Olsa authored
      Add function graph output to the irqsoff tracer.
      
      The graph output is enabled by setting the new 'display-graph' trace option.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      LKML-Reference: <1270227683-14631-4-git-send-email-jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  9. 27 Apr 2010, 2 commits
  10. 25 Apr 2010, 1 commit
  11. 23 Apr 2010, 3 commits
    • S
      sched: Fix select_idle_sibling() logic in select_task_rq_fair() · 99bd5e2f
      Suresh Siddha authored
      Issues in the current select_idle_sibling() logic in select_task_rq_fair()
      in the context of a task wake-up:
      
      a) Once we select the idle sibling, we use that domain (spanning the cpu that
         the task is currently woken up on and the idle sibling that we found) in our
         wake_affine() decisions. This domain is completely different from the
         domain we are supposed to use: the one that spans the cpu the task is
         currently woken up on and the cpu where the task previously ran.
      
      b) We do the select_idle_sibling() check only for the cpu that the task is
         currently woken up on. If select_task_rq_fair() selects the previously-run
         cpu for waking the task, doing a select_idle_sibling() check
         for that cpu also helps, and we don't do this currently.
      
      c) In the scenarios where the cpu that the task is woken up on is busy but
         its HT siblings are idle, we select the idle HT sibling for the wake-up
         instead of a core where the task previously ran and which is currently
         completely idle. That is, we are not taking decisions based on
         wake_affine() but directly selecting an idle sibling, which can cause
         an imbalance at the SMT/MC level that will be corrected later by the
         periodic load balancer.
      
      Fix this by first going through the load-imbalance calculations using
      wake_affine() and, once we decide between the woken-up cpu and the
      previously-ran cpu, then choosing a possible idle sibling on which to
      wake the task.
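
      A simplified sketch of the fixed ordering (illustrative only; the real
      select_task_rq_fair() carries much more context):

      	static int select_rq_sketch(struct sched_domain *sd,
      				    struct task_struct *p,
      				    int cpu, int prev_cpu, int sync)
      	{
      		int target = prev_cpu;

      		/* 1. Settle the imbalance question first: affine
      		 *    (waking) cpu vs. previously-ran cpu. */
      		if (wake_affine(sd, p, sync))
      			target = cpu;

      		/* 2. Only then look for an idle SMT/MC sibling of
      		 *    the chosen cpu. */
      		return select_idle_sibling(p, sd, target);
      	}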
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1270079265.7835.8.camel@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • P
      sched: Pre-compute cpumask_weight(sched_domain_span(sd)) · 669c55e9
      Peter Zijlstra authored
      Dave reported that his large SPARC machines spend lots of time in
      hweight64(); try to optimize away some of those needless cpumask_weight()
      invocations (especially with large offstack cpumasks these are very
      expensive indeed).
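
      The idea, sketched (span_weight is the field added by the commit; the
      surrounding code is elided):

      	/* At domain build time, pay the popcount cost once: */
      	sd->span_weight = cpumask_weight(sched_domain_span(sd));

      	/* Hot paths (wake-up, balancing) then read the cached value
      	 * instead of running hweight64() over every word of a
      	 * possibly offstack mask: */
      	unsigned int weight = sd->span_weight;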
      Reported-by: David Miller <davem@davemloft.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • P
      sched: Cure load average vs NO_HZ woes · 74f5187a
      Peter Zijlstra authored
      Chase reported that due to us decrementing calc_load_task prematurely
      (before the next LOAD_FREQ sample), the load average could be skewed
      by as much as the number of CPUs in the machine.
      
      This patch, based on Chase's patch, cures the problem by keeping the
      delta of the CPU going into NO_HZ idle separately and folding that in
      on the next LOAD_FREQ update.
      
      This restores the balance and we get strict LOAD_FREQ period samples.
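
      A sketch of the mechanism (function and variable names as introduced by
      the commit, bodies simplified): a cpu entering NO_HZ idle parks its delta
      in a separate atomic, which is folded back in on the next update.

      	static atomic_long_t calc_load_tasks_idle;

      	/* Called when this cpu goes NO_HZ idle: park its delta. */
      	static void calc_load_account_idle(struct rq *this_rq)
      	{
      		long delta = calc_load_fold_active(this_rq);

      		if (delta)
      			atomic_long_add(delta, &calc_load_tasks_idle);
      	}

      	/* Called from the next LOAD_FREQ update: collect the deltas. */
      	static long calc_load_fold_idle(void)
      	{
      		long delta = 0;

      		if (atomic_long_read(&calc_load_tasks_idle))
      			delta = atomic_long_xchg(&calc_load_tasks_idle, 0);
      		return delta;
      	}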
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Chase Douglas <chase.douglas@canonical.com>
      LKML-Reference: <1271934490.1776.343.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 22 Apr 2010, 2 commits
    • D
      CRED: Fix a race in creds_are_invalid() in credentials debugging · e134d200
      David Howells authored
      creds_are_invalid() reads both cred->usage and cred->subscribers and then
      compares them to make sure the number of processes subscribed to a cred struct
      never exceeds the refcount of that cred struct.
      
      The problem is that this can cause a race with both copy_creds() and
      exit_creds() as the two counters, whilst they are of atomic_t type, are only
      atomic with respect to themselves, and not atomic with respect to each other.
      
      This means that creds_are_invalid() can read the values on one CPU whilst
      they're being modified on another CPU, and so can observe an evolving state
      in which the subscribers count is momentarily greater than the usage count.
      
      Switching the order in which the counts are read cannot help, so the thing to
      do is to remove that particular check.
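
      A sketch of the check being removed (the shape is assumed from the
      description above; this code is only active under
      CONFIG_DEBUG_CREDENTIALS):

      	bool creds_are_invalid(const struct cred *cred)
      	{
      		if (cred->magic != CRED_MAGIC)
      			return true;
      	#if 0	/* racy: the two atomics are not atomic as a pair */
      		if (atomic_read(&cred->usage) <
      		    atomic_read(&cred->subscribers))
      			return true;
      	#endif
      		/* ... remaining, race-free sanity checks ... */
      		return false;
      	}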
      
      I had considered rechecking the values to see if they're in flux if the test
      fails, but I can't guarantee they won't appear the same, even if they've
      changed several times in the meantime.
      
      Note that this can only happen if CONFIG_DEBUG_CREDENTIALS is enabled.
      
      The problem is only likely to occur with multithreaded programs, and can be
      tested by the tst-eintr1 program from glibc's "make check".  The symptoms look
      like:
      
      	CRED: Invalid credentials
      	CRED: At include/linux/cred.h:240
      	CRED: Specified credentials: ffff88003dda5878 [real][eff]
      	CRED: ->magic=43736564, put_addr=(null)
      	CRED: ->usage=766, subscr=766
      	CRED: ->*uid = { 0,0,0,0 }
      	CRED: ->*gid = { 0,0,0,0 }
      	CRED: ->security is ffff88003d72f538
      	CRED: ->security {359, 359}
      	------------[ cut here ]------------
      	kernel BUG at kernel/cred.c:850!
      	...
      	RIP: 0010:[<ffffffff81049889>]  [<ffffffff81049889>] __invalid_creds+0x4e/0x52
      	...
      	Call Trace:
      	 [<ffffffff8104a37b>] copy_creds+0x6b/0x23f
      
      Note the ->usage=766 and subscr=766.  The values appear the same because
      they've been re-read since the check was made.
      Reported-by: Roland McGrath <roland@redhat.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: James Morris <jmorris@namei.org>
    • F
      tracing: Dump either the oops's cpu source or all cpus buffers · cecbca96
      Frederic Weisbecker authored
      The ftrace_dump_on_oops kernel parameter, sysctl and sysrq let one
      dump every cpu's buffer when an oops or panic happens.
      
      That's nice when you have few cpus, but it may take ages if you have
      many, and you can miss the real origin of the problem among all the
      cpu traces.
      
      Sometimes all you need is to dump the buffer of the cpu that triggered
      the oops; most of the time that is our main interest.
      
      This patch modifies ftrace_dump_on_oops to handle this choice.
      
      The ftrace_dump_on_oops kernel parameter, when it comes alone, has
      the same behaviour as before. But ftrace_dump_on_oops=orig_cpu
      will dump only the buffer of the cpu that oopsed.
      
      Similarly, sysctl kernel.ftrace_dump_on_oops=1 and
      echo 1 > /proc/sys/kernel/ftrace_dump_on_oops keep their previous
      behaviour. Setting the value to 2 selects the originating-cpu dump mode.
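
      A sketch of the resulting dump modes as an enum (the 1 = all buffers,
      2 = originating cpu mapping is from the description; the exact names are
      assumed):

      	enum ftrace_dump_mode {
      		DUMP_NONE,	/* 0: dumping disabled */
      		DUMP_ALL,	/* 1: dump every cpu's buffer (old behaviour) */
      		DUMP_ORIG,	/* 2: dump only the cpu that oopsed */
      	};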
      
      v2: Fix double setup
      v3: Fix spelling issues reported by Randy Dunlap
      v4: Also update __ftrace_dump in the selftests
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
  13. 21 Apr 2010, 1 commit
    • D
      CRED: Fix double free in prepare_usermodehelper_creds() error handling · eff30363
      David Howells authored
      Patch 570b8fb5:
      
      	Author: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      	Date:   Tue Mar 30 00:04:00 2010 +0100
      	Subject: CRED: Fix memory leak in error handling
      
      attempts to fix a memory leak in the error handling by making the offending
      return statement into a jump down to the bottom of the function where a
      kfree(tgcred) is inserted.
      
      This is, however, incorrect, as it does a kfree() after doing put_cred() if
      security_prepare_creds() fails.  That results in a double free when 'error'
      is jumped to, as put_cred() will also attempt to free the new tgcred record
      by virtue of it being pointed to by the new cred record.
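
      A sketch of the ownership rule behind the fix (shapes assumed, not the
      literal patch): tgcred may be kfree()d directly only before it is linked
      into the new cred; afterwards put_cred() frees it.

      	new = prepare_creds_sketch();	/* illustrative allocation */
      	if (!new) {
      		kfree(tgcred);		/* not yet linked: free directly */
      		return NULL;
      	}
      	new->tgcred = tgcred;		/* ownership moves to 'new' */

      	if (security_prepare_creds(new, old, GFP_KERNEL) < 0) {
      		put_cred(new);		/* already frees tgcred ... */
      		return NULL;		/* ... so no separate kfree() */
      	}
      	return new;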
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: James Morris <jmorris@namei.org>
  14. 19 Apr 2010, 1 commit
  15. 15 Apr 2010, 1 commit
  16. 11 Apr 2010, 1 commit
  17. 07 Apr 2010, 1 commit
  18. 06 Apr 2010, 3 commits
    • A
      sched: Fix sched_getaffinity() · 84fba5ec
      Anton Blanchard authored
      taskset on 2.6.34-rc3 fails on one of my ppc64 test boxes with
      the following error:
      
        sched_getaffinity(0, 16, 0x10029650030) = -1 EINVAL (Invalid argument)
      
      This box has 128 threads and 16 bytes is enough to cover it.
      
      Commit cd3d8031 (sched:
      sched_getaffinity(): Allow less than NR_CPUS length) compares
      these 16 bytes against nr_cpu_ids.
      
      Fix it by comparing nr_cpu_ids to the number of bits in the
      cpumask we pass in.
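
      A sketch of the corrected check (the exact expression in the patch may
      differ; the point is to compare bits, not bytes, against nr_cpu_ids):

      	/* len is the size in bytes of the user-supplied cpumask. */
      	if (len * BITS_PER_BYTE < nr_cpu_ids)
      		return -EINVAL;	/* too small to hold every cpu */
      	if (len & (sizeof(unsigned long) - 1))
      		return -EINVAL;	/* must be whole words */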
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Sharyathi Nagesh <sharyath@in.ibm.com>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jack Steiner <steiner@sgi.com>
      Cc: Russ Anderson <rja@sgi.com>
      Cc: Mike Travis <travis@sgi.com>
      LKML-Reference: <20100406070218.GM5594@kryten>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • N
      Fix up possibly racy module refcounting · 5fbfb18d
      Nick Piggin authored
      Module refcounting is implemented with a per-cpu counter for speed.
      However, there is a race when tallying the counter where a reference may
      be taken by one CPU and released by another.  The reference count summation
      may then see the decrement without having seen the previous increment,
      leading to a lower count than expected.  A module whose actual reference
      count never drops below 1 may return a reference count of 0 due to
      this race.
      
      Module removal generally runs under stop_machine, which prevents this
      race from causing bugs due to removal of in-use modules.  However, there
      are other real bugs in module.c code and driver code (module_refcount is
      exported) where the callers do not run under stop_machine.
      
      Fix this by maintaining running per-cpu counters for the number of
      module refcount increments and the number of refcount decrements.  The
      increments are tallied after the decrements, so any decrement seen will
      always have its corresponding increment counted.  The final refcount is
      the difference of the total increments and decrements, preventing a
      spuriously low refcount from being returned.
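
      A sketch of the summation described above (the incs/decs fields follow
      the commit; the barrier pairing is an assumption): decrements are summed
      first, so any decrement seen has its matching increment counted.

      	unsigned int module_refcount(struct module *mod)
      	{
      		unsigned int incs = 0, decs = 0;
      		int cpu;

      		for_each_possible_cpu(cpu)
      			decs += per_cpu_ptr(mod->refptr, cpu)->decs;
      		smp_rmb();	/* sum incs only after all decs */
      		for_each_possible_cpu(cpu)
      			incs += per_cpu_ptr(mod->refptr, cpu)->incs;
      		return incs - decs;
      	}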
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • E
      audit: preface audit printk with audit · 449cedf0
      Eric Paris authored
      There have been a number of reports of people seeing the message:
      "name_count maxed, losing inode data: dev=00:05, inode=3185"
      in dmesg.  These usually lead to people reporting problems to the filesystem
      group, who are in turn clueless about what it means.
      
      Eventually someone finds me and I explain what is going on and that
      these come from the audit system.  The basic problem is that the
      audit subsystem never expects a single syscall to 'interact' (for some
      wishy-washy meaning of interact) with more than 20 inodes.  But in fact
      some operations, like loading kernel modules, can cause changes to lots of
      inodes in debugfs.
      
      There are a couple of real fixes being bandied about, including removing the
      fixed compile-time limit of 20 or not auditing changes in debugfs (or
      both), but neither is small and obvious, so I am not sending them for
      immediate inclusion (I hope Al forwards a real solution next devel
      window).
      
      In the meantime this patch simply adds 'audit' to the beginning of the
      crap message, so if a user sees it, they come blame me first and we can
      talk about what it means, make sure we understand all of the reasons
      it can happen, and make sure this gets solved correctly in the long run.
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>