1. 18 Feb 2015, 1 commit
  2. 14 Dec 2014, 4 commits
  3. 05 Dec 2014, 5 commits
  4. 04 Dec 2014, 1 commit
  5. 20 Nov 2014, 1 commit
    • new helper: audit_file() · 9f45f5bf
      Authored by Al Viro
      ... for situations when we don't have any candidate in pathnames - basically,
      in descriptor-based syscalls.
      
      [Folded the build fix for !CONFIG_AUDITSYSCALL configs from Chen Gang]
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
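      A minimal sketch of what such a file-based audit helper could look like, assuming it simply routes a struct file into the existing inode-audit machinery; the wrapper name, the audit_dummy_context() fast path, and the __audit_inode() call are illustrative assumptions, not quoted from the commit:

        /* Hypothetical helper for descriptor-based syscalls (fchmod, fchown,
         * etc.), where there is no pathname candidate to record: audit the
         * dentry behind the open file instead. */
        static inline void audit_file_sketch(struct file *file)
        {
                if (unlikely(!audit_dummy_context()))
                        __audit_inode(NULL, file->f_path.dentry, 0);
        }

        /* A caller such as fchmod() could then replace its open-coded
         * bookkeeping with a single call:
         *         audit_file_sketch(f.file);
         */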
  6. 14 Oct 2014, 4 commits
  7. 09 Sep 2014, 1 commit
  8. 09 Aug 2014, 2 commits
    • shm: allow exit_shm in parallel if only marking orphans · 83293c0f
      Authored by Jack Miller
      If shm_rmid_forced (the default state) is not set, then the shmids are only
      marked as orphaned, which does not require any add, delete, or locking of
      the tree structure.
      
      Separate the sysctl on and off cases, and only obtain the read lock.  The
      newly added list head can be deleted under the read lock because we are
      only called with current and will only change the shmids allocated by this
      task, without manipulating the list.
      
      This commit assumes that up_read includes a sufficient memory barrier for
      the writes to be seen by others that later obtain a write lock.
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Jack Miller <millerjo@us.ibm.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: Anton Blanchard <anton@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
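      A minimal sketch of the split fast path described above, taken only under the read side of the ipc_ns rwsem; the field names (sysvshm.shm_clist, shm_creator, shm_rmid_forced) follow the commit description but are assumptions about the final code layout:

        /* Sketch: when kernel.shm_rmid_forced is off, an exiting task only
         * needs to mark its own segments as orphaned, so many exiting tasks
         * can proceed in parallel under the read lock. */
        void exit_shm_sketch(struct task_struct *task)
        {
                struct ipc_namespace *ns = task->nsproxy->ipc_ns;
                struct shmid_kernel *shp;

                if (list_empty(&task->sysvshm.shm_clist))
                        return;

                if (!ns->shm_rmid_forced) {
                        down_read(&shm_ids(ns).rwsem);
                        list_for_each_entry(shp, &task->sysvshm.shm_clist, shm_clist)
                                shp->shm_creator = NULL;
                        /* Safe under the read lock: only current touches its
                         * own list, so no other task can see these entries. */
                        list_del(&task->sysvshm.shm_clist);
                        up_read(&shm_ids(ns).rwsem);
                        return;
                }

                /* shm_rmid_forced set: fall through to the write-locked path
                 * that actually destroys orphaned segments (omitted here). */
        }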
    • shm: make exit_shm work proportional to task activity · ab602f79
      Authored by Jack Miller
      This is a small set of patches our team has had kicking around for a few
      versions internally that fixes tasks getting hung in exit_shm when there
      are many threads hammering it at once.
      
      Anton wrote a simple test to cause the issue:
      
        http://ozlabs.org/~anton/junkcode/bust_shm_exit.c
      
      Before applying this patchset, this test code will cause either hanging
      tracebacks or pthread out of memory errors.
      
      After this patchset, it will still produce output like:
      
        root@somehost:~# ./bust_shm_exit 1024 160
        ...
        INFO: rcu_sched detected stalls on CPUs/tasks: {} (detected by 116, t=2111 jiffies, g=241, c=240, q=7113)
        INFO: Stall ended before state dump start
        ...
      
      But the task will continue to run along happily, so we consider this an
      improvement over hanging, even if it's a bit noisy.
      
      This patch (of 3):
      
      exit_shm obtains the ipc_ns shm rwsem for write and holds it while it
      walks every shared memory segment in the namespace.  Thus the amount of
      work is related to the number of shm segments in the namespace, not the
      number of segments that might need to be cleaned.
      
      In addition, this occurs after the task has been notified that the thread
      has exited, so the number of tasks waiting for the ns shm rwsem can grow
      without bound until memory is exhausted.
      
      Add a list to the task struct of all shmids allocated by this task.  Init
      the list head in copy_process.  Use the ns->rwsem for locking.  Add
      segments to the list after their id is added, and remove them before the
      id is removed.
      
      On unshare of NEW_IPCNS, orphan any ids as if the task had exited, similar
      to the handling of semaphore undo.
      
      I chose a define for the init sequence since it's a simple list init;
      otherwise it would require a function call to avoid include loops between
      the semaphore code and the task struct.  Converting the list_del to
      list_del_init for the unshare cases would remove the exit followed by
      init, but I left it to blow up if not inited.
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Jack Miller <millerjo@us.ibm.com>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: Anton Blanchard <anton@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
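      A hedged sketch of the per-task bookkeeping described above; the names (sysv_shm, shm_clist, sysvshm, shm_init_task) and their placement are illustrative assumptions, with the "define for the init sequence" shown as a simple macro:

        /* Per-task list of the shmids created by this task. */
        struct sysv_shm {
                struct list_head shm_clist;
        };

        /* A plain list init as a define avoids an include loop between the
         * ipc headers and the task_struct definition. */
        #define shm_init_task(task) INIT_LIST_HEAD(&(task)->sysvshm.shm_clist)

        /* In task_struct (linux/sched.h):     struct sysv_shm sysvshm;
         * In copy_process() (kernel/fork.c):  shm_init_task(p);
         * In newseg(), after the id is added:
         *         list_add(&shp->shm_clist, &current->sysvshm.shm_clist);
         * In shm_destroy(), before the id is removed:
         *         list_del(&shp->shm_clist);
         */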
  9. 30 Jul 2014, 1 commit
    • namespaces: Use task_lock and not rcu to protect nsproxy · 728dba3a
      Authored by Eric W. Biederman
      The synchronous synchronize_rcu in switch_task_namespaces makes setns
      a sufficiently expensive system call that people have complained.
      
      Upon inspection, nsproxy no longer needs rcu protection for remote reads;
      remote reads are rare.  So optimize for same-process reads and writes
      by switching to task_lock instead.
      
      This yields a simpler-to-understand lock, and a faster setns system call.
      
      In particular this fixes a performance regression observed
      by Rafael David Tinoco <rafael.tinoco@canonical.com>.
      
      This is effectively a revert of Pavel Emelyanov's 2007 commit
      cf7b708c ("Make access to task's nsproxy lighter").  The race it
      originally fixed no longer exists, as do_notify_parent uses
      task_active_pid_ns(parent) instead of parent->nsproxy.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
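      A sketch of the switch under task_lock, roughly along the lines the commit describes; treat it as an illustration of the locking change rather than the exact final code:

        /* Writes (and same-process reads) of p->nsproxy are now serialized by
         * task_lock(p) instead of requiring a synchronize_rcu() on every
         * namespace switch, which is what made setns() expensive. */
        void switch_task_namespaces_sketch(struct task_struct *p,
                                           struct nsproxy *new)
        {
                struct nsproxy *ns;

                task_lock(p);
                ns = p->nsproxy;
                p->nsproxy = new;
                task_unlock(p);

                if (ns && atomic_dec_and_test(&ns->count))
                        free_nsproxy(ns);
        }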
  10. 07 Jun 2014, 16 commits
  11. 08 Apr 2014, 2 commits
  12. 17 Mar 2014, 1 commit
    • ipc: Fix 2 bugs in msgrcv() MSG_COPY implementation · 4f87dac3
      Authored by Michael Kerrisk
      While testing and documenting the msgrcv() MSG_COPY flag that Stanislav
      Kinsbursky added in commit 4a674f34 ("ipc: introduce message queue
      copy feature" => kernel 3.8), I discovered a couple of bugs in the
      implementation.  The two bugs concern MSG_COPY interactions with other
      msgrcv() flags, namely:
      
       (A) MSG_COPY + MSG_EXCEPT
       (B) MSG_COPY + !IPC_NOWAIT
      
      The bugs are distinct (and the fix for the first one is obvious),
      however my fix for both is a single-line patch, which is why I'm
      combining them in a single mail, rather than writing two mails+patches.
      
       ===== (A) MSG_COPY + MSG_EXCEPT =====
      
      With the addition of the MSG_COPY flag, there are now two msgrcv()
      flags--MSG_COPY and MSG_EXCEPT--that modify the meaning of the 'msgtyp'
      argument in unrelated ways.  Specifying both in the same call is a
      logical error that is currently permitted, with the effect that MSG_COPY
      has priority and MSG_EXCEPT is ignored.  The call should give an error
      if both flags are specified.  The patch below implements that behavior.
      
       ===== (B) MSG_COPY + !IPC_NOWAIT =====
      
      The test code that was submitted in commit 3a665531 ("selftests: IPC
      message queue copy feature test") shows MSG_COPY being used in
      conjunction with IPC_NOWAIT.  In other words, if there is no message at
      the position 'msgtyp', return immediately with the error ENOMSG.
      
      What was not (fully) tested is the behavior when MSG_COPY is specified
      *without* IPC_NOWAIT, and that behavior is odd.  If the queue
      contains fewer than 'msgtyp' messages, then the call blocks until the
      next message is written to the queue.  At that point, the msgrcv() call
      returns a copy of the newly added message, regardless of whether that
      message is at the ordinal position 'msgtyp'.  This is clearly bogus, and
      problematic for applications that might want to make use of the MSG_COPY
      flag.
      
      I considered the following possible solutions to this problem:
      
       (1) Force the call to block until a message *does* appear at the
           position 'msgtyp'.
      
       (2) If the MSG_COPY flag is specified, the kernel should implicitly add
           IPC_NOWAIT, so that the call fails with ENOMSG for this case.
      
       (3) If the MSG_COPY flag is specified, but IPC_NOWAIT is not, generate
           an error (probably, EINVAL is the right one).
      
      I do not know if any application would really want to have the
      functionality of solution (1), especially since an application can
      determine in advance the number of messages in the queue using msgctl()
      IPC_STAT.  Obviously, this solution would be the most work to implement.
      
      Solution (2) would have the effect of silently fixing any applications
      that tried to employ broken behavior.  However, it would mean that if we
      later decided to implement solution (1), then user-space could not
      easily detect what the kernel supports (but, since I'm somewhat doubtful
      that solution (1) is needed, I'm not sure that this is much of a
      problem).
      
      Solution (3) would have the effect of informing broken applications that
      they are doing something broken.  The downside is that this would cause
      an ABI breakage for any applications that are currently employing the
      broken behavior.  However:
      
      a) Those applications are almost certainly not getting the results they
         expect.
      b) Possibly, those applications don't even exist, because MSG_COPY is
         currently hidden behind CONFIG_CHECKPOINT_RESTORE.
      
      The upside of solution (3) is that if we later decided to implement
      solution (1), user-space could determine what the kernel supports, via
      the error return.
      
      In my view, solution (3) is mildly preferable to solution (2), and
      solution (1) could still be done later if anyone really cares.  The
      patch below implements solution (3).
      
      PS.  For anyone out there still listening, it's the usual story:
      documenting an API (and the thinking about, and testing of, the API that
      documentation entails) is one of the single best ways of finding bugs in
      the API, as I've learned from a lot of experience.  Best to do that
      documentation before releasing the API.
      Signed-off-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Acked-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
      Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
      Cc: stable@vger.kernel.org
      Cc: Serge Hallyn <serge.hallyn@canonical.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
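      A sketch of the check the commit describes, placed early in the MSG_COPY branch of do_msgrcv() in ipc/msg.c; the surrounding context is abbreviated and this should be read as an illustration of solution (3), not the exact diff:

        /* Reject the two broken flag combinations: MSG_COPY + MSG_EXCEPT
         * (bug A) and MSG_COPY without IPC_NOWAIT (solution (3) for bug B).
         * Failing with EINVAL keeps the door open for user space to detect a
         * future blocking implementation via the error return. */
        if (msgflg & MSG_COPY) {
                if ((msgflg & MSG_EXCEPT) || !(msgflg & IPC_NOWAIT))
                        return -EINVAL;
                /* ... existing MSG_COPY setup continues here ... */
        }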
  13. 06 Mar 2014, 1 commit