1. 15 Apr 2013, 1 commit
  2. 13 Apr 2013, 2 commits
  3. 12 Apr 2013, 1 commit
  4. 10 Apr 2013, 1 commit
  5. 09 Apr 2013, 4 commits
    • PM / reboot: call syscore_shutdown() after disable_nonboot_cpus() · 6f389a8f
      Committed by Huacai Chen
      As commit 40dc166c (PM / Core: Introduce struct syscore_ops for core
      subsystems PM) says, syscore_ops operations should be carried out with
      one CPU on-line and interrupts disabled. However, after commit f96972f2
      (kernel/sys.c: call disable_nonboot_cpus() in kernel_restart()),
      syscore_shutdown() is called before disable_nonboot_cpus(), which breaks
      that rule. We have a MIPS machine with an 8259A PIC, and an external
      timer (HPET) is attached to the 8259A. Since the 8259A is shut down too
      early (by syscore_shutdown()), disable_nonboot_cpus() runs without a
      timer interrupt, so it hangs and the reboot fails. This patch calls
      syscore_shutdown() a little later (after disable_nonboot_cpus()) to
      avoid the reboot failure; this is the same ordering that poweroff uses.
      
      For consistency, add disable_nonboot_cpus() to kernel_halt().
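
      Roughly, the resulting ordering in kernel_restart() looks like the
      following sketch (illustrative, not the verbatim upstream hunk):

        void kernel_restart(char *cmd)
        {
                kernel_restart_prepare(cmd);
                disable_nonboot_cpus();   /* one CPU left online...        */
                syscore_shutdown();       /* ...before syscore ops are run */
                if (!cmd)
                        printk(KERN_EMERG "Restarting system.\n");
                else
                        printk(KERN_EMERG "Restarting system with command '%s'.\n", cmd);
                machine_restart(cmd);
        }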
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      6f389a8f
    • ftrace: Do not call stub functions in control loop · 395b97a3
      Committed by Steven Rostedt (Red Hat)
      The function tracing control loop used by perf spits out a warning
      if the called function is not a control function. This is because
      the control function references a per cpu allocated data structure
      on struct ftrace_ops that is not allocated for other types of
      functions.
      
      Commit 0a016409 ("ftrace: Optimize the function tracer list loop")
      optimized all of the function tracing loops for the case of a single
      registered ops. Unfortunately, this allows for a slight race when
      tracing starts or ends, where the stub function might be called after
      the current registered ops is removed. In this case we get the
      following dump:
      
      root# perf stat -e ftrace:function sleep 1
      [   74.339105] WARNING: at include/linux/ftrace.h:209 ftrace_ops_control_func+0xde/0xf0()
      [   74.349522] Hardware name: PRIMERGY RX200 S6
      [   74.357149] Modules linked in: sg igb iTCO_wdt ptp pps_core iTCO_vendor_support i7core_edac dca lpc_ich i2c_i801 coretemp edac_core crc32c_intel mfd_core ghash_clmulni_intel dm_multipath acpi_power_meter pcspkr microcode vhost_net tun macvtap macvlan nfsd kvm_intel kvm auth_rpcgss nfs_acl lockd sunrpc uinput xfs libcrc32c sd_mod crc_t10dif sr_mod cdrom mgag200 i2c_algo_bit drm_kms_helper ttm qla2xxx mptsas ahci drm libahci scsi_transport_sas mptscsih libata scsi_transport_fc i2c_core mptbase scsi_tgt dm_mirror dm_region_hash dm_log dm_mod
      [   74.446233] Pid: 1377, comm: perf Tainted: G        W    3.9.0-rc1 #1
      [   74.453458] Call Trace:
      [   74.456233]  [<ffffffff81062e3f>] warn_slowpath_common+0x7f/0xc0
      [   74.462997]  [<ffffffff810fbc60>] ? rcu_note_context_switch+0xa0/0xa0
      [   74.470272]  [<ffffffff811041a2>] ? __unregister_ftrace_function+0xa2/0x1a0
      [   74.478117]  [<ffffffff81062e9a>] warn_slowpath_null+0x1a/0x20
      [   74.484681]  [<ffffffff81102ede>] ftrace_ops_control_func+0xde/0xf0
      [   74.491760]  [<ffffffff8162f400>] ftrace_call+0x5/0x2f
      [   74.497511]  [<ffffffff8162f400>] ? ftrace_call+0x5/0x2f
      [   74.503486]  [<ffffffff8162f400>] ? ftrace_call+0x5/0x2f
      [   74.509500]  [<ffffffff810fbc65>] ? synchronize_sched+0x5/0x50
      [   74.516088]  [<ffffffff816254d5>] ? _cond_resched+0x5/0x40
      [   74.522268]  [<ffffffff810fbc65>] ? synchronize_sched+0x5/0x50
      [   74.528837]  [<ffffffff811041a2>] ? __unregister_ftrace_function+0xa2/0x1a0
      [   74.536696]  [<ffffffff816254d5>] ? _cond_resched+0x5/0x40
      [   74.542878]  [<ffffffff8162402d>] ? mutex_lock+0x1d/0x50
      [   74.548869]  [<ffffffff81105c67>] unregister_ftrace_function+0x27/0x50
      [   74.556243]  [<ffffffff8111eadf>] perf_ftrace_event_register+0x9f/0x140
      [   74.563709]  [<ffffffff816254d5>] ? _cond_resched+0x5/0x40
      [   74.569887]  [<ffffffff8162402d>] ? mutex_lock+0x1d/0x50
      [   74.575898]  [<ffffffff8111e94e>] perf_trace_destroy+0x2e/0x50
      [   74.582505]  [<ffffffff81127ba9>] tp_perf_event_destroy+0x9/0x10
      [   74.589298]  [<ffffffff811295d0>] free_event+0x70/0x1a0
      [   74.595208]  [<ffffffff8112a579>] perf_event_release_kernel+0x69/0xa0
      [   74.602460]  [<ffffffff816254d5>] ? _cond_resched+0x5/0x40
      [   74.608667]  [<ffffffff8112a640>] put_event+0x90/0xc0
      [   74.614373]  [<ffffffff8112a740>] perf_release+0x10/0x20
      [   74.620367]  [<ffffffff811a3044>] __fput+0xf4/0x280
      [   74.625894]  [<ffffffff811a31de>] ____fput+0xe/0x10
      [   74.631387]  [<ffffffff81083697>] task_work_run+0xa7/0xe0
      [   74.637452]  [<ffffffff81014981>] do_notify_resume+0x71/0xb0
      [   74.643843]  [<ffffffff8162fa92>] int_signal+0x12/0x17
      
      To fix this a new ftrace_ops flag is added that denotes the ftrace_list_end
      ftrace_ops stub as just that, a stub. This flag is now checked in the
      control loop and the function is not called if the flag is set.
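
      Schematically, the guarded walk looks like this (a sketch following the
      description above; treat the flag name as illustrative rather than the
      exact upstream identifier):

        do_for_each_ftrace_op(op, ftrace_control_list) {
                if (!(op->flags & FTRACE_OPS_FL_STUB) &&   /* skip the list-end stub */
                    !ftrace_function_local_disabled(op) &&
                    ftrace_ops_test(op, ip))
                        op->func(ip, parent_ip, op, regs);
        } while_for_each_ftrace_op(op);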
      
      Thanks to Jovi for not just reporting the bug, but also pointing out
      where the bug was in the code.
      
      Link: http://lkml.kernel.org/r/514A8855.7090402@redhat.com
      Link: http://lkml.kernel.org/r/1364377499-1900-15-git-send-email-jovi.zhangwei@huawei.com
      Tested-by: WANG Chao <chaowang@redhat.com>
      Reported-by: WANG Chao <chaowang@redhat.com>
      Reported-by: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      395b97a3
    • ftrace: Consistently restore trace function on sysctl enabling · 5000c418
      Committed by Jan Kiszka
      If we re-enable ftrace via sysctl, we currently set ftrace_trace_function
      based on the previous simplistic algorithm. This is inconsistent with
      what update_ftrace_function() does, so it is better to call that helper instead.
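
      In ftrace_enable_sysctl() the idea looks roughly like this (sketch, not
      necessarily the exact upstream hunk):

        if (ftrace_enabled) {
                ftrace_startup_sysctl();

                /* let the common helper pick the trace function instead of
                 * recomputing it with the old ad-hoc logic */
                if (ftrace_ops_list != &ftrace_list_end)
                        update_ftrace_function();
        } else {
                ftrace_trace_function = ftrace_stub;
                ftrace_shutdown_sysctl();
        }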
      
      Link: http://lkml.kernel.org/r/5151D26F.1070702@siemens.com
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5000c418
    • tracing: Fix race with update_max_tr_single and changing tracers · 2930e04d
      Committed by Steven Rostedt (Red Hat)
      Commit 34600f0e ("tracing: Fix race with max_tr and changing tracers")
      fixed the race between updating the main buffers and changing tracers,
      but left out the same fix for updating just a per-cpu buffer.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2930e04d
  6. 08 Apr 2013, 3 commits
  7. 01 Apr 2013, 1 commit
    • Revert "lockdep: check that no locks held at freeze time" · dbf520a9
      Committed by Paul Walmsley
      This reverts commit 6aa97070.
      
      Commit 6aa97070 ("lockdep: check that no locks held at freeze time")
      causes problems with NFS root filesystems.  The failures were noticed on
      OMAP2 and 3 boards during kernel init:
      
        [ BUG: swapper/0/1 still has locks held! ]
        3.9.0-rc3-00344-ga937536b #1 Not tainted
        -------------------------------------
        1 lock held by swapper/0/1:
         #0:  (&type->s_umount_key#13/1){+.+.+.}, at: [<c011e84c>] sget+0x248/0x574
      
        stack backtrace:
          rpc_wait_bit_killable
          __wait_on_bit
          out_of_line_wait_on_bit
          __rpc_execute
          rpc_run_task
          rpc_call_sync
          nfs_proc_get_root
          nfs_get_root
          nfs_fs_mount_common
          nfs_try_mount
          nfs_fs_mount
          mount_fs
          vfs_kern_mount
          do_mount
          sys_mount
          do_mount_root
          mount_root
          prepare_namespace
          kernel_init_freeable
          kernel_init
      
      Although the rootfs mounts, the system is unstable.  Here's a transcript
      from a PM test:
      
        http://www.pwsan.com/omap/testlogs/test_v3.9-rc3/20130317194234/pm/37xxevm/37xxevm_log.txt
      
      Here's what the test log should look like:
      
        http://www.pwsan.com/omap/testlogs/test_v3.8/20130218214403/pm/37xxevm/37xxevm_log.txt
      
      Mailing list discussion is here:
      
        http://lkml.org/lkml/2013/3/4/221
      
      Deal with this for v3.9 by reverting the problem commit, until folks can
      figure out the right long-term course of action.
      Signed-off-by: Paul Walmsley <paul@pwsan.com>
      Cc: Mandeep Singh Baines <msb@chromium.org>
      Cc: Jeff Layton <jlayton@redhat.com>
      Cc: Shawn Guo <shawn.guo@linaro.org>
      Cc: <maciej.rutecki@gmail.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ben Chan <benchan@chromium.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dbf520a9
  8. 27 Mar 2013, 2 commits
    • userns: Restrict when proc and sysfs can be mounted · 87a8ebd6
      Committed by Eric W. Biederman
      Only allow unprivileged mounts of proc and sysfs if they are already
      mounted when the user namespace is created.
      
      proc and sysfs are interesting because they have content that is
      per namespace, and so fresh mounts are needed when new namespaces
      are created while at the same time proc and sysfs have content that
      is shared between every instance.
      
      Respect the policy of who may see the shared content of proc and sysfs
      by only allowing new mounts if there was an existing mount at the time
      the user namespace was created.
      
      In practice there are only two interesting cases: either proc and sysfs
      are mounted at their usual places, or proc and sysfs are not mounted at
      all (some form of mount namespace jail).
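
      The policy boils down to a check of roughly the following shape (a
      sketch; the flag and helper names here are illustrative placeholders,
      not necessarily the in-tree symbols):

        /* refuse a fresh proc/sysfs mount inside a user namespace unless an
         * instance was already visible when the namespace was created */
        static bool may_mount_in_userns(struct file_system_type *type)
        {
                if (!(type->fs_flags & FS_USERNS_VISIBLE))  /* placeholder flag */
                        return true;
                return fs_fully_visible(type);              /* placeholder helper */
        }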
      
      Cc: stable@vger.kernel.org
      Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      87a8ebd6
    • userns: Don't allow creation if the user is chrooted · 3151527e
      Committed by Eric W. Biederman
      Guarantee that the policy of which files may be accessed, as established
      by setting the root directory, will not be violated by user namespaces,
      by verifying that the root directory points to the root of the mount
      namespace at the time of user namespace creation.
      
      Changing the root is a privileged operation, and as a matter of policy
      it serves to limit unprivileged processes to files below the current
      root directory.
      
      For reasons of simplicity and comprehensibility the privilege to
      change the root directory is gated solely on the CAP_SYS_CHROOT
      capability in the user namespace.  Therefore when creating a user
      namespace we must ensure that the policy of which files may be accessed
      cannot be violated by changing the root directory.
      
      Anyone who runs a process in a chroot and would like to use user
      namespaces can set up the same view of the filesystem with a mount
      namespace instead.  As a result, this is not a practical limitation
      for using user namespaces.
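
      In code, the gate is a single check at user-namespace creation time,
      roughly (sketch; the helper compares the caller's root directory
      against the root of its mount namespace):

        /* in create_user_ns() */
        if (current_chrooted())         /* root != mount-namespace root */
                return -EPERM;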
      
      Cc: stable@vger.kernel.org
      Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
      Reported-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      3151527e
  9. 26 Mar 2013, 1 commit
  10. 23 Mar 2013, 2 commits
    • poweroff: change orderly_poweroff() to use schedule_work() · 2ca067ef
      Committed by Oleg Nesterov
      David said:
      
          Commit 6c0c0d4d ("poweroff: fix bug in orderly_poweroff()")
          apparently fixes one bug in orderly_poweroff(), but introduces
          another.  The comments on orderly_poweroff() claim it can be called
          from any context - and indeed we call it from interrupt context in
          arch/powerpc/platforms/pseries/ras.c for example.  But since that
          commit this is no longer safe, since call_usermodehelper_fns() is not
          safe in interrupt context without the UMH_NO_WAIT option.
      
      orderly_poweroff() can be used from any context but UMH_WAIT_EXEC is
      sleepable.  Move the "force" logic into __orderly_poweroff() and change
      orderly_poweroff() to use the global poweroff_work which simply calls
      __orderly_poweroff().
      
      While at it, remove the unneeded "int argc" and change argv_split() to
      use GFP_KERNEL.
      
      We use the global "bool poweroff_force" to pass the argument; this can
      obviously affect a previous request if it is pending/running.  So we
      only allow the "false => true" transition, assuming that the pending
      "true" should succeed anyway.  If schedule_work() fails after that, we
      know that work->func() was not called yet, so it must see the new value.
      
      This means that orderly_poweroff() becomes async even if we do not run
      the command and always succeeds, schedule_work() can only fail if the
      work is already pending.  We can export __orderly_poweroff() and change
      the non-atomic callers which want the old semantics.
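
      The resulting shape is roughly the following (sketch, not the verbatim
      upstream code):

        static bool poweroff_force;

        static void poweroff_work_func(struct work_struct *work)
        {
                __orderly_poweroff(poweroff_force);
        }

        static DECLARE_WORK(poweroff_work, poweroff_work_func);

        /* may be called from any context; the real work runs from keventd */
        int orderly_poweroff(bool force)
        {
                if (force)      /* only allow the false => true transition */
                        poweroff_force = true;
                schedule_work(&poweroff_work);
                return 0;
        }
        EXPORT_SYMBOL_GPL(orderly_poweroff);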
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Reported-by: David Gibson <david@gibson.dropbear.id.au>
      Cc: Lucas De Marchi <lucas.demarchi@profusion.mobi>
      Cc: Feng Hong <hongfeng@marvell.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Serge Hallyn <serge.hallyn@canonical.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2ca067ef
    • printk: Provide a wake_up_klogd() off-case · dc72c32e
      Committed by Frederic Weisbecker
      wake_up_klogd() is useless when CONFIG_PRINTK=n because neither printk()
      nor printk_sched() is in use and there are actually no waiters on the
      log_wait waitqueue.  It should be a stub in this case for users like
      bust_spinlocks().
      
      Otherwise this results in this warning when CONFIG_PRINTK=n and
      CONFIG_IRQ_WORK=n:
      
      	kernel/built-in.o In function `wake_up_klogd':
      	(.text.wake_up_klogd+0xb4): undefined reference to `irq_work_queue'
      
      To fix this, provide an off-case for wake_up_klogd() when
      CONFIG_PRINTK=n.
      
      There is much more from console_unlock() and other console related code
      in printk.c that should be moved under CONFIG_PRINTK.  But for now,
      focus on a minimal fix, as we are already past the merge window.
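
      The fix amounts to a header stub, roughly:

        #ifdef CONFIG_PRINTK
        void wake_up_klogd(void);
        #else
        static inline void wake_up_klogd(void)
        {
        }
        #endif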
      
      [akpm@linux-foundation.org: include printk.h in bust_spinlocks.c]
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Reported-by: James Hogan <james.hogan@imgtec.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dc72c32e
  11. 21 Mar 2013, 1 commit
    • perf: Fix ring_buffer perf_output_space() boundary calculation · dd9c086d
      Committed by Stephane Eranian
      This patch fixes a flaw in perf_output_space(). In case the size
      of the space needed is bigger than the actual buffer size, there
      may be situations where the function would return true (i.e.,
      there is space) when it should not, because head > offset due to
      the rounding of the masking logic.
      
      The problem can be tested by activating BTS on Intel processors.
      A BTS record can be as big as 16 pages. The following command
      fails:
      
        $ perf record -m 4 -c 1 -e branches:u my_test_program
      
      You will get buffer corruption with this, and perf report won't be
      able to parse the perf.data file.
      
      The fix is to first check that the requested space is smaller
      than the buffer size. If so, then the masking logic will work
      fine. If not, then there is no chance the record can be saved
      and it will be gracefully handled by upper code layers.
      
      [ In v2, we also make the logic for the writable more explicit by
        renaming it to rb->overwrite because it tells whether or not the
        buffer can overwrite its tail (suggested by PeterZ). ]
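
      Conceptually the added guard is an early bail-out before the masking
      arithmetic (a sketch; 'size' stands for the number of bytes the new
      record needs):

        /* a record larger than the whole buffer can never fit */
        if (size > perf_data_size(rb))
                return false;
        /* otherwise the usual mask-based head/tail comparison is valid */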
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: peterz@infradead.org
      Cc: jolsa@redhat.com
      Cc: fweisbec@gmail.com
      Link: http://lkml.kernel.org/r/20130318133327.GA3056@quad
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dd9c086d
  12. 18 Mar 2013, 2 commits
    • perf: Generate EXIT event only once per task context · d610d98b
      Committed by Namhyung Kim
      perf_event_task_event() iterates the pmu list and generates events
      for each eligible pmu context.  But if the task_event has a task_ctx,
      as in EXIT, it will generate events even though the pmu doesn't
      have an eligible context.  Fix it by moving the code to the proper
      places.
      
      Before this patch:
      
        $ perf record -n true
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.006 MB perf.data (~248 samples) ]
      
        $ perf report -D | tail
        Aggregated stats:
                   TOTAL events:         73
                    MMAP events:         67
                    COMM events:          2
                    EXIT events:          4
        cycles stats:
                   TOTAL events:         73
                    MMAP events:         67
                    COMM events:          2
                    EXIT events:          4
      
      After this patch:
      
        $ perf report -D | tail
        Aggregated stats:
                   TOTAL events:         70
                    MMAP events:         67
                    COMM events:          2
                    EXIT events:          1
        cycles stats:
                   TOTAL events:         70
                    MMAP events:         67
                    COMM events:          2
                    EXIT events:          1
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1363332433-7637-1-git-send-email-namhyung@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d610d98b
    • perf: Reset hwc->last_period on sw clock events · 778141e3
      Committed by Namhyung Kim
      When cpu/task clock events are initialized, their sampling
      frequencies are converted to have a fixed value.  However, this
      missed updating hwc->last_period, which was set to 1 for the
      initial sampling frequency calibration.

      Because this hwc->last_period value is used as the period in
      perf_swevent_hrtimer(), every recorded sample will have an
      incorrect period of 1.
      
        $ perf record -e task-clock noploop 1
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.158 MB perf.data (~6919 samples) ]
      
        $ perf report -n --show-total-period  --stdio
        # Samples: 4K of event 'task-clock'
        # Event count (approx.): 4000
        #
        # Overhead       Samples        Period  Command  Shared Object              Symbol
        # ........  ............  ............  .......  .............  ..................
        #
            99.95%          3998          3998  noploop  noploop        [.] main
             0.03%             1             1  noploop  libc-2.15.so   [.] init_cacheinfo
             0.03%             1             1  noploop  ld-2.15.so     [.] open_verify
      
      Note that this doesn't affect non-sampling events, so perf stat still
      reports correct values with or without this patch.
      
        $ perf stat -e task-clock noploop 1
      
         Performance counter stats for 'noploop 1':
      
               1000.272525 task-clock                #    1.000 CPUs utilized
      
               1.000560605 seconds time elapsed
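
      The fix is essentially a one-line addition where the fixed period is
      derived from the requested frequency, roughly (sketch):

        if (event->attr.freq) {
                long freq = event->attr.sample_freq;

                event->attr.sample_period = NSEC_PER_SEC / freq;
                hwc->sample_period = event->attr.sample_period;
                local64_set(&hwc->period_left, hwc->sample_period);
                hwc->last_period = hwc->sample_period;  /* was left at 1 */
                event->attr.freq = 0;
        }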
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1363574507-18808-1-git-send-email-namhyung@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      778141e3
  13. 15 Mar 2013, 3 commits
  14. 14 Mar 2013, 5 commits
    • workqueue: convert to idr_alloc() · e68035fb
      Committed by Tejun Heo
      idr_get_new*() and friends are about to be deprecated.  Convert to the
      new idr_alloc() interface.
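
      The conversion pattern looks roughly like this (a sketch; the lock and
      idr names are illustrative):

        /* old: idr_pre_get() + idr_get_new(), retried on -EAGAIN        */
        /* new: preload outside the lock, then a single idr_alloc() call */
        idr_preload(GFP_KERNEL);
        spin_lock_irq(&workqueue_lock);
        ret = idr_alloc(&worker_pool_idr, pool, 0, 0, GFP_NOWAIT);
        if (ret >= 0)
                pool->id = ret;
        spin_unlock_irq(&workqueue_lock);
        idr_preload_end();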
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e68035fb
    • kernel/signal.c: use __ARCH_HAS_SA_RESTORER instead of SA_RESTORER · 522cff14
      Committed by Andrew Morton
      __ARCH_HAS_SA_RESTORER is the preferred conditional for use in 3.9 and
      later kernels, per Kees.
      
      Cc: Emese Revfy <re.emese@gmail.com>
      Cc: PaX Team <pageexec@freemail.hu>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Serge Hallyn <serge.hallyn@canonical.com>
      Cc: Julien Tinnes <jln@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      522cff14
    • signal: always clear sa_restorer on execve · 2ca39528
      Committed by Kees Cook
      When the new signal handlers are set up, the location of sa_restorer is
      not cleared, leaking a parent process's address space location to
      children.  This allows for a potential bypass of the parent's ASLR by
      examining the sa_restorer value returned when calling sigaction().
      
      Based on what should be considered "secret" about addresses, it only
      matters across the exec, not the fork (since the VMAs haven't changed
      until the exec).  But since exec sets SIG_DFL and keeps sa_restorer,
      this is where it should be fixed.
      
      Given the few uses of sa_restorer, a "set" function was not written
      since this would be the only use.  Instead, we use
      __ARCH_HAS_SA_RESTORER, as already done in other places.
      
      Example of the leak before applying this patch:
      
        $ cat /proc/$$/maps
        ...
        7fb9f3083000-7fb9f3238000 r-xp 00000000 fd:01 404469 .../libc-2.15.so
        ...
        $ ./leak
        ...
        7f278bc74000-7f278be29000 r-xp 00000000 fd:01 404469 .../libc-2.15.so
        ...
        1 0 (nil) 0x7fb9f30b94a0
        2 4000000 (nil) 0x7f278bcaa4a0
        3 4000000 (nil) 0x7f278bcaa4a0
        4 0 (nil) 0x7fb9f30b94a0
        ...
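
      The fix lands in flush_signal_handlers(), roughly as in the sketch
      below (mainline uses the __ARCH_HAS_SA_RESTORER guard; the stable
      backport uses SA_RESTORER as noted below):

        void flush_signal_handlers(struct task_struct *t, int force_default)
        {
                int i;
                struct k_sigaction *ka = &t->sighand->action[0];

                for (i = _NSIG; i != 0; i--) {
                        if (force_default || ka->sa.sa_handler != SIG_IGN)
                                ka->sa.sa_handler = SIG_DFL;
                        ka->sa.sa_flags = 0;
        #ifdef __ARCH_HAS_SA_RESTORER
                        ka->sa.sa_restorer = NULL;  /* don't leak the parent's layout */
        #endif
                        sigemptyset(&ka->sa.sa_mask);
                        ka++;
                }
        }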
      
      [akpm@linux-foundation.org: use SA_RESTORER for backportability]
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reported-by: Emese Revfy <re.emese@gmail.com>
      Cc: Emese Revfy <re.emese@gmail.com>
      Cc: PaX Team <pageexec@freemail.hu>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Serge Hallyn <serge.hallyn@canonical.com>
      Cc: Julien Tinnes <jln@google.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2ca39528
    • userns: Don't allow CLONE_NEWUSER | CLONE_FS · e66eded8
      Committed by Eric W. Biederman
      Don't allow sharing the root directory with processes in a
      different user namespace.  There doesn't seem to be any point, and to
      allow it would require the overhead of putting a user namespace
      reference in fs_struct (for permission checks) and incrementing that
      reference count on practically every call to fork.

      So just perform the inexpensive test of forbidding sharing fs_struct
      across processes in different user namespaces.  We already disallow
      other forms of threading when unsharing a user namespace, so this
      should be no real burden in practice.
      
      This updates setns, clone, and unshare to disallow multiple user
      namespaces sharing an fs_struct.
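
      The check itself is tiny; in the fork path it is roughly (the unshare
      and setns paths grow an equivalent test):

        /* a new user namespace may not share fs_struct with anyone */
        if ((clone_flags & (CLONE_NEWUSER | CLONE_FS)) ==
            (CLONE_NEWUSER | CLONE_FS))
                return ERR_PTR(-EINVAL);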
      
      Cc: stable@vger.kernel.org
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e66eded8
    • tracing: Fix free of probe entry by calling call_rcu_sched() · 740466bc
      Committed by Steven Rostedt (Red Hat)
      Because function tracing is very invasive, and can even trace
      calls to rcu_read_lock(), RCU access in function tracing is done
      with preempt_disable_notrace(). This requires a synchronize_sched()
      for updates and not a synchronize_rcu().
      
      Function probes (traceon, traceoff, etc) must be freed only after a
      synchronize_sched() once their entries have been removed from the
      hash, but call_rcu() is used.  Fix this by using call_rcu_sched().
      
      Also fix the usage to use hlist_del_rcu() instead of hlist_del().
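
      The change in the probe-removal path is roughly (sketch; the callback
      name is illustrative):

        /* remove the entry so no new callers can find it ... */
        hlist_del_rcu(&entry->node);
        /* ... and free it only after a sched-RCU grace period, because the
         * function-tracing side relies on preempt_disable_notrace(), not
         * rcu_read_lock() */
        call_rcu_sched(&entry->rcu, ftrace_free_entry_rcu);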
      
      Cc: stable@vger.kernel.org
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      740466bc
  15. 13 Mar 2013, 2 commits
  16. 12 Mar 2013, 1 commit
    • tracing: Fix race in snapshot swapping · 2721e72d
      Committed by Steven Rostedt (Red Hat)
      Although the swap is wrapped with a spin_lock, the assignment
      of the temp buffer used to swap is not within that lock.
      It needs to be moved into that lock; otherwise two swaps
      happening on two different CPUs can end up using the wrong
      temp buffer to assign in the swap.
      
      Luckily, all current callers of the swap function appear to have
      their own locks. But in case something is added that allows two
      different callers to call the swap, then there's a chance that
      this race can trigger and corrupt the buffers.
      
      New code is coming soon that will allow for this race to trigger.
      
      I've Cc'd stable, so this bug will not show up if someone backports
      one of the changes that can trigger this bug.
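
      The fix is simply to take the temp-buffer assignment inside the lock,
      roughly (sketch; lock and field names are illustrative):

        arch_spin_lock(&ftrace_max_lock);
        buf = tr->buffer;               /* moved under the lock */
        tr->buffer = max_tr.buffer;
        max_tr.buffer = buf;
        arch_spin_unlock(&ftrace_max_lock);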
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      2721e72d
  17. 09 Mar 2013, 2 commits
    • workqueue: fix possible pool stall bug in wq_unbind_fn() · eb283428
      Committed by Lai Jiangshan
      Since multiple pools per cpu have been introduced, wq_unbind_fn() has
      a subtle bug which may theoretically stall work item processing.  The
      problem is two-fold.
      
      * wq_unbind_fn() depends on the worker executing wq_unbind_fn() itself
        to start unbound chain execution, which works fine when there was
        only a single pool.  With multiple pools, only the pool which is
        running wq_unbind_fn() - the highpri one - is guaranteed to have
        such a kick-off.  The other pool could stall when its busy workers
        block.
      
      * The current code is setting WORKER_UNBIND / POOL_DISASSOCIATED of
        the two pools in succession without initiating work execution
        in between.  Because setting the flags requires grabbing assoc_mutex,
        which is held while new workers are created, this could lead to
        stalls if a pool's manager is waiting for the previous pool's work
        items to release memory.  This is almost purely theoretical, though.
      
      Update wq_unbind_fn() such that it sets WORKER_UNBIND /
      POOL_DISASSOCIATED, goes over schedule() and explicitly kicks off
      execution for a pool and then moves on to the next one.
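
      Schematically, the new structure is (a sketch; unbind_pool_workers() is
      a placeholder for the flag setting done under assoc_mutex and
      pool->lock):

        for_each_std_worker_pool(pool, cpu) {
                unbind_pool_workers(pool);  /* placeholder: set the unbind flag
                                             * on workers and mark the pool
                                             * disassociated */

                schedule();                 /* let workers observe the flags */

                atomic_set(&pool->nr_running, 0);

                spin_lock_irq(&pool->lock);
                wake_up_worker(pool);       /* kick off unbound execution for
                                             * THIS pool before moving on */
                spin_unlock_irq(&pool->lock);
        }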
      
      tj: Updated comments and description.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org
      eb283428
    • Revert parts of "hlist: drop the node parameter from iterators" · dc893e19
      Committed by Arnd Bergmann
      Commit b67bfe0d ("hlist: drop the node parameter from iterators")
      made a lot of nice changes but also contains two small hunks that seem
      to have slipped in accidentally and have no apparent connection to the
      intent of the patch.

      This reverts the two extraneous changes.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Peter Senna Tschudin <peter.senna@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dc893e19
  18. 08 Mar 2013, 1 commit
    • clockevents: Don't allow dummy broadcast timers · a7dc19b8
      Committed by Mark Rutland
      Currently tick_check_broadcast_device doesn't reject clock_event_devices
      with CLOCK_EVT_FEAT_DUMMY, and may select them in preference to real
      hardware if they have a higher rating value. In this situation, the
      dummy timer is responsible for broadcasting to itself, and the core
      clockevents code may attempt to call non-existent callbacks for
      programming the dummy, eventually leading to a panic.
      
      This patch makes tick_check_broadcast_device always reject dummy timers,
      preventing this problem.
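
      The added check is essentially (sketch):

        /* a dummy timer cannot broadcast to anyone, least of all itself */
        if (dev->features & CLOCK_EVT_FEAT_DUMMY)
                return 0;       /* never select it as the broadcast device */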
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Jon Medhurst (Tixy) <tixy@linaro.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      a7dc19b8
  19. 07 Mar 2013, 2 commits
    • tracing: Do not return EINVAL in snapshot when not allocated · c9960e48
      Committed by Steven Rostedt (Red Hat)
      To use the tracing snapshot feature, writing a '1' into the snapshot
      file causes the snapshot buffer to be allocated if it has not already
      been allocated, and does a 'swap' with the main buffer, so that the
      snapshot now contains what was in the main buffer, and the main buffer
      now writes to what was the snapshot buffer.
      
      To free the snapshot buffer, a '0' is written into the snapshot file.
      
      To clear the snapshot buffer, any number other than a '0' or '1' is
      written into the snapshot file. But if the buffer is not allocated,
      this returns an -EINVAL error code, which is rather pointless. It is
      better to just do nothing and return success.
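
      In the snapshot write handler this turns the "not allocated" case from
      an error into a no-op, roughly (sketch with an illustrative condition):

        default:                /* any value other than '0' or '1' */
                if (snapshot_allocated)         /* illustrative condition */
                        tracing_reset_online_cpus(&max_tr);
                /* not allocated: nothing to clear, so simply report
                 * success instead of returning -EINVAL */
                break;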
      Acked-by: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c9960e48
    • tracing: Add help of snapshot feature when snapshot is empty · d8741e2e
      Committed by Steven Rostedt (Red Hat)
      When cat'ing the snapshot file, instead of showing an empty trace
      header like the trace file does, show how to use the snapshot
      feature.
      
      Also, this is a good place to show whether the snapshot has been
      allocated or not. Users may want to "pre-allocate" the snapshot to get
      a fast "swap" of the current buffer. Otherwise, the first swap would be
      slow, as it would need to allocate the snapshot buffer, and that
      allocation might fail under tight memory constraints.
      
      Here's what it looked like before:
      
       # tracer: nop
       #
       # entries-in-buffer/entries-written: 0/0   #P:4
       #
       #                              _-----=> irqs-off
       #                             / _----=> need-resched
       #                            | / _---=> hardirq/softirq
       #                            || / _--=> preempt-depth
       #                            ||| /     delay
       #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
       #              | |       |   ||||       |         |
      
      Here's what it looks like now:
      
       # tracer: nop
       #
       #
       # * Snapshot is freed *
       #
       # Snapshot commands:
       # echo 0 > snapshot : Clears and frees snapshot buffer
       # echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.
       #                      Takes a snapshot of the main buffer.
       # echo 2 > snapshot : Clears snapshot buffer (but does not allocate)
       #                      (Doesn't have to be '2' works with any number that
       #                       is not a '0' or '1')
      Acked-by: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      d8741e2e
  20. 03 Mar 2013, 2 commits
    • fix compat_sys_rt_sigprocmask() · db61ec29
      Committed by Al Viro
      Converting bitmask to 32bit granularity is fine, but we'd better
      _do_ something with the result.  Such as "copy it to userland"...
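
      The missing step is roughly (sketch; variable names are illustrative):

        if (oset) {
                compat_sigset_t old32;

                sigset_to_compat(&old32, &old_set);     /* 32-bit granularity */
                /* ...and actually hand it back to userland */
                if (copy_to_user(oset, &old32, sizeof(compat_sigset_t)))
                        return -EFAULT;
        }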
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      db61ec29
    • trace/ring_buffer: handle 64bit aligned structs · 649508f6
      Committed by James Hogan
      Some 32 bit architectures require 64 bit values to be aligned (for
      example Meta which has 64 bit read/write instructions). These require 8
      byte alignment of event data too, so use
      !CONFIG_HAVE_64BIT_ALIGNED_ACCESS instead of !CONFIG_64BIT ||
      CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to decide alignment, and align
      buffer_data_page::data accordingly.
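
      The alignment selection then reads roughly (a sketch; the macro names
      follow the existing ring-buffer conventions):

        #ifndef CONFIG_HAVE_64BIT_ALIGNED_ACCESS
        # define RB_FORCE_8BYTE_ALIGNMENT       0
        # define RB_ARCH_ALIGNMENT              RB_ALIGNMENT
        #else
        # define RB_FORCE_8BYTE_ALIGNMENT       1
        # define RB_ARCH_ALIGNMENT              8U
        #endif

        struct buffer_data_page {
                u64             time_stamp;     /* page time stamp        */
                local_t         commit;         /* committed data size    */
                unsigned char   data[] __aligned(RB_ARCH_ALIGNMENT);
        };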
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org> (previous version subtly different)
      649508f6
  21. 02 Mar 2013, 1 commit