1. 16 Dec 2013, 1 commit
  2. 20 Nov 2013, 3 commits
  3. 16 Nov 2013, 2 commits
  4. 15 Nov 2013, 22 commits
  5. 14 Nov 2013, 2 commits
  6. 13 Nov 2013, 10 commits
    • genirq: Prevent spurious detection for unconditionally polled interrupts · b39898cd
      Thomas Gleixner committed
      On a 68k platform a couple of interrupts are demultiplexed and
      "polled" from a top level interrupt. Unfortunately there is no way to
      determine which of the sub interrupts raised the top level interrupt,
      so all of the demultiplexed interrupt handlers need to be
      invoked.  Given a high enough frequency, this can trigger the spurious
      interrupt detection mechanism if one of the demultiplexed interrupts
      returns IRQ_NONE continuously.  But this is a false positive: the
      polling causes this behaviour, not buggy hardware/software.
      
      Introduce IRQ_IS_POLLED which can be set at interrupt chip setup time via
      irq_set_status_flags(). The flag excludes the interrupt from the
      spurious detector and from all core polling activities.
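
      As a rough sketch, excluding such a demultiplexed sub interrupt from
      the spurious detector at chip setup time could look like this (the
      setup function and irq number are hypothetical, not from the patch):

        #include <linux/irq.h>

        /* Hypothetical setup hook for one demultiplexed sub interrupt. */
        static void demux_setup_sub_irq(unsigned int sub_irq)
        {
                /* Exclude this irq from the spurious detector and from
                 * all core polling activities. */
                irq_set_status_flags(sub_irq, IRQ_IS_POLLED);
        }
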
      Reported-and-tested-by: Michael Schmitz <schmitzmic@gmail.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: linux-m68k@vger.kernel.org
      Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1311061149250.23353@ionos.tec.linutronix.de
      b39898cd
    • kallsyms: Revert back to 128 max symbol length · 480f439c
      Michal Marek committed
      This reverts commits
      f3462aa9 (Kbuild: Handle longer symbols in kallsyms.c) and
      eea0e9cb (kbuild: Increase kallsyms max symbol length)
      except for the added overflow check. The reason is a regression caused
      by increasing the buffer:
      http://marc.info/?l=linux-kernel&m=138387700415675.
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Joe Mario <jmario@redhat.com>
      Signed-off-by: Michal Marek <mmarek@suse.cz>
      480f439c
    • ipc, msg: fix message length check for negative values · 4e9b45a1
      Mathias Krause committed
      On 64 bit systems the test for negative message sizes is bogus: the
      size, which may be positive when evaluated as a long, gets truncated
      to an int when passed to load_msg().  So a long may well hold a
      positive value that becomes negative once truncated to an int.
      
      That, in combination with a small negative value of msg_ctlmax (which
      will be promoted to an unsigned type for the comparison against msgsz,
      making it a big positive value and therefore letting it pass the
      check), leads to
      two problems: 1/ The kmalloc() call in alloc_msg() will allocate a too
      small buffer as the addition of alen is effectively a subtraction.  2/ The
      copy_from_user() call in load_msg() will first overflow the buffer with
      userland data and then, when the userland access generates an access
      violation, the fixup handler copy_user_handle_tail() will try to fill the
      remainder with zeros -- roughly 4GB.  That almost instantly results in a
      system crash or reset.
      
        ,-[ Reproducer (needs to be run as root) ]--
        | #include <sys/stat.h>
        | #include <sys/msg.h>
        | #include <unistd.h>
        | #include <fcntl.h>
        |
        | int main(void) {
        |     long msg = 1;
        |     int fd;
        |
        |     fd = open("/proc/sys/kernel/msgmax", O_WRONLY);
        |     write(fd, "-1", 2);
        |     close(fd);
        |
        |     msgsnd(0, &msg, 0xfffffff0, IPC_NOWAIT);
        |
        |     return 0;
        | }
        '---
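
      The truncation itself is easy to demonstrate in plain userspace C on a
      64-bit system (an illustrative demo, not part of the patch):

        #include <stdio.h>

        int main(void)
        {
            long msgsz = 0xfffffff0;  /* what the reproducer passes to msgsnd() */
            int alen = (int)msgsz;    /* old load_msg() view: truncates to -16 */
            size_t slen = msgsz;      /* fixed view: stays 4294967280 */

            printf("as int: %d, as size_t: %zu\n", alen, slen);
            return 0;
        }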
      
      Fix the issue by preventing msgsz from getting truncated by consistently
      using size_t for the message length.  This way the size checks in
      do_msgsnd() could still be passed with a negative value for msg_ctlmax but
      we would fail on the buffer allocation in that case and error out.
      
      Also change the type of m_ts from int to size_t to avoid similar nastiness
      in other code paths -- it is used in similar constructs, i.e.  signed vs.
      unsigned checks.  It should never become negative under normal
      circumstances, though.
      
      Setting msg_ctlmax to a negative value is an odd configuration and should
      be prevented.  As that might break existing userland, it will be handled
      in a separate commit so it could easily be reverted and reworked without
      reintroducing the above described bug.
      
      Hardening mechanisms for user copy operations would have caught that
      bug early -- e.g. checking slab object sizes on user copy operations,
      as the usercopy feature of the PaX patch does.  Or, for that matter,
      detecting the long vs. int sign change due to truncation, as the size
      overflow plugin of the very same patch does.
      
      [akpm@linux-foundation.org: fix i386 min() warnings]
      Signed-off-by: Mathias Krause <minipli@googlemail.com>
      Cc: Pax Team <pageexec@freemail.hu>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: Brad Spengler <spender@grsecurity.net>
      Cc: Manfred Spraul <manfred@colorfullife.com>
      Cc: <stable@vger.kernel.org>	[ v2.3.27+ -- yes, that old ;) ]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4e9b45a1
    • rbtree: fix rbtree_postorder_for_each_entry_safe() iterator · 1310a5a9
      Jan Kara committed
      The iterator rbtree_postorder_for_each_entry_safe() relies on pointer
      underflow behavior when testing for loop termination.  In particular it
      expects that
      
        &rb_entry(NULL, type, field)->field
      
      is NULL.  But the result of this expression is not defined by the C
      standard, and some gcc versions (e.g. 4.3.4) assume the above
      expression can never be equal to NULL.  The net result is an oops
      because the iteration is not properly terminated.
      
      Fix the problem by modifying the iterator to avoid pointer underflows.
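
      The safe pattern is to test the rb_node pointer before converting it
      to the containing entry; a sketch in the spirit of the fix (mainline
      names this helper rb_entry_safe()):

        /* Sketch: only call rb_entry() on a non-NULL node, so the iterator
         * never computes &rb_entry(NULL, ...)->field. */
        #define rb_entry_safe(ptr, type, member) \
                ({ typeof(ptr) ____ptr = (ptr); \
                   ____ptr ? rb_entry(____ptr, type, member) : NULL; \
                })
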
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Artem Bityutskiy <dedekind1@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
      Cc: Pablo Neira Ayuso <pablo@netfilter.org>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: <stable@vger.kernel.org>		[3.12.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1310a5a9
    • exec/ptrace: fix get_dumpable() incorrect tests · d049f74f
      Kees Cook committed
      The get_dumpable() return value is not boolean.  Most users of the
      function actually want to be testing for non-SUID_DUMP_USER(1) rather than
      SUID_DUMP_DISABLE(0).  The SUID_DUMP_ROOT(2) is also considered a
      protected state.  Almost all places did this correctly, except for
      the two places fixed in this patch.
      
      Wrong logic:
          if (dumpable == SUID_DUMP_DISABLE) { /* be protective */ }
              or
          if (dumpable == 0) { /* be protective */ }
              or
          if (!dumpable) { /* be protective */ }
      
      Correct logic:
          if (dumpable != SUID_DUMP_USER) { /* be protective */ }
              or
          if (dumpable != 1) { /* be protective */ }
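
      A standalone demo of why the wrong test misses SUID_DUMP_ROOT (an
      illustrative userspace program, not from the patch):

        #include <stdio.h>

        #define SUID_DUMP_DISABLE 0  /* no setuid dumping */
        #define SUID_DUMP_USER    1  /* dump as user of process */
        #define SUID_DUMP_ROOT    2  /* dump as root */

        int main(void)
        {
            int dumpable = SUID_DUMP_ROOT;  /* fs/suid_dumpable=2 */

            /* Wrong: treats only 0 as protected, so 2 slips through. */
            printf("wrong test protects: %s\n",
                   dumpable == SUID_DUMP_DISABLE ? "yes" : "no");
            /* Correct: anything other than SUID_DUMP_USER is protected. */
            printf("correct test protects: %s\n",
                   dumpable != SUID_DUMP_USER ? "yes" : "no");
            return 0;
        }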
      
      Without this patch, if the system had set the sysctl fs/suid_dumpable=2, a
      user was able to ptrace attach to processes that had dropped privileges to
      that user.  (This may have been partially mitigated if Yama was enabled.)
      
      The macros have been moved into the file that declares get/set_dumpable(),
      which means things like the ia64 code can see them too.
      
      CVE-2013-2929
      Reported-by: Vasily Kulikov <segoon@openwall.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d049f74f
    • rtc: s5m-rtc: add real-time clock driver for s5m8767 · 5bccae6e
      Sangbeom Kim committed
      Add real-time clock driver for s5m8767.
      Signed-off-by: Sangbeom Kim <sbkim73@samsung.com>
      Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
      Cc: Todd Broch <tbroch@chromium.org>
      Cc: Mark Brown <broonie@kernel.org>
      Acked-by: Lee Jones <lee.jones@linaro.org>	[mfd parts]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5bccae6e
    • init.h: document the existence of __initconst · 65321547
      Geert Uytterhoeven committed
      Initdata has been able to be const for more than five years, using
      the __initconst keyword.
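
      For example, a minimal usage sketch (hypothetical variable name):

        #include <linux/init.h>

        /* const boot-time data is marked __initconst, not __initdata */
        static const char boot_banner[] __initconst = "booting example";
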
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      65321547
    • list: introduce list_last_entry(), use list_{first,last}_entry() · 93be3c2e
      Oleg Nesterov committed
      We already have list_first_entry(); it makes sense to also add
      list_last_entry() for consistency.  Both helpers are then used in
      list_for_each_*().
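
      A sketch of the new helper's shape, mirroring list_first_entry():

        /* list_last_entry - get the last element from a list
         * @ptr:    the list head to take the element from.
         * @type:   the type of the struct this is embedded in.
         * @member: the name of the list_head within the struct. */
        #define list_last_entry(ptr, type, member) \
                list_entry((ptr)->prev, type, member)
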
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Eilon Greenstein <eilong@broadcom.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      93be3c2e
    • list: change list_for_each_entry*() to use list_*_entry() · 8120e2e5
      Oleg Nesterov committed
      Now that we have list_{next,prev}_entry(), we can change
      list_for_each_entry*() and list_safe_reset_next() to use the new
      helpers and improve readability.
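
      For instance, the open-coded list_entry() arithmetic in
      list_for_each_entry() collapses to (a sketch of the reworked macro):

        #define list_for_each_entry(pos, head, member)                    \
                for (pos = list_first_entry(head, typeof(*pos), member);  \
                     &pos->member != (head);                              \
                     pos = list_next_entry(pos, member))
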
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Eilon Greenstein <eilong@broadcom.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8120e2e5
    • list: introduce list_next_entry() and list_prev_entry() · 008208c6
      Oleg Nesterov committed
      Add two trivial helpers, list_next_entry() and list_prev_entry();
      they can have a lot of users, including list.h itself.  In fact the
      first one is already defined in events/core.c and bnx2x_sp.c, so the
      patch simply moves the definition to list.h.
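
      Both are thin wrappers around list_entry(); a sketch of their shape:

        /* Get the next/prev element of the same container type, given a
         * cursor @pos whose struct embeds a list_head called @member. */
        #define list_next_entry(pos, member) \
                list_entry((pos)->member.next, typeof(*(pos)), member)

        #define list_prev_entry(pos, member) \
                list_entry((pos)->member.prev, typeof(*(pos)), member)
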
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Eilon Greenstein <eilong@broadcom.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      008208c6