1. 04 Mar 2014, 5 commits
    • memcg: fix endless loop in __mem_cgroup_iter_next() · ce48225f
      Committed by Hugh Dickins
      Commit 0eef6156 ("memcg: fix css reference leak and endless loop in
      mem_cgroup_iter") got its interaction with a commit a few before it,
      d8ad3055 ("mm/memcg: iteration skip memcgs not yet fully
      initialized"), slightly wrong, and we didn't notice at the time.
      
      It's elusive, and harder to hit than the original, but for a couple of
      days before rc1 I several times saw an endless loop similar to the one
      supposedly being fixed.
      
      This time it was a tighter loop in __mem_cgroup_iter_next(): we can get
      here when our root has already been offlined, and the ordering of the
      conditions was such that we then just cycled around forever.
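      
      A minimal sketch of the kind of reordering involved, using the css
      iteration names from mm/memcontrol.c (the exact patch context may
      differ):
      
        /* Before: an offlined root never has CSS_ONLINE set, so the
         * root shortcut on the right is never reached and the loop
         * restarts forever once iteration wraps back to the root.
         */
        if ((next_css->flags & CSS_ONLINE) &&
            (next_css == &root->css || css_tryget(next_css)))
                return mem_cgroup_from_css(next_css);
      
        /* After: accept the root unconditionally; only other memcgs
         * need to be online and to give us a reference.
         */
        if ((next_css == &root->css) ||
            ((next_css->flags & CSS_ONLINE) && css_tryget(next_css)))
                return mem_cgroup_from_css(next_css);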
      
      Fixes: 0eef6156 ("memcg: fix css reference leak and endless loop in mem_cgroup_iter")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: <stable@vger.kernel.org>	[3.12+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib/radix-tree.c: swapoff tmpfs radix_tree: remember to rcu_read_unlock · 5f30fc94
      Committed by Hugh Dickins
      Running fsx on tmpfs with concurrent memhog-swapoff-swapon, lots of
      
        BUG: sleeping function called from invalid context at kernel/fork.c:606
        in_atomic(): 0, irqs_disabled(): 0, pid: 1394, name: swapoff
        1 lock held by swapoff/1394:
         #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff812520a1>] radix_tree_locate_item+0x1f/0x2b6
      
      followed by
      
        ================================================
        [ BUG: lock held when returning to user space! ]
        3.14.0-rc1 #3 Not tainted
        ------------------------------------------------
        swapoff/1394 is leaving the kernel with locks still held!
        1 lock held by swapoff/1394:
         #0:  (rcu_read_lock){.+.+.+}, at: [<ffffffff812520a1>] radix_tree_locate_item+0x1f/0x2b6
      
      after which the system recovered nicely.
      
      Whoops, I long ago forgot the rcu_read_unlock() on one unlikely branch.
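      
      The shape of the fix, sketched against radix_tree_locate_item() in
      lib/radix-tree.c (the surrounding context is approximate):
      
        if (cur_index > max_index) {
                rcu_read_unlock();      /* the long-forgotten unlock */
                break;
        }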
      
      Fixes: e504f3fd ("tmpfs radix_tree: locate_item to speed up swapoff")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dma debug: account for cachelines and read-only mappings in overlap tracking · 3b7a6418
      Committed by Dan Williams
      While debug_dma_assert_idle() checks whether a given *page* is actively
      undergoing dma, the valid granularity of a dma mapping is a *cacheline*.
      Sander's testing shows that the warning message "DMA-API: exceeded 7
      overlapping mappings of pfn..." is triggering falsely: the test is
      simply mapping multiple cachelines in a given page.
      
      Ultimately we want overlap tracking to be valid as it is a real api
      violation, so we need to track active mappings by cachelines.  Update
      the active dma tracking to use the page-frame-relative cacheline of the
      mapping as the key, and update debug_dma_assert_idle() to check for all
      possible mapped cachelines for a given page.
      
      However, the need to track active mappings is only relevant when the
      dma-mapping is writable by the device.  In fact it is fairly standard
      for read-only mappings to have hundreds or thousands of overlapping
      mappings at once.  Limiting the overlap tracking to writable
      (!DMA_TO_DEVICE) eliminates this class of false-positive overlap
      reports.
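      
      A sketch of both ideas, assuming the kernel's L1_CACHE_SHIFT and the
      dma_debug_entry layout from lib/dma-debug.c (details may differ from
      the actual patch):
      
        /* Cachelines per page: the key granularity for overlap tracking. */
        #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT)
      
        /* Key active mappings by page-frame-relative cacheline rather
         * than by page; one page holds many independent cachelines.
         */
        static phys_addr_t to_cacheline_number(struct dma_debug_entry *entry)
        {
                return (entry->pfn << CACHELINE_PER_PAGE_SHIFT) +
                        (entry->offset >> L1_CACHE_SHIFT);
        }
      
        /* Read-only (cpu-to-device) mappings overlap legitimately in
         * large numbers, so skip tracking them entirely.
         */
        if (entry->direction == DMA_TO_DEVICE)
                return;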
      
      Note, the radix gang lookup is sub-optimal.  It would be best if it
      stopped fetching entries once the search passed a page boundary.
      Nevertheless, this implementation does not perturb the original net_dma
      failing case.  That is to say the extra overhead does not show up in
      terms of making the failing case pass due to a timing change.
      
      References:
        http://marc.info/?l=linux-netdev&m=139232263419315&w=2
        http://marc.info/?l=linux-netdev&m=139217088107122&w=2
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
      Reported-by: Dave Jones <davej@redhat.com>
      Tested-by: Dave Jones <davej@redhat.com>
      Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Francois Romieu <romieu@fr.zoreil.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: close PageTail race · 668f9abb
      Committed by David Rientjes
      Commit bf6bddf1 ("mm: introduce compaction and migration for
      ballooned pages") introduces page_count(page) into memory compaction
      which dereferences page->first_page if PageTail(page).
      
      This results in a very rare NULL pointer dereference in the
      aforementioned page_count(page).  Indeed, anything that does
      compound_head(), including page_count(), is susceptible to racing with
      prep_compound_page() and seeing a NULL or dangling page->first_page
      pointer.
      
      This patch uses Andrea's implementation of compound_trans_head(), which
      deals with such a race, and makes it the default compound_head()
      implementation.  This includes a read memory barrier that ensures that,
      if PageTail(head) is true, we return a head page that is neither NULL
      nor dangling.  The patch then adds a store memory barrier to
      prep_compound_page() to ensure page->first_page is set.
      
      This is the safest way to ensure we see the head page that we are
      expecting; PageTail(page) is already in the unlikely() path, and the
      memory barriers are unfortunately required.
      
      Hugetlbfs is the exception: we don't enforce a store memory barrier
      during init since no race is possible there.
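      
      A sketch of the paired barriers, following the commit text (names from
      include/linux/mm.h and mm/page_alloc.c; context approximate):
      
        static inline struct page *compound_head(struct page *page)
        {
                if (unlikely(PageTail(page))) {
                        struct page *head = page->first_page;
      
                        /* Pairs with the smp_wmb() in prep_compound_page():
                         * recheck PageTail so a stale first_page from a
                         * torn-down compound page is never returned.
                         */
                        smp_rmb();
                        if (likely(PageTail(page)))
                                return head;
                }
                return page;
        }
      
        /* In prep_compound_page(): publish first_page before the tail
         * flag becomes visible to other CPUs.
         */
        p->first_page = page;
        smp_wmb();
        __SetPageTail(p);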
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Holger Kiehl <Holger.Kiehl@dwd.de>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Rafael Aquini <aquini@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • MAINTAINERS: EDAC: add Mauro and Borislav as interim patch collectors · aa15aa0e
      Committed by Borislav Petkov
      We're more or less collecting EDAC patches already anyway, so let's
      write it down so that get_maintainer sees it too.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
      Cc: Doug Thompson <dougthompson@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 03 Mar 2014, 8 commits
  3. 02 Mar 2014, 8 commits
  4. 01 Mar 2014, 10 commits
  5. 28 Feb 2014, 9 commits
    • arm64: mm: Add double logical invert to pte accessors · 84fe6826
      Committed by Steve Capper
      Page table entries on ARM64 are 64 bits, and some pte functions such as
      pte_dirty return a bitwise-and of a flag with the pte value.  If the
      flag to be tested resides in the upper 32 bits of the pte, then we run
      the risk of the result being dropped when it is downcast.
      
      For example:
      	gather_stats(page, md, pte_dirty(*pte), 1);
      where pte_dirty(*pte) is downcast to an int.
      
      This patch adds a double logical invert to all the pte_ accessors to
      ensure predictable downcasting.
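      
      The idiom, sketched (simplified from the arm64 pte accessors; macro
      names assumed):
      
        /* Before: PTE_DIRTY may live in the upper 32 bits, so the
         * result silently becomes 0 when truncated to int.
         */
        #define pte_dirty(pte)  (pte_val(pte) & PTE_DIRTY)
      
        /* After: !! collapses the test to 0 or 1, which survives any
         * downcast.
         */
        #define pte_dirty(pte)  (!!(pte_val(pte) & PTE_DIRTY))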
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • dm cache: fix truncation bug when mapping I/O to >2TB fast device · e0d849fa
      Committed by Heinz Mauelshagen
      When remapping a block to the cache's fast device that is larger than
      2TB, we must not truncate the destination sector to 32 bits.  The
      32-bit temporary result of from_cblock() was being overflowed in
      remap_to_cache() due to the logical left shift.
      
      Use an intermediate 64-bit type to store the 32-bit from_cblock()
      result to fix the overflow.
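      
      The failure mode, sketched (assuming from_cblock() yields a 32-bit
      value and the bio sector is 64-bit, per the dm-cache code; context
      approximate):
      
        /* Broken: the shift is evaluated in 32-bit arithmetic, losing
         * the high bits before the widening assignment.
         */
        bio->bi_iter.bi_sector =
                (from_cblock(cblock) << cache->sectors_per_block_shift) |
                (sector & (cache->sectors_per_block - 1));
      
        /* Fixed: widen to 64 bits first, then shift. */
        sector_t block = from_cblock(cblock);
        bio->bi_iter.bi_sector =
                (block << cache->sectors_per_block_shift) |
                (sector & (cache->sectors_per_block - 1));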
      Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
    • perf tools: Fix strict alias issue for find_first_bit · b39c2a57
      Committed by Jiri Olsa
      When compiling the perf tool code with gcc 4.4.7, I'm getting the
      following error:
      
          CC       util/session.o
        cc1: warnings being treated as errors
        util/session.c: In function ‘perf_session_deliver_event’:
        tools/perf/util/include/linux/bitops.h:109: error: dereferencing pointer ‘p’ does break strict-aliasing rules
        tools/perf/util/include/linux/bitops.h:101: error: dereferencing pointer ‘p’ does break strict-aliasing rules
        util/session.c:697: note: initialized from here
        tools/perf/util/include/linux/bitops.h:101: note: initialized from here
        make[1]: *** [util/session.o] Error 1
        make: *** [util/session.o] Error 2
      
      The aliased types here are u64 and unsigned long pointers, which is safe
      for the find_first_bit processing.
      
      This error shows up for me only with gcc 4.4 on 32-bit x86, even at
      -Wstrict-aliasing=3, while newer gccs are quiet at that level and only
      scream here for -Wstrict-aliasing={2,1}.  It looks like newer gccs
      changed the rules for strict-alias warnings.
      
      The gcc documentation offers a workaround for valid aliasing by using
      the __may_alias__ attribute:
      
        http://gcc.gnu.org/onlinedocs/gcc-4.4.0/gcc/Type-Attributes.html
      
      This patch uses that workaround for the find_first_bit function.
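      
      The workaround, sketched against the tools/perf bitops header
      (simplified; the real code also handles a trailing partial word):
      
        /* A pointer of this type may legally alias other types, so
         * dereferences through it are exempt from strict-aliasing
         * analysis.
         */
        typedef const unsigned long __attribute__((__may_alias__)) long_alias_t;
      
        static inline unsigned long find_first_bit(const unsigned long *addr,
                                                   unsigned long size)
        {
                /* The u64 event array arrives as unsigned long *; scan
                 * it through the may_alias type to keep gcc 4.4 quiet.
                 */
                const long_alias_t *p = (const long_alias_t *) addr;
                unsigned long result = 0;
      
                while (size >= 8 * sizeof(long)) {
                        if (*p)
                                return result + __builtin_ctzl(*p);
                        p++;
                        result += 8 * sizeof(long);
                        size -= 8 * sizeof(long);
                }
                return result;
        }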
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1393434867-20271-1-git-send-email-jolsa@redhat.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • powerpc/powernv: Fix indirect XSCOM unmangling · e0cf9576
      Committed by Benjamin Herrenschmidt
      We need to unmangle the full address, not just the register
      number, and we also need to support the real indirect bit
      being set for in-kernel uses.
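      
      A sketch of the kind of unmangling involved (the bit positions follow
      the P8 XSCOM convention; treat them as assumptions, not the exact
      patch):
      
        static u64 opal_scom_unmangle(u64 addr)
        {
                /* The debugfs path shifts addresses left by 3 and uses
                 * signed offsets, so callers cannot pass the real
                 * indirect bit (bit 63) directly.  Accept it lower in
                 * the address and move it back up here, clearing the
                 * mangled top byte.
                 */
                if (addr & (1ull << 59))
                        addr = (addr & ~(0xffull << 56)) | (1ull << 63);
                return addr;
        }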
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: <stable@vger.kernel.org> [v3.13]
    • powerpc/powernv: Fix opal_xscom_{read,write} prototype · 2f3f38e4
      Committed by Benjamin Herrenschmidt
      The OPAL firmware functions opal_xscom_read and opal_xscom_write
      take a 64-bit argument for the XSCOM (PCB) address in order to
      support the indirect mode on P8.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: <stable@vger.kernel.org> [v3.13]
    • powerpc/powernv: Refactor PHB diag-data dump · af87d2fe
      Committed by Gavin Shan
      As Ben suggested, the patch prints the PHB diag-data with multiple
      fields per line, and omits a line entirely if all of its fields are
      zero.
      
      With the patch applied, the PHB3 diag-data dump looks like:
      
      PHB3 PHB#3 Diag-data (Version: 1)
      
        brdgCtl:     00000002
        RootSts:     0000000f 00400000 b0830008 00100147 00002000
        nFir:        0000000000000000 0030006e00000000 0000000000000000
        PhbSts:      0000001c00000000 0000000000000000
        Lem:         0000000000100000 42498e327f502eae 0000000000000000
        InAErr:      8000000000000000 8000000000000000 0402030000000000 0000000000000000
        PE[  8] A/B: 8480002b00000000 8000000000000000
      
      [ The current diag data is so big that it overflows the printk
        buffer pretty quickly in cases when we get a handful of errors
        at once which can happen. --BenH
      ]
      Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
      CC: <stable@vger.kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/powernv: Dump PHB diag-data immediately · 94716604
      Committed by Gavin Shan
      The PHB diag-data is important for locating the root cause of EEH
      errors such as a frozen PE or fenced PHB.  However, the EEH core
      enables the IO path by clearing some HW registers before collecting
      this data, causing it to be corrupted.
      
      This patch fixes this by dumping the PHB diag-data immediately when
      frozen/fenced state on PE or PHB is detected for the first time in
      eeh_ops::get_state() or next_error() backend.
      Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
      CC: <stable@vger.kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Increase stack redzone for 64-bit userspace to 512 bytes · 573ebfa6
      Committed by Paul Mackerras
      The new ELFv2 little-endian ABI increases the stack redzone -- the
      area below the stack pointer that can be used for storing data --
      from 288 bytes to 512 bytes.  This means that we need to allow more
      space on the user stack when delivering a signal to a 64-bit process.
      
      To make the code a bit clearer, we define new USER_REDZONE_SIZE and
      KERNEL_REDZONE_SIZE symbols in ptrace.h.  For now, we leave the
      kernel redzone size at 288 bytes, since increasing it to 512 bytes
      would increase the size of interrupt stack frames correspondingly.
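      
      A sketch of the resulting split, with the sizes taken from this text
      (the exact ptrace.h context is assumed):
      
        /* ELFv2 LE userspace may store up to 512 bytes below r1. */
        #define USER_REDZONE_SIZE       512
      
        /* The kernel is still built with the old ABI; keeping 288
         * avoids growing every interrupt stack frame.
         */
        #define KERNEL_REDZONE_SIZE     288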
      
      Gcc currently only makes use of 288 bytes of redzone even when
      compiling for the new little-endian ABI, and the kernel cannot
      currently be compiled with the new ABI anyway.
      
      In the future, hopefully gcc will provide an option to control the
      amount of redzone used, and then we could reduce it even more.
      
      This also changes the code in arch_compat_alloc_user_space() to
      preserve the expanded redzone.  It is not clear why this function would
      ever be used on a 64-bit process, though.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      CC: <stable@vger.kernel.org> [v3.13]
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/ftrace: bugfix for test_24bit_addr · a95fc585
      Committed by Liu Ping Fan
      The branch target should be the function's entry address, not the
      address of its func_descr_t.  So use ppc_function_entry() to generate
      the right target address.
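      
      The shape of the fix, sketched from arch/powerpc/kernel/ftrace.c
      (context approximate):
      
        static int test_24bit_addr(unsigned long ip, unsigned long addr)
        {
                /* On ABIv1 a function symbol points at a func_descr_t;
                 * resolve it to the real entry point before testing
                 * whether a 24-bit relative branch can reach it.
                 */
                addr = ppc_function_entry((void *)addr);
      
                return create_branch((unsigned int *)ip, addr, 0);
        }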
      Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>