1. 22 Aug 2013 (5 commits)
  2. 17 Aug 2013 (1 commit)
    • s390: Fix broken build · 215b28a5
      Guenter Roeck committed
      Fix this build error:
      
        In file included from fs/exec.c:61:0:
        arch/s390/include/asm/tlb.h:35:23: error: expected identifier or '(' before 'unsigned'
        arch/s390/include/asm/tlb.h:36:1: warning: no semicolon at end of struct or union [enabled by default]
        arch/s390/include/asm/tlb.h: In function 'tlb_gather_mmu':
        arch/s390/include/asm/tlb.h:57:5: error: 'struct mmu_gather' has no member named 'end'
      
      Broken due to commit 2b047252 ("Fix TLB gather virtual address range
      invalidation corner cases").
      
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      [ Oh well. We had build testing for ppc and um, but no s390 - Linus ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
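      The patch body is not reproduced here, but the errors above point at the shape of the problem: the generic
      tlb_gather_mmu() interface gained a start/end range, while the s390 <asm/tlb.h> still carried the old prototype
      and lacked the new fields. A minimal sketch of that kind of repair (field list abbreviated and only loosely
      based on the real header, so treat names and layout as assumptions):

        /* arch/s390/include/asm/tlb.h -- illustrative sketch, not the applied diff */
        struct mmu_gather {
                struct mm_struct *mm;
                struct mmu_table_batch *batch;
                unsigned int fullmm;
                unsigned long start;   /* new: flush range start */
                unsigned long end;     /* new: the 'end' member the error reports as missing */
        };

        static inline void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
                                          unsigned long start, unsigned long end)
        {
                tlb->mm = mm;
                tlb->start = start;
                tlb->end = end;
                tlb->fullmm = !(start | (end + 1));   /* (0, -1) means "whole address space" */
                tlb->batch = NULL;
        }

      The point is only that the architecture header has to follow the generic interface; the exact field set in the
      real file differs.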
  3. 16 Aug 2013 (1 commit)
    • Fix TLB gather virtual address range invalidation corner cases · 2b047252
      Linus Torvalds committed
      Ben Tebulin reported:
      
       "Since v3.7.2 on two independent machines a very specific Git
        repository fails in 9/10 cases on git-fsck due to an SHA1/memory
        failures.  This only occurs on a very specific repository and can be
        reproduced stably on two independent laptops.  Git mailing list ran
        out of ideas and for me this looks like some very exotic kernel issue"
      
      and bisected the failure to the backport of commit 53a59fc6 ("mm:
      limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT").
      
      That commit itself is not actually buggy, but what it does is to make it
      much more likely to hit the partial TLB invalidation case, since it
      introduces a new case in tlb_next_batch() that previously only ever
      happened when running out of memory.
      
      The real bug is that the TLB gather virtual memory range setup is subtly
      buggered.  It was introduced in commit 597e1c35 ("mm/mmu_gather:
      enable tlb flush range in generic mmu_gather"), and the range handling
      was already fixed at least once in commit e6c495a9 ("mm: fix the TLB
      range flushed when __tlb_remove_page() runs out of slots"), but that fix
      was not complete.
      
      The problem with the TLB gather virtual address range is that it isn't
      set up by the initial tlb_gather_mmu() initialization (which didn't get
      the TLB range information), but it is set up ad-hoc later by the
      functions that actually flush the TLB.  And so any such case that forgot
      to update the TLB range entries would potentially miss TLB invalidates.
      
      Rather than try to figure out exactly which particular ad-hoc range
      setup was missing (I personally suspect it's the hugetlb case in
      zap_huge_pmd(), which didn't have the same logic as zap_pte_range()
      did), this patch just gets rid of the problem at the source: make the
      TLB range information available to tlb_gather_mmu(), and initialize it
      when initializing all the other tlb gather fields.
      
      This makes the patch larger, but conceptually much simpler.  And the end
      result is much more understandable; even if you want to play games with
      partial ranges when invalidating the TLB contents in chunks, now the
      range information is always there, and anybody who doesn't want to
      bother with it won't introduce subtle bugs.
      
      Ben verified that this fixes his problem.
      Reported-bisected-and-tested-by: Ben Tebulin <tebulin@googlemail.com>
      Build-testing-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Build-testing-by: Richard Weinberger <richard.weinberger@gmail.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
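      The core of the change is easier to see in code form. Roughly (condensed from memory rather than quoted from
      the patch), tlb_gather_mmu() now receives the range and records it together with the other gather state, and
      the callers hand the range in up front:

        /* Rough shape of the approach, not the literal diff. */
        void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
                            unsigned long start, unsigned long end)
        {
                tlb->mm     = mm;
                tlb->start  = start;
                tlb->end    = end;
                tlb->fullmm = !(start | (end + 1));   /* (0, -1) still means "entire mm" */
                /* ...remaining fields initialized exactly as before... */
        }

        /* A caller such as zap_page_range() then looks approximately like:
         *
         *      tlb_gather_mmu(&tlb, mm, start, end);
         *      unmap_single_vma(&tlb, vma, start, end, details);
         *      tlb_finish_mmu(&tlb, start, end);
         */

      Any code that still wants to flush in smaller chunks can shrink the range as it goes, but a path that never
      touches it can no longer leave the gather with an empty or stale range.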
  4. 26 Jul 2013 (1 commit)
    • s390/bitops: fix find_next_bit_left · 3b0040a4
      Martin Schwidefsky committed
      The find_next_bit_left function is broken when used with an offset that is not
      a multiple of 64. The shift that masks off the bits of a 64-bit word that are
      not to be searched goes in the wrong direction, so the result can be either a
      bit smaller than the offset or a failure to find a set bit at all.
      
      Cc: <stable@vger.kernel.org> # v3.8+
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
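      The direction of the mask shift is the whole bug, and it is easiest to see with the MSB-first bit numbering
      that find_next_bit_left uses (bit 0 is the most significant bit of the word). A standalone sketch of the
      word-level masking, with hypothetical helper names rather than the s390 implementation itself:

        #include <stdint.h>

        /* Return the index (MSB-first numbering) of the first set bit, or 64 if none. */
        static unsigned int first_set_bit_left(uint64_t word)
        {
                unsigned int bit;

                for (bit = 0; bit < 64; bit++)
                        if (word & (1ULL << (63 - bit)))
                                return bit;
                return 64;
        }

        /* Find the first set bit at position >= offset within one 64-bit word. */
        static unsigned int find_next_bit_left_in_word(uint64_t word, unsigned int offset)
        {
                if (offset >= 64)
                        return 64;
                word &= ~0ULL >> offset;   /* correct: clears bits 0..offset-1, i.e. the high bits */
                /* word &= ~0ULL << offset;   wrong direction: clears exactly the bits to search   */
                return first_set_bit_left(word);
        }

      With the shift going the wrong way, the masked word keeps the bits below the offset and drops bits near the
      end of the word, which is how a result smaller than the offset (or a spurious "not found") can come back.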
  5. 16 Jul 2013 (1 commit)
  6. 29 Jun 2013 (1 commit)
  7. 27 Jun 2013 (8 commits)
  8. 21 Jun 2013 (1 commit)
  9. 20 Jun 2013 (1 commit)
  10. 19 Jun 2013 (2 commits)
  11. 17 Jun 2013 (4 commits)
  12. 05 Jun 2013 (3 commits)
  13. 28 May 2013 (1 commit)
  14. 22 May 2013 (2 commits)
  15. 21 May 2013 (5 commits)
  16. 17 May 2013 (1 commit)
  17. 15 May 2013 (1 commit)
    • s390/ftrace: fix mcount adjustment · aca91209
      Heiko Carstens committed
      Tony Jones reported that the ftrace self tests on s390 do not work:
      
      <6>Testing dynamic ftrace ops #1: (0 0 0 0 0) FAILED!
      <6>Testing tracer irqsoff:
      <3>failed to start irqsoff tracer
      <4>.. no entries found ..FAILED!
      <6>Testing tracer wakeup:
      <3>failed to start wakeup tracer
      <4>.. no entries found ..FAILED!
      <6>Testing tracer function_graph:
      <4>Failed to init function_graph tracer, init returned -19
      <4>FAILED!
      
      This happens because we forgot to adjust the instruction pointer that gets
      passed to the ftrace trace function by MCOUNT_INSN_SIZE.
      
      In addition, change MCOUNT_INSN_SIZE to the correct value on 31 bit.
      It only worked so far because the instruction to be patched happened to be identical.
      Reported-by: Tony Jones <tonyj@suse.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
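      The missing adjustment is a one-liner in spirit: what mcount has in hand is the return address just past the
      mcount call sequence, while ftrace wants the address of the call site itself, so the value has to be wound
      back by MCOUNT_INSN_SIZE before the trace function sees it. A simplified, illustrative C version (the real
      adjustment lives in the s390 mcount/ftrace_caller assembly; the typedef and the constant below are placeholders):

        /* Placeholder value for the sketch only; the real MCOUNT_INSN_SIZE is
         * ABI specific, which is what the 31-bit part of the fix is about. */
        #define MCOUNT_INSN_SIZE 16

        typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip);

        static void mcount_dispatch(ftrace_func_t trace,
                                    unsigned long return_address,
                                    unsigned long parent_ip)
        {
                /* Rewind from "address after the mcount call" to the call site;
                 * skipping this is what made the self tests fail to match entries. */
                trace(return_address - MCOUNT_INSN_SIZE, parent_ip);
        }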
  18. 07 May 2013 (1 commit)