  1. 04 June 2009 (1 commit)
  2. 02 June 2009 (1 commit)
  3. 30 May 2009 (2 commits)
  4. 29 May 2009 (5 commits)
    • x86: ignore VM_LOCKED when determining if hugetlb-backed page tables can be shared or not · 32b154c0
      Authored by Mel Gorman
      Addresses http://bugzilla.kernel.org/show_bug.cgi?id=13302
      
      On x86 and x86-64, it is possible that page tables are shared between
      shared mappings backed by hugetlbfs.  As part of this,
      page_table_shareable() checks a pair of vma->vm_flags and they must match
      if they are to be shared.  All VMA flags are taken into account, including
      VM_LOCKED.
      
      The problem is that VM_LOCKED is cleared on fork().  When a process with a
      shared memory segment forks() to exec() a helper, there will be shared
      VMAs with different flags.  The impact is that the shared segment is
      sometimes considered shareable and other times not, depending on what
      process is checking.
      
      What happens is that the segment page tables are being shared but the
      count is inaccurate depending on the ordering of events.  As the page
      tables are freed with put_page(), bad pmds are found when some of the
      children exit.  The hugepage counters also get corrupted and the Total and
      Free count will no longer match even when all the hugepage-backed regions
      are freed.  This requires a reboot of the machine to "fix".
      
      This patch addresses the problem by comparing all flags except VM_LOCKED
      when deciding whether page tables should be shared for a hugetlbfs-backed
      mapping.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: <stable@kernel.org>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: <starlight@binnacle.cx>
      Cc: Eric B Munson <ebmunson@us.ibm.com>
      Cc: Adam Litke <agl@us.ibm.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      32b154c0
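      A minimal user-space sketch of the comparison described above, assuming
      illustrative flag values: the VM_* constants and flags_shareable() below
      are stand-ins for the kernel's definitions and for the check inside
      page_table_shareable(), not the actual kernel code.

        #include <stdio.h>

        /* Illustrative flag values only; the real VM_* flags live in the
         * kernel's <linux/mm.h> and differ between versions. */
        #define VM_READ   0x0001UL
        #define VM_WRITE  0x0002UL
        #define VM_SHARED 0x0008UL
        #define VM_LOCKED 0x2000UL

        /* All flags except VM_LOCKED must match for two hugetlbfs mappings
         * to be allowed to share page tables. */
        static int flags_shareable(unsigned long a, unsigned long b)
        {
                return (a & ~VM_LOCKED) == (b & ~VM_LOCKED);
        }

        int main(void)
        {
                unsigned long parent = VM_READ | VM_WRITE | VM_SHARED | VM_LOCKED;
                unsigned long child  = VM_READ | VM_WRITE | VM_SHARED; /* VM_LOCKED cleared on fork() */

                printf("exact comparison:   %d\n", parent == child);                /* 0: sharing decision flip-flops */
                printf("ignoring VM_LOCKED: %d\n", flags_shareable(parent, child)); /* 1: consistent answer */
                return 0;
        }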
    • flat: fix data sections alignment · c3dc5bec
      Authored by Oskar Schirmer
      The flat loader uses an architecture's flat_stack_align() to align the
      stack but assumes word-alignment is enough for the data sections.
      
      However, on the Xtensa S6000 we have registers up to 128 bits wide
      which can be used from userspace and therefore need userspace stack and
      data-section alignment of at least this size.
      
      This patch drops flat_stack_align() and uses the same alignment that
      is required for slab caches, ARCH_SLAB_MINALIGN, or wordsize if it's
      not defined by the architecture.
      
      It also fixes m32r, which was obviously broken: it aligned an
      uninitialized stack entry instead of the stack pointer.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Oskar Schirmer <os@emlix.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Bryan Wu <cooloney@kernel.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Greg Ungerer <gerg@uclinux.org>
      Signed-off-by: Johannes Weiner <jw@emlix.com>
      Acked-by: Mike Frysinger <vapier.adi@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c3dc5bec
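      A small sketch of the alignment rule described above, with
      ARCH_SLAB_MINALIGN hard-coded as a fallback purely for illustration; in
      the kernel the value comes from the architecture headers and the
      rounding is applied to the flat binary's stack and data sections.

        #include <stdio.h>
        #include <stdint.h>

        /* Mirror of the rule in the commit: use the slab minimum alignment
         * when the architecture defines one, otherwise fall back to the word
         * size. Hard-coded here only for illustration. */
        #ifndef ARCH_SLAB_MINALIGN
        #define ARCH_SLAB_MINALIGN sizeof(unsigned long)
        #endif
        #define FLAT_DATA_ALIGN (ARCH_SLAB_MINALIGN)

        /* Round an address down to the required alignment; flat stacks grow
         * downwards, so rounding down keeps the pointer inside the region. */
        static uintptr_t align_down(uintptr_t addr, uintptr_t align)
        {
                return addr & ~(align - 1);
        }

        int main(void)
        {
                uintptr_t sp = 0x7fff1237;
                printf("aligned sp: %#lx\n",
                       (unsigned long)align_down(sp, FLAT_DATA_ALIGN));
                return 0;
        }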
    • [ARM] update mach-types · 6daad5c6
      Authored by Russell King
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      6daad5c6
    • [ARM] Add cmpxchg support for ARMv6+ systems (v5) · ecd322c9
      Authored by Mathieu Desnoyers
      Add cmpxchg/cmpxchg64 support for ARMv6K and ARMv7 systems
      (original patch from Catalin Marinas <catalin.marinas@arm.com>)
      
      The cmpxchg and cmpxchg64 functions can be implemented using the
      LDREX*/STREX* instructions. Since operand lengths other than 32 bits are
      required, the full implementations are only available if the ARMv6K
      extensions are present (for the LDREXB, LDREXH and LDREXD instructions).
      
      For ARMv6, only the 32-bit cmpxchg is available.
      
      Mathieu:
      
      Make cmpxchg_local always available with the best implementation for all
      type sizes (1, 2, 4 bytes).
      Make cmpxchg64_local always available.
      
      Use the "Ir" constraint for the "old" operand, as atomic_cmpxchg in
      atomic.h does.
      
      Changes since v3:
      - Add "memory" clobbers (thanks to Nicolas Pitre)
      - Remove __asmeq(), which is only needed for old compilers that are very
        unlikely on ARMv6+.
      
      Note: ARMv7-M should eventually be #ifdef'ed out of cmpxchg64, but it is
      not currently supported by the Linux kernel.
      
      Put back arm < v6 cmpxchg support.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      CC: Catalin Marinas <catalin.marinas@arm.com>
      CC: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ecd322c9
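      A sketch of the 32-bit case, modelled on the LDREX/STREX loop and the
      "Ir"/"memory" notes in the message above; it only builds with GCC
      targeting ARMv6 or later, and the function name is illustrative rather
      than the kernel's.

        /* Compare-and-exchange a 32-bit value using LDREX/STREX: retry until
         * the exclusive store succeeds or the comparison fails. */
        static inline unsigned long cmpxchg32_sketch(volatile unsigned long *ptr,
                                                     unsigned long old,
                                                     unsigned long new)
        {
                unsigned long oldval, res;

                do {
                        __asm__ __volatile__(
                        "ldrex   %1, [%2]\n"      /* load current value, mark exclusive */
                        "mov     %0, #0\n"        /* assume success */
                        "teq     %1, %3\n"        /* does it still equal 'old'? */
                        "strexeq %0, %4, [%2]\n"  /* if so, try the store; res=1 on failure */
                                : "=&r" (res), "=&r" (oldval)
                                : "r" (ptr), "Ir" (old), "r" (new)
                                : "memory", "cc");
                } while (res);    /* lost the exclusive reservation: retry */

                return oldval;
        }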
    • [ARM] barriers: improve xchg, bitops and atomic SMP barriers · bac4e960
      Authored by Russell King
      Mathieu Desnoyers pointed out that the ARM barriers were lacking:
      
      - cmpxchg, xchg and atomic add return need memory barriers on
        architectures which can reorder memory reads and writes as seen
        between CPUs, which seems to include recent ARM architectures.
        Those barriers are currently missing on ARM.
      
      - test_and_xxx_bit were missing SMP barriers.
      
      So put these barriers in.  Provide separate atomic_add/atomic_sub
      operations which do not require barriers.
      Reported-Reviewed-and-Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      bac4e960
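      A sketch of the barrier placement described above for an add-return
      style atomic; atomic_sketch_t and barrier_mb() are illustrative
      stand-ins for the kernel's atomic_t and smp_mb(), and the DMB
      instruction shown requires ARMv7 (ARMv6 uses a CP15 operation instead).

        typedef struct { volatile int counter; } atomic_sketch_t;

        /* Full memory barrier, shown here only to illustrate placement. */
        #define barrier_mb() __asm__ __volatile__("dmb" : : : "memory")

        /* The update itself is an LDREX/STREX retry loop; the barriers before
         * and after make the whole operation a full barrier on SMP. */
        static inline int atomic_add_return_sketch(int i, atomic_sketch_t *v)
        {
                unsigned long tmp;
                int result;

                barrier_mb();           /* order earlier accesses before the update */

                __asm__ __volatile__(
                "1:     ldrex   %0, [%2]\n"
                "       add     %0, %0, %3\n"
                "       strex   %1, %0, [%2]\n"
                "       teq     %1, #0\n"
                "       bne     1b"
                        : "=&r" (result), "=&r" (tmp)
                        : "r" (&v->counter), "Ir" (i)
                        : "cc");

                barrier_mb();           /* order the update before later accesses */

                return result;
        }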
  5. 28 May 2009 (1 commit)
  6. 27 May 2009 (13 commits)
  7. 26 May 2009 (4 commits)
    • x86, relocs: ignore R_386_NONE in kernel relocation entries · 46176b4f
      Authored by Tejun Heo
      For relocatable 32-bit kernels, boot/compressed/relocs.c processes
      relocation entries in the kernel image and appends them to the kernel
      image so that boot/compressed/head_32.S can relocate the kernel.
      The kernel image is one statically linked object and uses only two
      relocation types - R_386_PC32 and R_386_32; of the two, only the latter
      needs massaging during kernel relocation and is thus handled by relocs.
      R_386_PC32 is ignored and all other relocation types are considered
      errors.
      
      When the target of a relocation resides in a discarded section,
      binutils doesn't throw away the relocation record but nullifies it by
      changing it to R_386_NONE, which unfortunately makes relocs fail.
      
      The problem was triggered by as-yet out-of-tree x86 stack unwind patches,
      but given the binutils behavior, ignoring R_386_NONE is the right
      thing to do.
      
      The problem has been tracked down to binutils behavior by Jan Beulich.
      
      [ Impact: fix build with certain binutils by ignoring R_386_NONE ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jan Beulich <JBeulich@novell.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      LKML-Reference: <4A1B8150.40702@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      46176b4f
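      A sketch of the filtering logic described above; record_reloc() and the
      error handling are placeholders for illustration, not the actual
      relocs.c code.

        #include <elf.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Placeholder: the real tool collects R_386_32 offsets and appends
         * them to the compressed kernel image for head_32.S to process. */
        static void record_reloc(const Elf32_Rel *rel) { (void)rel; }

        static void visit_reloc(const Elf32_Rel *rel)
        {
                switch (ELF32_R_TYPE(rel->r_info)) {
                case R_386_NONE:  /* nullified by binutils for discarded sections: skip */
                case R_386_PC32:  /* PC-relative, unaffected by relocating the kernel */
                        break;
                case R_386_32:    /* absolute address, must be fixed up at boot */
                        record_reloc(rel);
                        break;
                default:
                        fprintf(stderr, "unexpected relocation type %u\n",
                                (unsigned int)ELF32_R_TYPE(rel->r_info));
                        exit(1);
                }
        }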
    • powerpc/mm: Fix broken MMU PID stealing on !SMP · 8e35961b
      Authored by Hideo Saito
      The recent rework of the MMU PID handling for non-hash CPUs has a
      subtle bug in the !SMP "optimized" variant of the PID stealing
      function.  It clears the PID in the mm context before it calls
      local_flush_tlb_mm(). However, the latter will not flush anything
      if the PID in the context is clear...
      Signed-off-by: Hideo Saito <hsaito.ppc@gmail.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8e35961b
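      A schematic sketch of the ordering fix described above; the structure
      and helpers are illustrative stand-ins, not the real powerpc
      mmu_context code.

        #define MMU_NO_CONTEXT 0U

        struct mm_sketch {
                struct { unsigned int id; } context;   /* PID assigned to this mm */
        };

        /* Flushes all TLB entries tagged with the mm's PID; a cleared PID
         * means there is nothing to flush, which is why ordering matters. */
        static void local_flush_tlb_mm_sketch(struct mm_sketch *mm)
        {
                if (mm->context.id == MMU_NO_CONTEXT)
                        return;
                /* ... invalidate TLB entries tagged with mm->context.id ... */
        }

        static void steal_context_up_sketch(struct mm_sketch *mm)
        {
                /* Fixed order: flush while the PID is still recorded ... */
                local_flush_tlb_mm_sketch(mm);

                /* ... and only then mark the context as stolen. The buggy
                 * version cleared the id first, so the flush did nothing. */
                mm->context.id = MMU_NO_CONTEXT;
        }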
    • KVM: Fix PDPTR reloading on CR4 writes · a2edf57f
      Authored by Avi Kivity
      The processor is documented to reload the PDPTRs while in PAE mode if any
      of the CR4 bits PSE, PGE, or PAE change.  Linux relies on this
      behaviour when zapping the low mappings of PAE kernels during boot.
      
      The code already handled changes to CR4.PAE; augment it to also notice changes
      to PSE and PGE.
      
      This triggered while booting an F11 PAE kernel; the futex initialization code
      runs before any CR3 reloads and writes to a NULL pointer; the futex subsystem
      ended up uninitialized, killing PI futexes and pulseaudio which uses them.
      
      Cc: stable@kernel.org
      Signed-off-by: Avi Kivity <avi@redhat.com>
      a2edf57f
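      A sketch of the widened check described above; the CR4 bit positions
      follow the Intel SDM but are defined locally here, and the helper name
      is illustrative rather than the actual KVM function.

        #include <stdbool.h>
        #include <stdint.h>

        /* CR4 bit positions (Intel SDM); KVM defines its own X86_CR4_* macros. */
        #define CR4_PSE (1ULL << 4)
        #define CR4_PAE (1ULL << 5)
        #define CR4_PGE (1ULL << 7)

        /* A CR4 write in PAE mode must refresh the cached PDPTRs whenever
         * PSE, PAE or PGE changes, not only when PAE changes. */
        #define CR4_PDPTR_RELOAD_BITS (CR4_PSE | CR4_PAE | CR4_PGE)

        static bool cr4_write_needs_pdptr_reload(uint64_t old_cr4,
                                                 uint64_t new_cr4,
                                                 bool guest_in_pae_mode)
        {
                return guest_in_pae_mode &&
                       ((old_cr4 ^ new_cr4) & CR4_PDPTR_RELOAD_BITS);
        }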
    • KVM: Make paravirt tlb flush also reload the PAE PDPTRs · a8cd0244
      Authored by Avi Kivity
      The paravirt tlb flush may be used not only to flush TLBs, but also
      to reload the four page-directory-pointer-table entries, as it is used
      as a replacement for reloading CR3.  Change the code to do the entire
      CR3 reloading dance instead of simply flushing the TLB.
      
      Cc: stable@kernel.org
      Signed-off-by: Avi Kivity <avi@redhat.com>
      a8cd0244
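      A schematic sketch of the change described above; the types and helpers
      stand in for the real KVM ones, where the hypercall handler re-runs the
      full CR3 load path instead of issuing a plain TLB flush.

        /* Illustrative stand-ins for the real KVM vcpu state and helpers. */
        struct vcpu_sketch {
                unsigned long cr3;      /* guest's current page-table root */
        };

        /* The full CR3 load path: validates the root, reloads the four PAE
         * PDPTRs when applicable, and flushes the TLB. */
        static void load_cr3_sketch(struct vcpu_sketch *vcpu, unsigned long cr3)
        {
                /* ... check cr3, refresh cached PDPTRs in PAE mode, flush ... */
                vcpu->cr3 = cr3;
        }

        /* Paravirt "flush TLB" hypercall: re-load the current CR3 instead of
         * merely flushing, so stale PDPTRs are refreshed as well. */
        static void pv_mmu_flush_tlb_sketch(struct vcpu_sketch *vcpu)
        {
                load_cr3_sketch(vcpu, vcpu->cr3);
        }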
  8. 25 May 2009 (1 commit)
  9. 23 May 2009 (6 commits)
  10. 22 May 2009 (6 commits)