1. 14 Aug 2009, 2 commits
  2. 28 Jul 2009, 1 commit
    • mm: Pass virtual address to [__]p{te,ud,md}_free_tlb() · 9e1b32ca
      Committed by Benjamin Herrenschmidt
      
      Upcoming patches to support the new 64-bit "BookE" powerpc architecture
      will need the virtual address corresponding to the PTE page when
      freeing it, due to the way the HW table walker works.
      
      Basically, the TLB can be loaded with "large" pages that cover the whole
      virtual space (well, sort of; half of it, actually) represented by a PTE
      page, and which contain an "indirect" bit indicating that this TLB entry's
      RPN points to an array of PTEs from which the TLB can then create direct
      entries. Thus, in order to invalidate those when PTE pages are deleted,
      we need the virtual address to pass to the tlbilx or tlbivax instructions.
      
      The old trick of sticking it somewhere in the PTE page's struct page sucks
      too much; the address is almost readily available at all call sites, and
      almost everybody implements these as macros, so we may as well add the
      argument everywhere. I added it to the pmd and pud variants for consistency.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
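      To make the change concrete, here is a minimal sketch of the shape of
      the interface after this commit, in the style of the generic wrapper in
      include/asm-generic/tlb.h; it paraphrases the description above rather
      than quoting the kernel diff. The third argument is the new part:

          /*
           * Sketch: callers now pass the virtual address covered by the
           * page-table page being freed, so a BookE-style architecture can
           * invalidate the matching "indirect" TLB entry for that range.
           */
          #define pte_free_tlb(tlb, ptep, address)                \
                  do {                                            \
                          (tlb)->need_flush = 1;                  \
                          __pte_free_tlb(tlb, ptep, address);     \
                  } while (0)

      The pmd and pud variants gain the same extra argument, which is why the
      change fans out across every architecture's page-table headers.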
  3. 11 Jul 2009, 1 commit
  4. 05 Jul 2009, 2 commits
  5. 25 Jun 2009, 1 commit
  6. 21 Jun 2009, 1 commit
  7. 18 Jun 2009, 1 commit
  8. 14 Jun 2009, 1 commit
  9. 13 Jun 2009, 1 commit
  10. 12 Jun 2009, 4 commits
  11. 11 Jun 2009, 1 commit
  12. 09 Jun 2009, 1 commit
  13. 03 Jun 2009, 1 commit
  14. 30 May 2009, 5 commits
  15. 29 May 2009, 3 commits
    • flat: fix data sections alignment · c3dc5bec
      Committed by Oskar Schirmer
      The flat loader uses an architecture's flat_stack_align() to align the
      stack but assumes word-alignment is enough for the data sections.
      
      However, on the Xtensa S6000 we have registers up to 128 bits wide
      which can be used from userspace and therefore need userspace stack and
      data-section alignment of at least this size.
      
      This patch drops flat_stack_align() and uses the same alignment that
      is required for slab caches, ARCH_SLAB_MINALIGN, or the word size if the
      architecture does not define it.
      
      It also fixes m32r, which was obviously kaput: it aligned an
      uninitialized stack entry instead of the stack pointer.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Oskar Schirmer <os@emlix.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Bryan Wu <cooloney@kernel.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Greg Ungerer <gerg@uclinux.org>
      Signed-off-by: Johannes Weiner <jw@emlix.com>
      Acked-by: Mike Frysinger <vapier.adi@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
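      The core of the fix reduces to a single alignment macro in
      fs/binfmt_flat.c; this is a close paraphrase of what the patch adds
      (treat it as a sketch rather than the exact diff):

          /*
           * User data (stack, data section and bss) needs to be aligned
           * for the same reasons as SLAB memory is, and to the same amount;
           * reuse the arch's slab minimum alignment when it defines one.
           */
          #ifdef ARCH_SLAB_MINALIGN
          #define FLAT_DATA_ALIGN        (ARCH_SLAB_MINALIGN)
          #else
          #define FLAT_DATA_ALIGN        (sizeof(void *))
          #endif

      Stack and data-section addresses in the loader are then rounded with
      this macro instead of the per-arch flat_stack_align() hook.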
    • [ARM] Add cmpxchg support for ARMv6+ systems (v5) · ecd322c9
      Committed by Mathieu Desnoyers
      Add cmpxchg/cmpxchg64 support for ARMv6K and ARMv7 systems
      (original patch from Catalin Marinas <catalin.marinas@arm.com>)
      
      The cmpxchg and cmpxchg64 functions can be implemented using the
      LDREX*/STREX* instructions. Since operand lengths other than 32 bits are
      required, the full implementations are only available if the ARMv6K
      extensions are present (for the LDREXB, LDREXH and LDREXD instructions).
      
      For ARMv6, only 32-bit cmpxchg is available.
      
      Mathieu:
      
      Make cmpxchg_local always available, with the best implementation for
      all type sizes (1, 2, 4 bytes).
      Make cmpxchg64_local always available.
      
      Use the "Ir" constraint for the "old" operand, as atomic_cmpxchg in
      atomic.h does.
      
      Changes since v3:
      - Add "memory" clobbers (thanks to Nicolas Pitre)
      - Remove __asmeq(); it is only needed for old compilers, which are very
        unlikely on ARMv6+.
      
      Note: ARMv7-M should eventually be #ifdef'ed out of cmpxchg64, but it is
      not currently supported by the Linux kernel.
      
      Put back ARM < v6 cmpxchg support.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      CC: Catalin Marinas <catalin.marinas@arm.com>
      CC: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
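      The 32-bit case boils down to a load-exclusive/store-exclusive retry
      loop. A hedged sketch modelled on the description above (not the
      verbatim kernel source; the function name here is illustrative):

          static inline unsigned long my_cmpxchg32(volatile unsigned long *ptr,
                                                   unsigned long old,
                                                   unsigned long new)
          {
                  unsigned long oldval, res;

                  do {
                          asm volatile(
                          "ldrex   %1, [%2]\n"      /* exclusive load of *ptr */
                          "mov     %0, #0\n"
                          "teq     %1, %3\n"        /* compare against 'old' */
                          "strexeq %0, %4, [%2]\n"  /* store 'new' only if equal */
                          : "=&r" (res), "=&r" (oldval)
                          : "r" (ptr), "Ir" (old), "r" (new)
                          : "memory", "cc");
                  } while (res);  /* retry if the exclusive store failed */

                  return oldval;
          }

      The byte and halfword variants follow the same pattern with LDREXB/STREXB
      and LDREXH/STREXH, which is why they need the ARMv6K extensions.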
    • [ARM] barriers: improve xchg, bitops and atomic SMP barriers · bac4e960
      Committed by Russell King
      Mathieu Desnoyers pointed out that the ARM barriers were lacking:
      
      - cmpxchg, xchg and atomic-add-return need memory barriers on
        architectures that can reorder the order in which memory reads/writes
        become visible between CPUs, which seems to include recent ARM
        architectures. Those barriers are currently missing on ARM.
      
      - test_and_xxx_bit were missing SMP barriers.
      
      So put these barriers in.  Provide separate atomic_add/atomic_sub
      operations which do not require barriers.
      Reported-Reviewed-and-Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
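      The resulting pattern: value-returning atomics are bracketed by SMP
      barriers, while the plain atomic_add()/atomic_sub() variants stay
      barrier-free. A minimal sketch, assuming a relaxed helper that does the
      bare LDREX/STREX update (the helper name is an assumption, not the
      kernel's):

          static inline int atomic_add_return(int i, atomic_t *v)
          {
                  int result;

                  smp_mb();   /* order earlier accesses before the update */
                  result = __atomic_add_return_relaxed(i, v); /* assumed helper */
                  smp_mb();   /* order the update before later accesses */

                  return result;
          }

      test_and_set_bit() and friends get the same treatment, since callers
      rely on them acting as full barriers on SMP.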
  16. 28 May 2009, 1 commit
  17. 19 May 2009, 1 commit
    • omap iommu: simple virtual address space management · 69d3a84a
      Committed by Hiroshi DOYU
      This patch provides device drivers that sit behind an omap iommu with
      address-mapping APIs between the device virtual address (iommu), the
      physical address and the MPU virtual address.
      
      There are 4 possible patterns for iommu virtual address(iova/da) mapping.
      
            | iova/da   pa     va    (d)-(p)-(v)   iommu_ function        page type
          -------------------------------------------------------------------------
          1 | c         c      c      1 - 1 - 1    _kmap()   / _kunmap()      s
          2 | c         c,a    c      1 - 1 - 1    _kmalloc()/ _kfree()       s
          3 | c         d      c      1 - n - 1    _vmap()   / _vunmap()      s
          4 | c         d,a    c      1 - n - 1    _vmalloc()/ _vfree()       n*
      
          'iova':	device iommu virtual address
          'da':	alias of 'iova'
          'pa':	physical address
          'va':	mpu virtual address
      
          'c':	contiguous memory area
          'd':	discontiguous memory area
          'a':	anonymous memory allocation
          '()':	optional feature
      
          'n':	a normal page (4KB) size is used.
          's':	multiple iommu superpage sizes (16MB, 1MB, 64KB, 4KB) are used.
      
          '*':	not yet, but feasible.
      Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
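      A hedged usage sketch of pattern 4 above; the function names follow the
      table's naming, but the exact signatures are assumptions made for
      illustration, not quotes from the patch:

          /* Assumed signatures, for illustration only: */
          u32  iommu_vmalloc(struct iommu *obj, u32 da, size_t bytes, u32 flags);
          void iommu_vfree(struct iommu *obj, u32 da);

          static int example_map(struct iommu *obj)
          {
                  /* pattern 4: contiguous da, discontiguous anonymous pa */
                  u32 da = iommu_vmalloc(obj, 0 /* let the allocator pick */,
                                         SZ_1M, 0 /* flags: assumption */);

                  if (IS_ERR_VALUE(da))
                          return (int)da;

                  /* ... let the device access the buffer through 'da' ... */

                  iommu_vfree(obj, da);
                  return 0;
          }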
  18. 18 May 2009, 5 commits
  19. 17 May 2009, 1 commit
  20. 08 May 2009, 1 commit
  21. 07 May 2009, 1 commit
    • [ARM] VIC: Add power management device · c07f87f2
      Committed by Ben Dooks
      Add power management support to the VIC by registering
      each VIC as a system device to get suspend/resume
      events going.
      
      Since the VIC registration is done early, we need to
      record the VICs in a static array which is used to add
      the system devices later, once the initcalls are run. This
      means there is now a configuration value for the number
      of VICs in the system.
      Signed-off-by: Ben Dooks <ben-linux@fluff.org>
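      In that era, suspend/resume hooks for system devices lived on a
      sysdev_class. A minimal sketch of the pattern the description implies
      (the handler names and bodies are assumptions, not the patch itself):

          static int vic_class_suspend(struct sys_device *dev, pm_message_t state)
          {
                  /* assumption: save the VIC's enable/source registers here */
                  return 0;
          }

          static int vic_class_resume(struct sys_device *dev)
          {
                  /* assumption: restore the saved register state here */
                  return 0;
          }

          static struct sysdev_class vic_class = {
                  .name    = "vic",
                  .suspend = vic_class_suspend,
                  .resume  = vic_class_resume,
          };

      Because sysdev registration cannot happen before the initcall phase,
      each early-probed VIC is parked in the static array and only turned
      into a sys_device from a later initcall.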
  22. 26 Apr 2009, 1 commit
  23. 20 Apr 2009, 1 commit
    • [ARM] 5456/1: add sys_preadv and sys_pwritev · eb8f3142
      Committed by Mikael Pettersson
      Kernel 2.6.30-rc1 added sys_preadv and sys_pwritev to most archs
      but not ARM, resulting in
      
      <stdin>:1421:2: warning: #warning syscall preadv not implemented
      <stdin>:1425:2: warning: #warning syscall pwritev not implemented
      
      This patch adds sys_preadv and sys_pwritev to ARM.
      
      These syscalls simply take five long-sized parameters, so they
      should have no calling-convention/ABI issues in the kernel.
      
      Tested on armv5tel eabi using a preadv/pwritev test program posted
      on linuxppc-dev earlier this month.
      
      It would be nice to get this into the kernel before 2.6.30 final,
      so that glibc's kernel version feature test for these syscalls
      doesn't have to special-case ARM.
      Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
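      Wiring up a syscall on ARM at the time meant two small edits: a syscall
      number in unistd.h and a matching entry in the call table. A sketch,
      assuming the 361/362 slots were the next free ones in this era's table
      (treat the exact numbers as illustrative):

          /* arch/arm/include/asm/unistd.h (sketch) */
          #define __NR_preadv        (__NR_SYSCALL_BASE + 361)
          #define __NR_pwritev       (__NR_SYSCALL_BASE + 362)

          /* arch/arm/kernel/calls.S (sketch) */
                  CALL(sys_preadv)
                  CALL(sys_pwritev)

      No ARM-specific wrappers are needed because both calls take five
      register-sized arguments, so the generic syscall entry code can pass
      them through unchanged.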
  24. 15 Apr 2009, 1 commit
    • [ARM] 5450/1: Flush only the needed range when unmapping a VMA · 7fccfc00
      Committed by Aaro Koskinen
      When unmapping N pages (e.g. shared memory), the number of TLB flushes
      done can be (N*PAGE_SIZE/ZAP_BLOCK_SIZE)*N, although it should be N at
      most. With a PREEMPT kernel, ZAP_BLOCK_SIZE is 8 pages, so there is a
      noticeable performance penalty when unmapping a large VMA and the system
      is spending its time in flush_tlb_range().
      
      The problem is that tlb_end_vma() is always flushing the full VMA
      range. The subrange that needs to be flushed can be calculated by
      tlb_remove_tlb_entry(). This approach was suggested by Hugh Dickins,
      and is also used by other arches.
      
      The speed increase is roughly 3x for 8M mappings, and even more for
      larger mappings.
      Signed-off-by: Aaro Koskinen <Aaro.Koskinen@nokia.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
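      The mechanism is simple range tracking inside the mmu_gather. A hedged
      sketch of the idea (field names approximate the ARM code of that era;
      treat them as illustrative):

          static inline void
          tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
                               unsigned long addr)
          {
                  /* widen the pending flush window to cover this entry */
                  if (addr < tlb->range_start)
                          tlb->range_start = addr;
                  if (addr + PAGE_SIZE > tlb->range_end)
                          tlb->range_end = addr + PAGE_SIZE;
          }

          static inline void
          tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
          {
                  /* flush only the accumulated subrange, not the whole VMA */
                  if (!tlb->fullmm && tlb->range_end > tlb->range_start)
                          flush_tlb_range(vma, tlb->range_start, tlb->range_end);
          }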
  25. 09 Apr 2009, 1 commit