1. 15 August 2009 (1 commit)
    • ARM: 5677/1: ARM support for TIF_RESTORE_SIGMASK/pselect6/ppoll/epoll_pwait · 36984265
      Committed by Mikael Pettersson
      This patch adds support for TIF_RESTORE_SIGMASK to ARM's
      signal handling, which allows the pselect6, ppoll, and
      epoll_pwait syscalls to be hooked up on ARM.
      
      Tested here with eabi userspace and a test program with a
      deliberate race between a child's exit and the parent's
      sigprocmask/select sequence. Using sys_pselect6() instead
      of sigprocmask/select reliably prevents the race.
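
      For illustration, a minimal userspace sketch of this kind of race
      (hypothetical code, not the actual test program used):

        #include <signal.h>
        #include <sys/select.h>

        static volatile sig_atomic_t child_exited;

        /* installed with sigaction() at startup (not shown) */
        static void on_sigchld(int sig) { child_exited = 1; }

        static int wait_readable(int fd)
        {
                sigset_t block_chld, orig;
                fd_set rfds;

                sigemptyset(&block_chld);
                sigaddset(&block_chld, SIGCHLD);
                sigprocmask(SIG_BLOCK, &block_chld, &orig);

                if (child_exited)
                        return 0;               /* child already exited */

                FD_ZERO(&rfds);
                FD_SET(fd, &rfds);

                /* Racy version: restore 'orig' with sigprocmask() and then
                 * call select(); a SIGCHLD delivered between those two calls
                 * is consumed before select() starts waiting, so the wakeup
                 * is lost. pselect() swaps the mask and waits atomically
                 * (the kernel restores 'orig' via TIF_RESTORE_SIGMASK). */
                return pselect(fd + 1, &rfds, NULL, NULL, NULL, &orig);
        }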
      
      Other architectures' support for TIF_RESTORE_SIGMASK has evolved
      over time:
      
      In 2.6.16:
      - add TIF_RESTORE_SIGMASK which parallels TIF_SIGPENDING
      - test both when checking for pending signal [changed later]
      - reimplement sys_sigsuspend() to use current->saved_sigmask,
        TIF_RESTORE_SIGMASK [changed later], and -ERESTARTNOHAND;
        ditto for sys_rt_sigsuspend(), but drop private code and
        use common code via __ARCH_WANT_SYS_RT_SIGSUSPEND;
      - there are now no "extra" calls to do_signal(), so its oldset
        parameter is always &current->blocked and need not be passed;
        its return value is also changed to void
      - change handle_signal() to return 0/-errno
      - change do_signal() to honor TIF_RESTORE_SIGMASK (see the sketch
        after this list):
        + get oldset from current->saved_sigmask if TIF_RESTORE_SIGMASK
          is set
        + if handle_signal() was successful then clear TIF_RESTORE_SIGMASK
        + if no signal was delivered and TIF_RESTORE_SIGMASK is set then
          clear it and restore the sigmask
      - hook up sys_pselect6() and sys_ppoll()
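
      A hedged C sketch of the do_signal() flow described above (helper
      names as in kernels of that era; per-arch details differ):

        static void do_signal(struct pt_regs *regs)
        {
                struct k_sigaction ka;
                siginfo_t info;
                sigset_t *oldset;
                int signr;

                if (test_thread_flag(TIF_RESTORE_SIGMASK))
                        oldset = &current->saved_sigmask;
                else
                        oldset = &current->blocked;

                signr = get_signal_to_deliver(&info, &ka, regs, NULL);
                if (signr > 0) {
                        /* The saved mask is now recorded in the signal
                         * frame, so the flag can be dropped on success. */
                        if (handle_signal(signr, &ka, &info, oldset, regs) == 0)
                                clear_thread_flag(TIF_RESTORE_SIGMASK);
                        return;
                }

                /* No signal delivered: restore the saved mask ourselves. */
                if (test_thread_flag(TIF_RESTORE_SIGMASK)) {
                        clear_thread_flag(TIF_RESTORE_SIGMASK);
                        sigprocmask(SIG_SETMASK, &current->saved_sigmask, NULL);
                }
        }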
      
      In 2.6.19:
      - hook up sys_epoll_pwait()
      
      In 2.6.26:
      - allow archs to override how TIF_RESTORE_SIGMASK is implemented;
        default set_restore_sigmask() sets both TIF_RESTORE_SIGMASK and
        TIF_SIGPENDING; archs once again need to test only TIF_SIGPENDING
        when checking for pending signal work; some archs now implement
        TIF_RESTORE_SIGMASK as a secondary/non-atomic thread flag bit
      - call set_restore_sigmask() in sys_sigsuspend() instead of setting
        TIF_RESTORE_SIGMASK
      
      In 2.6.29-rc:
      - kill sys_pselect7() which no arch wanted
      
      So for 2.6.31-rc6/ARM this patch does the following:
      - Add TIF_RESTORE_SIGMASK. Use the generic set_restore_sigmask()
        which sets both TIF_SIGPENDING and TIF_RESTORE_SIGMASK, so
        TIF_RESTORE_SIGMASK need not claim one of the scarce low thread
        flags, and existing TIF_SIGPENDING and _TIF_WORK_MASK tests need
        not be extended for TIF_RESTORE_SIGMASK.
      - sys_sigsuspend() is reimplemented to use current->saved_sigmask
        and set_restore_sigmask() (sketched below), making it identical
        to most other archs' versions.
      - The private code for sys_rt_sigsuspend() is removed, instead
        generic code supplies it via __ARCH_WANT_SYS_RT_SIGSUSPEND.
      - sys_sigsuspend() and sys_rt_sigsuspend() no longer need a pt_regs
        parameter, so their assembly code wrappers are removed.
      - handle_signal() is changed to return 0 on success or -errno.
      - The oldset parameter to do_signal() is now redundant and removed,
        and the return value is now also redundant and changed to void.
      - do_signal() is changed to honor TIF_RESTORE_SIGMASK:
        + get oldset from current->saved_sigmask if TIF_RESTORE_SIGMASK
          is set
        + if handle_signal() was successful then clear TIF_RESTORE_SIGMASK
        + if no signal was delivered and TIF_RESTORE_SIGMASK is set then
          clear it and restore the sigmask
      - Hook up sys_pselect6, sys_ppoll, and sys_epoll_pwait.
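
      A sketch of the reworked sys_sigsuspend(), following the generic
      pattern most archs use (the merged ARM code may differ in detail):

        asmlinkage int sys_sigsuspend(int restart, unsigned long oldmask,
                                      old_sigset_t mask)
        {
                mask &= _BLOCKABLE;
                spin_lock_irq(&current->sighand->siglock);
                current->saved_sigmask = current->blocked;
                siginitset(&current->blocked, mask);
                recalc_sigpending();
                spin_unlock_irq(&current->sighand->siglock);

                current->state = TASK_INTERRUPTIBLE;
                schedule();
                /* Sets TIF_SIGPENDING and TIF_RESTORE_SIGMASK so that the
                 * signal path restores saved_sigmask on the way out. */
                set_restore_sigmask();
                return -ERESTARTNOHAND;
        }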
      Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  2. 22 July 2009 (1 commit)
    • [ARM] 5613/1: implement CALLER_ADDRESSx · 4bf1fa5a
      Committed by Uwe Kleine-König
      From: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      
      As __builtin_return_address(n) doesn't work for ARM with n > 0, the
      kernel needs its own implementation.
      
      This fixes many warnings saying:
      
      	warning: unsupported argument to '__builtin_return_address'
      
      The new methods and walk_stackframe must not be instrumented because
      CALLER_ADDRESSx is used in the various tracers and tracing the tracer is
      a bad idea.
      
      What's currently missing is an implementation using unwind tables.
      This is not fatal, though; it just means the tracers don't get
      enough information to be really useful.
      
      Note that if both ARM_UNWIND and FRAME_POINTER are enabled,
      walk_stackframe uses unwind information.  So in this case the same
      implementation is used as when FRAME_POINTER is disabled.
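
      A hedged sketch of a frame-pointer based return_address(n) along
      these lines (struct stackframe and walk_stackframe() are the ARM
      stack-walking helpers; notrace keeps the tracers from tracing the
      walker itself):

        struct return_address_data {
                unsigned int level;
                void *addr;
        };

        static int notrace save_return_addr(struct stackframe *frame, void *d)
        {
                struct return_address_data *data = d;

                if (!data->level) {
                        data->addr = (void *)frame->pc;
                        return 1;       /* non-zero stops walk_stackframe() */
                }
                --data->level;
                return 0;
        }

        void *notrace return_address(unsigned int level)
        {
                struct return_address_data data;
                struct stackframe frame;
                register unsigned long current_sp asm("sp");

                data.level = level + 1; /* +1 skips return_address() itself */
                data.addr = NULL;

                frame.fp = (unsigned long)__builtin_frame_address(0);
                frame.sp = current_sp;
                frame.lr = (unsigned long)__builtin_return_address(0);
                frame.pc = (unsigned long)return_address;

                walk_stackframe(&frame, save_return_addr, &data);

                return data.level == 0 ? data.addr : NULL;
        }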
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  3. 11 July 2009 (3 commits)
  4. 05 July 2009 (2 commits)
  5. 25 June 2009 (1 commit)
  6. 21 June 2009 (1 commit)
  7. 18 June 2009 (1 commit)
  8. 14 June 2009 (1 commit)
  9. 13 June 2009 (1 commit)
  10. 12 June 2009 (4 commits)
  11. 11 June 2009 (1 commit)
  12. 09 June 2009 (1 commit)
  13. 03 June 2009 (1 commit)
  14. 30 May 2009 (5 commits)
  15. 29 May 2009 (3 commits)
    • flat: fix data sections alignment · c3dc5bec
      Committed by Oskar Schirmer
      The flat loader uses an architecture's flat_stack_align() to align the
      stack but assumes word-alignment is enough for the data sections.
      
      However, on the Xtensa S6000 we have registers up to 128 bits wide
      which can be used from userspace, and we therefore need userspace
      stack and data-section alignment of at least this size.
      
      This patch drops flat_stack_align() and uses the same alignment that
      is required for slab caches, ARCH_SLAB_MINALIGN, or wordsize if it's
      not defined by the architecture.
      
      It also fixes m32r which was obviously kaput, aligning an
      uninitialized stack entry instead of the stack pointer.
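
      A hedged sketch of the alignment choice (FLAT_DATA_ALIGN is the
      name used here for illustration; the fallback is plain word
      alignment):

        #ifdef ARCH_SLAB_MINALIGN
        #define FLAT_DATA_ALIGN (ARCH_SLAB_MINALIGN)
        #else
        #define FLAT_DATA_ALIGN (sizeof(void *))
        #endif

        /* Round a downward-growing stack pointer down to the required
         * alignment; FLAT_DATA_ALIGN is assumed to be a power of two. */
        static inline unsigned long flat_align_sp(unsigned long sp)
        {
                return sp & ~(FLAT_DATA_ALIGN - 1);
        }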
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Oskar Schirmer <os@emlix.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Bryan Wu <cooloney@kernel.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Greg Ungerer <gerg@uclinux.org>
      Signed-off-by: Johannes Weiner <jw@emlix.com>
      Acked-by: Mike Frysinger <vapier.adi@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • [ARM] Add cmpxchg support for ARMv6+ systems (v5) · ecd322c9
      Committed by Mathieu Desnoyers
      Add cmpxchg/cmpxchg64 support for ARMv6K and ARMv7 systems
      (original patch from Catalin Marinas <catalin.marinas@arm.com>)
      
      The cmpxchg and cmpxchg64 functions can be implemented using the
      LDREX*/STREX* instructions. Since operand lengths other than 32 bits
      are required, the full implementations are only available if the
      ARMv6K extensions are present (for the LDREXB, LDREXH and LDREXD
      instructions).
      
      For ARMv6, only 32-bit cmpxchg is available.
      
      Mathieu:
      
      Make cmpxchg_local always available with the best implementation
      for all type sizes (1, 2, 4 bytes).
      Make cmpxchg64_local always available.
      
      Use the "Ir" constraint for the "old" operand, as atomic.h's
      atomic_cmpxchg does.
      
      Changes since v3:
      - Add "memory" clobbers (thanks to Nicolas Pitre)
      - Remove __asmeq(); it is only needed for old compilers, which are
        very unlikely on ARMv6+.
      
      Note: ARMv7-M should eventually be #ifdef'ed out of cmpxchg64, but
      it is not currently supported by the Linux kernel.
      
      Put back arm < v6 cmpxchg support.
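
      A hedged sketch of the 32-bit LDREX/STREX case (the full version
      also handles 1-, 2- and 8-byte operands on ARMv6K+):

        static inline unsigned long __cmpxchg32(volatile void *ptr,
                                                unsigned long old,
                                                unsigned long new)
        {
                unsigned long oldval, res;

                do {
                        asm volatile(
                        "ldrex   %1, [%2]\n"
                        "mov     %0, #0\n"
                        "teq     %1, %3\n"
                        "strexeq %0, %4, [%2]\n"  /* store only on match */
                            : "=&r" (res), "=&r" (oldval)
                            : "r" (ptr), "Ir" (old), "r" (new)
                            : "memory", "cc");
                } while (res);  /* res != 0: exclusive store failed, retry */

                return oldval;
        }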
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Nicolas Pitre <nico@cam.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • [ARM] barriers: improve xchg, bitops and atomic SMP barriers · bac4e960
      Committed by Russell King
      Mathieu Desnoyers pointed out that the ARM barriers were lacking:
      
      - cmpxchg, xchg and atomic add return need memory barriers on
        architectures which can reorder the order in which memory
        reads/writes are seen between CPUs, which seems to include recent
        ARM architectures. Those barriers are currently missing on ARM.
      
      - test_and_xxx_bit were missing SMP barriers.
      
      So put these barriers in.  Provide separate atomic_add/atomic_sub
      operations which do not require barriers.
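
      A hedged sketch of the barrier placement for a value-returning
      atomic on ARMv6+ SMP (the barrier-free atomic_add()/atomic_sub()
      keep the same loop without the smp_mb() calls):

        static inline int atomic_add_return(int i, atomic_t *v)
        {
                unsigned long tmp;
                int result;

                smp_mb();       /* order prior accesses before the update */

                __asm__ __volatile__("@ atomic_add_return\n"
                "1:     ldrex   %0, [%2]\n"
                "       add     %0, %0, %3\n"
                "       strex   %1, %0, [%2]\n"
                "       teq     %1, #0\n"
                "       bne     1b"
                        : "=&r" (result), "=&r" (tmp)
                        : "r" (&v->counter), "Ir" (i)
                        : "cc");

                smp_mb();       /* order the update before later accesses */

                return result;
        }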
      Reported-Reviewed-and-Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  16. 28 May 2009 (1 commit)
  17. 19 May 2009 (1 commit)
    • omap iommu: simple virtual address space management · 69d3a84a
      Committed by Hiroshi DOYU
      This patch provides device drivers that have an OMAP IOMMU with
      address mapping APIs between the device virtual address (iommu),
      the physical address, and the MPU virtual address.
      
      There are 4 possible patterns for iommu virtual address (iova/da)
      mapping:
      
          | iova/                 mapping          iommu_                 page
          | da     pa    va     (d)-(p)-(v)        function               type
        ----------------------------------------------------------------------
        1 | c      c     c       1 - 1 - 1      _kmap()   / _kunmap()      s
        2 | c      c,a   c       1 - 1 - 1      _kmalloc()/ _kfree()       s
        3 | c      d     c       1 - n - 1      _vmap()   / _vunmap()      s
        4 | c      d,a   c       1 - n - 1      _vmalloc()/ _vfree()       n*
      
          'iova':	device iommu virtual address
          'da':	alias of 'iova'
          'pa':	physical address
          'va':	mpu virtual address
      
          'c':	contiguous memory area
          'd':	discontiguous memory area
          'a':	anonymous memory allocation
          '()':	optional feature
      
          'n':	a normal page (4KB) size is used.
          's':	multiple iommu superpage sizes (16MB, 1MB, 64KB, 4KB) are used.
      
          '*':	not yet, but feasible.
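
      A hedged usage sketch for pattern 4 above (function and flag names
      follow this patch's iovmm API, but exact prototypes may differ in
      the merged code; "isp" is a hypothetical IOMMU owner name):

        static int example_map(void)
        {
                struct iommu *obj = iommu_get("isp");
                u32 da;

                if (IS_ERR(obj))
                        return PTR_ERR(obj);

                /* allocate anonymous (possibly discontiguous) pages and
                 * map them at an iommu-chosen device virtual address */
                da = iommu_vmalloc(obj, 0, SZ_1M, IOVMF_DA_ANON);
                if (IS_ERR_VALUE(da)) {
                        iommu_put(obj);
                        return (int)da;
                }

                /* ... hand 'da' to the device ... */

                iommu_vfree(obj, da);
                iommu_put(obj);
                return 0;
        }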
      Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
  18. 18 May 2009 (5 commits)
  19. 17 May 2009 (1 commit)
  20. 08 May 2009 (1 commit)
  21. 07 May 2009 (1 commit)
    • [ARM] VIC: Add power management device · c07f87f2
      Committed by Ben Dooks
      Add power management support to the VIC by registering
      each VIC as a system device to get suspend/resume
      events going.
      
      Since the VIC registration is done early, we need to record the
      VICs in a static array which is used to add the system devices
      later, once the initcalls are run. This means there is now a
      configuration value for the number of VICs in the system.
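
      A hedged sketch of the scheme (vic_pm_init and CONFIG_ARM_VIC_NR
      are illustrative names; the sysdev API shown is the one current in
      2.6.30-era kernels, and vic_class with its suspend/resume hooks is
      assumed to be defined elsewhere):

        struct vic_device {
                struct sys_device sysdev;
                void __iomem *base;
        };

        /* filled in by vic_init(), which runs too early for sysdev */
        static struct vic_device vic_devices[CONFIG_ARM_VIC_NR];
        static int vic_id;

        static int __init vic_pm_init(void)
        {
                int i, ret;

                ret = sysdev_class_register(&vic_class);
                if (ret)
                        return ret;

                for (i = 0; i < vic_id; i++) {
                        vic_devices[i].sysdev.id = i;
                        vic_devices[i].sysdev.cls = &vic_class;
                        sysdev_register(&vic_devices[i].sysdev);
                }
                return 0;
        }
        core_initcall(vic_pm_init);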
      Signed-off-by: Ben Dooks <ben-linux@fluff.org>
  22. 26 April 2009 (1 commit)
  23. 20 April 2009 (1 commit)
    • [ARM] 5456/1: add sys_preadv and sys_pwritev · eb8f3142
      Committed by Mikael Pettersson
      Kernel 2.6.30-rc1 added sys_preadv and sys_pwritev to most archs
      but not ARM, resulting in
      
      <stdin>:1421:2: warning: #warning syscall preadv not implemented
      <stdin>:1425:2: warning: #warning syscall pwritev not implemented
      
      This patch adds sys_preadv and sys_pwritev to ARM.
      
      These syscalls simply take five long-sized parameters, so they
      should have no calling-convention/ABI issues in the kernel.
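
      For reference, a hedged sketch of what the hookup amounts to on
      ARM (a syscall number plus a call-table entry; the numbers below
      are the ones 2.6.30 assigned):

        /* arch/arm/include/asm/unistd.h */
        #define __NR_preadv     (__NR_SYSCALL_BASE+361)
        #define __NR_pwritev    (__NR_SYSCALL_BASE+362)

        /* arch/arm/kernel/calls.S */
        /* 360 */       CALL(sys_inotify_init1)
                        CALL(sys_preadv)
                        CALL(sys_pwritev)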
      
      Tested on armv5tel eabi using a preadv/pwritev test program posted
      on linuxppc-dev earlier this month.
      
      It would be nice to get this into the kernel before 2.6.30 final,
      so that glibc's kernel version feature test for these syscalls
      doesn't have to special-case ARM.
      Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  24. 15 April 2009 (1 commit)
    • [ARM] 5450/1: Flush only the needed range when unmapping a VMA · 7fccfc00
      Committed by Aaro Koskinen
      When unmapping N pages (e.g. shared memory), the number of TLB
      flushes done can be (N*PAGE_SIZE/ZAP_BLOCK_SIZE)*N although it
      should be at most N. With a PREEMPT kernel, ZAP_BLOCK_SIZE is 8
      pages, so there is a noticeable performance penalty when unmapping
      a large VMA, with the system spending its time in flush_tlb_range().
      
      The problem is that tlb_end_vma() always flushes the full VMA
      range. The subrange that needs to be flushed can be calculated in
      tlb_remove_tlb_entry(). This approach was suggested by Hugh
      Dickins, and is also used by other arches.
      
      The speed increase is roughly 3x for 8M mappings, and even more
      for larger mappings.
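
      A hedged sketch of the approach (field names as in the ARM
      mmu_gather of that era; tlb_start_vma() is assumed to reset
      range_start/range_end first):

        static inline void
        tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
                             unsigned long addr)
        {
                /* track the lowest and highest address actually unmapped */
                if (!tlb->fullmm) {
                        if (addr < tlb->range_start)
                                tlb->range_start = addr;
                        if (addr + PAGE_SIZE > tlb->range_end)
                                tlb->range_end = addr + PAGE_SIZE;
                }
        }

        static inline void
        tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
        {
                /* flush only the tracked subrange, not the whole VMA */
                if (!tlb->fullmm && tlb->range_end > 0)
                        flush_tlb_range(vma, tlb->range_start, tlb->range_end);
        }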
      Signed-off-by: Aaro Koskinen <Aaro.Koskinen@nokia.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>