1. 16 March 2009, 3 commits
    • [ARM] introduce dma_cache_maint_page() · 43377453
      Committed by Nicolas Pitre
      This is a helper to be used by the DMA mapping API to handle cache
      maintenance for memory identified by a page structure instead of a
      virtual address.  Those pages may or may not be highmem pages, and
      when they're highmem pages, they may or may not be virtually mapped.
      When they're not mapped, there is no L1 cache to worry about.  But
      even in that case the L2 cache must be processed, since unmapped highmem
      pages can still be L2 cached.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
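      A minimal sketch of the case split described above, assuming the ARM
      primitives of that era (kmap_atomic() with a KM slot,
      __cpuc_flush_dcache_page(), outer_flush_range()); the real helper walks
      an offset/size range and differs in detail:

          #include <linux/highmem.h>
          #include <linux/mm.h>
          #include <asm/cacheflush.h>

          /* Sketch: L1 maintenance needs a virtual mapping; the physically
           * indexed outer (L2) cache is cleaned by physical address whether
           * or not the highmem page currently has a mapping. */
          static void dma_cache_maint_page_sketch(struct page *page)
          {
                  unsigned long paddr = page_to_phys(page);

                  if (PageHighMem(page)) {
                          /* Borrow a temporary mapping for L1 upkeep. */
                          void *vaddr = kmap_atomic(page, KM_USER0);
                          __cpuc_flush_dcache_page(vaddr);
                          kunmap_atomic(vaddr, KM_USER0);
                  } else {
                          __cpuc_flush_dcache_page(page_address(page));
                  }

                  /* Unmapped highmem pages can still be L2 cached. */
                  outer_flush_range(paddr, paddr + PAGE_SIZE);
          }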
    • [ARM] kmap support · d73cd428
      Committed by Nicolas Pitre
      The kmap virtual area borrows a 2MB range at the top of the 16MB area
      below PAGE_OFFSET currently reserved for kernel modules and/or the
      XIP kernel.  This 2MB corresponds to the range covered by 2 consecutive
      second-level page tables, or a single pmd entry as seen by the Linux
      page table abstraction.  Because XIP kernels are unlikely to be seen
      on systems needing highmem support, there shouldn't be any shortage of
      VM space for modules (14 MB for modules is still way more than twice the
      typical usage).
      
      Because the virtual mapping of highmem pages can go away at any moment
      after kunmap() is called on them, we need to bypass the delayed cache
      flushing provided by flush_dcache_page() in that case.
      
      The atomic kmap versions are based on fixmaps, and
      __cpuc_flush_dcache_page() is used directly in that case.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
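      For context, the consumer-side pattern this enables on ARM; a generic
      sketch of kmap usage rather than code from the patch itself:

          #include <linux/highmem.h>
          #include <linux/string.h>

          static void zero_highmem_page_sketch(struct page *page)
          {
                  /* May sleep; the mapping lives in the 2MB kmap area. */
                  void *addr = kmap(page);

                  memset(addr, 0, PAGE_SIZE);
                  /* Flush now: the virtual mapping may go away at any
                   * moment after kunmap(), so the usual delayed flush
                   * cannot be relied upon. */
                  flush_dcache_page(page);
                  kunmap(page);
          }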
    • [ARM] fixmap support · 5f0fbf9e
      Committed by Nicolas Pitre
      This is the minimum fixmap interface expected to be implemented by
      architectures supporting highmem.
      
      We have a second level page table already allocated and covering
      0xfff00000-0xffffffff because the exception vector page is located
      at 0xffff0000, and various cache tricks already use some entries above
      0xffff0000.  Therefore the PTEs covering 0xfff00000-0xfffeffff are free
      to be used.
      
      However the XScale cache flushing code already uses virtual addresses
      between 0xfffe0000 and 0xfffeffff.
      
      So this reserves the 0xfff00000-0xfffdffff range for fixmap stuff.
      
      The Documentation/arm/memory.txt information is updated accordingly,
      including the information about the actual top of the DMA memory mapping
      region, which didn't match the code.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
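      The interface boils down to index-to-address arithmetic over the
      reserved window; a sketch built from the constants quoted above (macro
      names are illustrative and may not match the patch exactly):

          #include <asm/page.h>

          #define FIXADDR_START   0xfff00000UL
          #define FIXADDR_TOP     0xfffe0000UL  /* XScale owns 0xfffe0000+ */
          #define FIXADDR_SIZE    (FIXADDR_TOP - FIXADDR_START)

          /* Slot N sits N pages above the base of the window. */
          #define __fix_to_virt(x)  (FIXADDR_START + ((x) << PAGE_SHIFT))
          #define __virt_to_fix(x)  (((x) - FIXADDR_START) >> PAGE_SHIFT)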
  2. 15 January 2009, 1 commit
  3. 08 January 2009, 2 commits
    • NOMMU: Make VMAs per MM as for MMU-mode linux · 8feae131
      Committed by David Howells
      Make VMAs per mm_struct as for MMU-mode linux.  This solves two problems:
      
       (1) In SYSV SHM where nattch for a segment does not reflect the number of
           shmat's (and forks) done.
      
       (2) In mmap() where the VMA's vm_mm is set to point to the parent mm by an
           exec'ing process when VM_EXECUTABLE is specified, regardless of the fact
           that a VMA might be shared and already have its vm_mm assigned to another
           process or a dead process.
      
      A new struct (vm_region) is introduced to track a mapped region and to
      remember the circumstances under which it may be shared, and the
      vm_list_struct structure is discarded as it's no longer required.
      
      This patch makes the following additional changes:
      
       (1) Regions are now allocated with alloc_pages() rather than kmalloc() and
           with no recourse to __GFP_COMP, so the pages are not composite.  Instead,
           each page has a reference on it held by the region.  Anything else that is
           interested in such a page will have to get a reference on it to retain it.
           When the pages are released due to unmapping, each page is passed to
           put_page() and will be freed when the page usage count reaches zero.
      
       (2) Excess pages are trimmed after an allocation as the allocation must be
           made as a power-of-2 quantity of pages.
      
       (3) VMAs are added to the parent MM's R/B tree and mmap lists.  As an MM may
           end up with overlapping VMAs within the tree, the VMA struct address is
           appended to the sort key.
      
       (4) Non-anonymous VMAs are now added to the backing inode's prio list.
      
       (5) Holes may be punched in anonymous VMAs with munmap(), releasing parts of
           the backing region.  The VMA and region structs will be split if
           necessary.
      
       (6) sys_shmdt() only releases one attachment to a SYSV IPC shared memory
           segment instead of all the attachments at that address.  Multiple
           shmat()'s return the same address under NOMMU-mode instead of different
           virtual addresses as under MMU-mode.
      
       (7) Core dumping for ELF-FDPIC requires fewer exceptions for NOMMU-mode.
      
       (8) /proc/maps is now the global list of mapped regions, and may list bits
           that aren't actually mapped anywhere.
      
       (9) /proc/meminfo gains a line (tagged "MmapCopy") that indicates the amount
           of RAM currently allocated by mmap to hold mappable regions that can't be
           mapped directly.  These are copies of the backing device or file if not
           anonymous.
      
      These changes make NOMMU mode more similar to MMU mode.  The downside is
      that NOMMU mode now needs some extra memory to track things, compared to
      NOMMU without this patch (VMAs are no longer shared, and there are now
      region structs).
      Signed-off-by: David Howells <dhowells@redhat.com>
      Tested-by: Mike Frysinger <vapier.adi@gmail.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
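      From the description, the new structure is roughly shaped as follows; a
      sketch, not the verbatim declaration from the patch:

          #include <linux/rbtree.h>
          #include <linux/fs.h>
          #include <asm/atomic.h>

          /* One mapped region, shareable between VMAs in different MMs. */
          struct vm_region {
                  struct rb_node  vm_rb;      /* link in global region tree */
                  unsigned long   vm_flags;   /* VMA vm_flags copy */
                  unsigned long   vm_start;   /* start address of region */
                  unsigned long   vm_end;     /* region initialised to here */
                  unsigned long   vm_top;     /* region allocated to here */
                  unsigned long   vm_pgoff;   /* offset in vm_file of vm_start */
                  struct file     *vm_file;   /* backing file or NULL */
                  atomic_t        vm_usage;   /* region usage count */
          };

      In this sketch, keeping an allocated top (vm_top) separate from the
      initialised end (vm_end) is what permits the trimming described in
      point (2): a power-of-2 allocation may leave excess pages between the
      two, which can then be returned.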
    • PCI: arm: use generic INTx swizzle from PCI core · 06df6993
      Committed by Bjorn Helgaas
      Use the generic pci_common_swizzle() instead of arch-specific code.
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
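      The per-bridge step that a generic INTx swizzler applies while walking
      up toward the host bridge is the standard rotation by slot number; a
      sketch of that step (the helper name here is illustrative):

          #include <linux/pci.h>

          /* Rotate INTA..INTD by the device's slot at each bridge crossed. */
          static u8 bridge_swizzle_sketch(u8 pin, struct pci_dev *dev)
          {
                  return (((pin - 1) + PCI_SLOT(dev->devfn)) % 4) + 1;
          }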
  4. 07 January 2009, 2 commits
  5. 16 December 2008, 1 commit
  6. 14 December 2008, 1 commit
  7. 13 December 2008, 1 commit
  8. 06 December 2008, 1 commit
    • [ARM] 5340/1: fix stack placement after noexecstack changes · 794baba6
      Committed by Lennert Buytenhek
      Commit 8ec53663 ("[ARM] Improve
      non-executable support") added support for detecting non-executable
      stack binaries.  One of the things it does is to make READ_IMPLIES_EXEC
      be set in ->personality if we are running on a CPU that doesn't support
      the XN ("Execute Never") page table bit or if we are running a binary
      that needs an executable stack.
      
      This exposed a latent bug in ARM's asm/processor.h due to which we'll
      end up placing the stack at a very low address, where it will bump into
      the heap on any application that uses a significant amount of stack or
      heap or both, causing many interesting crashes.
      
      Fix this by testing the ADDR_LIMIT_32BIT bit in ->personality instead
      of testing for equality against PER_LINUX_32BIT.
      Reviewed-by: Nicolas Pitre <nico@marvell.com>
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
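      A sketch of the before/after test, assuming STACK_TOP in
      asm/processor.h selects between the 32-bit and 26-bit task size limits:

          /* Before: breaks once any extra flag (e.g. READ_IMPLIES_EXEC)
           * is OR'ed into ->personality, since equality no longer holds. */
          #define STACK_TOP_OLD \
                  ((current->personality == PER_LINUX_32BIT) ? \
                   TASK_SIZE : TASK_SIZE_26)

          /* After: test only the address-limit bit, ignoring other flags. */
          #define STACK_TOP_NEW \
                  ((current->personality & ADDR_LIMIT_32BIT) ? \
                   TASK_SIZE : TASK_SIZE_26)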
  9. 04 December 2008, 1 commit
    • [ARM] 5339/1: fix __fls() on ARM · 94fc7336
      Committed by Nicolas Pitre
      Commit 0c65f459 intended to fix truncation issues with fls() on
      ARMv5+ by renaming it to __fls() and wrapping it into a C function.
      However that didn't take into account the fact that __fls() already
      had different semantics in the kernel.
      
      Let's move the __fls() code into fls() function directly, and redefine
      __fls() with the appropriate semantics.  While at it, bring a generic
      __fls() definition for pre ARMv5 too.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
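      The two semantics being separated, for reference (portable C; on
      ARMv5+ both are implemented with the clz instruction):

          /* fls(x):   one-based position of the top set bit; fls(0) == 0.
           * __fls(x): zero-based index of the top set bit; undefined at 0. */
          static inline int fls_sketch(unsigned int x)
          {
                  int r = 0;

                  while (x) {
                          x >>= 1;
                          r++;
                  }
                  return r;                       /* fls_sketch(1) == 1 */
          }

          static inline unsigned long __fls_sketch(unsigned long x)
          {
                  return fls_sketch(x) - 1;       /* __fls_sketch(1) == 0 */
          }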
  10. 01 December 2008, 1 commit
  11. 30 November 2008, 5 commits
  12. 29 November 2008, 2 commits
  13. 28 November 2008, 5 commits
  14. 27 November 2008, 3 commits
  15. 13 November 2008, 1 commit
  16. 12 November 2008, 1 commit
  17. 09 November 2008, 1 commit
    • [ARM] iop: iop3xx needs registers mapped uncached+unbuffered · ebb4c658
      Committed by Russell King
      Mikael Pettersson reported:
      
         The 2.6.28-rc kernels fail to detect PCI device 0000:00:01.0
         (the first ethernet port) on my Thecus n2100 XScale box.
      
         There is however still a strange "ghost" device that gets partially
         detected in 2.6.28-rc2 vanilla.
      
      The IOP321 manual says:
      
        The user designates the memory region containing the OCCDR as
        non-cacheable and non-bufferable from the Intel® XScale™ core.
        This guarantees that all load/stores to the OCCDR are only of
        DWORD quantities.
      
      Ensure that the OCCDR is so mapped.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
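      On ARM such a requirement is typically expressed through the static
      I/O mapping tables; a sketch with placeholder addresses (the real
      iop3xx values live under arch/arm), assuming a memory type providing
      the uncached, unbuffered attributes:

          #include <asm/mach/map.h>

          static struct map_desc occdr_map_sketch __initdata = {
                  .virtual = 0xfee00000,                  /* placeholder VA */
                  .pfn     = __phys_to_pfn(0xffffe000),   /* placeholder PA */
                  .length  = PAGE_SIZE,
                  .type    = MT_UNCACHED,  /* uncached + unbuffered */
          };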
  18. 07 November 2008, 2 commits
  19. 06 November 2008, 2 commits
  20. 03 November 2008, 1 commit
  21. 31 October 2008, 2 commits
    • ftrace: nmi safe code clean ups · a26a2a27
      Committed by Steven Rostedt
      Impact: cleanup
      
      This patch cleans up the NMI safe code for dynamic ftrace as suggested
      by Andrew Morton.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: nmi safe code modification · 17666f02
      Committed by Steven Rostedt
      Impact: fix crashes that can occur in NMI handlers, if their code is modified
      
      Modifying code is something that needs special care. On SMP boxes,
      if code that is being modified is also being executed on another CPU,
      that CPU will have undefined results.
      
      The dynamic ftrace uses kstop_machine to make the system act like a
      uniprocessor system.  But this does not address NMIs, which can still
      run on other CPUs.
      
      One approach to handling this is to ensure that no code used by NMIs is
      ever traced.  But NMIs can call notifiers that spread throughout the
      kernel, so this would be very hard to maintain, and the chance of
      missing a function is very high.
      
      The approach that this patch takes is to have the NMIs modify the code
      if the modification is taking place. The way this works is that just
      writing to code executing on another CPU is not harmful if what is
      written is the same as what exists.
      
      Two buffers are used: an IP buffer and a "code" buffer.
      
      The steps that the patcher takes are:
      
        1) Put the instruction pointer into the IP buffer
           and the new code into the "code" buffer.
        2) Set a flag that says we are modifying code.
        3) Wait for any running NMIs to finish.
        4) Write the code.
        5) Clear the flag.
        6) Wait for any running NMIs to finish.
      
      If an NMI is executed, it will also write the pending code.
      Multiple writes are OK, because what is being written is the same.
      Then the patcher must wait for all running NMIs to finish before
      going to the next line that must be patched.
      
      This is basically the RCU approach to code modification.
      
      Thanks to Ingo Molnar for suggesting the idea, and to Arjan van de Ven
      for his guidance on what is safe and what is not.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
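      A sketch of that protocol with illustrative names (the real x86
      implementation in arch/x86/kernel/ftrace.c adds memory barriers and
      fault handling that are omitted here):

          #include <linux/string.h>
          #include <asm/atomic.h>
          #include <asm/processor.h>      /* cpu_relax() */

          #define INSN_SIZE 5             /* x86 call instruction */

          static void *mod_code_ip;                       /* IP buffer */
          static unsigned char mod_code_new[INSN_SIZE];   /* "code" buffer */
          static atomic_t in_nmi;
          static int mod_code_write;      /* "modifying code" flag */

          /* NMI entry hook: if a write is pending, perform it too.
           * Duplicate writes are harmless since the bytes are identical. */
          void ftrace_nmi_enter_sketch(void)
          {
                  atomic_inc(&in_nmi);
                  if (mod_code_write)
                          memcpy(mod_code_ip, mod_code_new, INSN_SIZE);
          }

          void ftrace_nmi_exit_sketch(void)
          {
                  atomic_dec(&in_nmi);
          }

          /* Patcher side, following the numbered steps in the message. */
          static void do_mod_code_sketch(void *ip, void *new_code)
          {
                  mod_code_ip = ip;                       /* step 1 */
                  memcpy(mod_code_new, new_code, INSN_SIZE);

                  mod_code_write = 1;                     /* step 2 */
                  while (atomic_read(&in_nmi))            /* step 3 */
                          cpu_relax();
                  memcpy(ip, new_code, INSN_SIZE);        /* step 4 */
                  mod_code_write = 0;                     /* step 5 */
                  while (atomic_read(&in_nmi))            /* step 6 */
                          cpu_relax();
          }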
  22. 23 October 2008, 1 commit