1. 18 May 2009: 4 commits
  2. 17 May 2009: 1 commit
  3. 08 May 2009: 1 commit
  4. 20 Apr 2009: 1 commit
    • [ARM] 5456/1: add sys_preadv and sys_pwritev · eb8f3142
      Committed by Mikael Pettersson
      Kernel 2.6.30-rc1 added sys_preadv and sys_pwritev to most archs
      but not ARM, resulting in
      
      <stdin>:1421:2: warning: #warning syscall preadv not implemented
      <stdin>:1425:2: warning: #warning syscall pwritev not implemented
      
      This patch adds sys_preadv and sys_pwritev to ARM.
      
      These syscalls simply take five long-sized parameters, so they
      should have no calling-convention/ABI issues in the kernel.
      
      Tested on armv5tel eabi using a preadv/pwritev test program posted
      on linuxppc-dev earlier this month.
      
      It would be nice to get this into the kernel before 2.6.30 final,
      so that glibc's kernel version feature test for these syscalls
      doesn't have to special-case ARM.
      Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      eb8f3142
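
      As a point of reference, here is a minimal userspace sketch of the
      scatter/gather read these syscalls provide (this is the standard
      preadv(2) API; the input path below is just a placeholder):

        /* preadv_demo.c: fill two buffers from a fixed offset in one syscall */
        #define _DEFAULT_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/uio.h>
        #include <unistd.h>

        int main(void)
        {
                char hdr[8], body[56];
                struct iovec iov[2] = {
                        { .iov_base = hdr,  .iov_len = sizeof(hdr)  },
                        { .iov_base = body, .iov_len = sizeof(body) },
                };
                int fd = open("/etc/hostname", O_RDONLY);  /* placeholder file */
                if (fd < 0)
                        return 1;
                /* At the syscall level these are the five long-sized arguments
                 * mentioned above: fd, iov, iovcnt, plus the 64-bit offset
                 * split in two on 32-bit ABIs. The file position is unchanged. */
                ssize_t n = preadv(fd, iov, 2, 0);
                printf("read %zd bytes\n", n);
                close(fd);
                return 0;
        }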
  5. 15 Apr 2009: 1 commit
    • [ARM] 5450/1: Flush only the needed range when unmapping a VMA · 7fccfc00
      Committed by Aaro Koskinen
      When unmapping N pages (e.g. shared memory) the number of TLB flushes
      done can be (N*PAGE_SIZE/ZAP_BLOCK_SIZE)*N although it should be N at
      maximum. With a PREEMPT kernel, ZAP_BLOCK_SIZE is 8 pages, so there is a
      noticeable performance penalty when unmapping a large VMA and the system
      spends its time in flush_tlb_range().
      
      The problem is that tlb_end_vma() is always flushing the full VMA
      range. The subrange that needs to be flushed can be calculated by
      tlb_remove_tlb_entry(). This approach was suggested by Hugh Dickins,
      and is also used by other arches.
      
      The speed increase is roughly 3x for 8M mappings, and even more for
      larger mappings.
      Signed-off-by: Aaro Koskinen <Aaro.Koskinen@nokia.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      7fccfc00
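
      A hedged sketch of the mechanism (names modeled on the text; the real
      code lives in the ARM asm/tlb.h): every removed entry widens a pending
      range, and the end-of-VMA flush covers only that range.

        /* tlb_range_demo.c: model of range-tracked TLB flushing */
        #include <stdio.h>

        #define PAGE_SIZE 4096UL

        struct mmu_gather_model {
                unsigned long range_start;  /* lowest address unmapped so far */
                unsigned long range_end;    /* one past the highest unmapped */
        };

        static void tlb_remove_tlb_entry_model(struct mmu_gather_model *tlb,
                                               unsigned long addr)
        {
                if (addr < tlb->range_start)
                        tlb->range_start = addr;
                if (addr + PAGE_SIZE > tlb->range_end)
                        tlb->range_end = addr + PAGE_SIZE;
        }

        static void tlb_end_vma_model(struct mmu_gather_model *tlb)
        {
                if (tlb->range_start < tlb->range_end)  /* flush subrange only */
                        printf("flush_tlb_range(0x%lx, 0x%lx)\n",
                               tlb->range_start, tlb->range_end);
        }

        int main(void)
        {
                struct mmu_gather_model tlb = { ~0UL, 0 };
                tlb_remove_tlb_entry_model(&tlb, 0x40001000);
                tlb_remove_tlb_entry_model(&tlb, 0x40003000);
                tlb_end_vma_model(&tlb);  /* only 0x40001000-0x40004000 flushed */
                return 0;
        }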
  6. 09 Apr 2009: 1 commit
  7. 03 Apr 2009: 1 commit
  8. 25 Mar 2009: 3 commits
  9. 23 Mar 2009: 1 commit
    • [ARM] pxa: add base support for Marvell's PXA168 processor line · 49cbe786
      Committed by Eric Miao
      """The Marvell® PXA168 processor is the first in a family of application
      processors targeted at mass market opportunities in computing and consumer
      devices. It balances high computing and multimedia performance with low
      power consumption to support extended battery life, and includes a wealth
      of integrated peripherals to reduce overall BOM cost .... """
      
      See http://www.marvell.com/featured/pxa168.jsp for more information.
      
        1. The Marvell Mohawk core is a hybrid of xscale3 and Marvell's own
           ARM core; it adds many enhancements, such as instructions for
           flushing the whole D-cache.
      
        2. The clock code reuses Russell's common clkdev, and basic support
           for UART1/2 has been added.
      
        3. Devices are handled a bit differently from the 'mach-pxa' way: the
           platform devices are now dynamically allocated only when necessary
           (i.e. when pxa_register_device() is called). The description for
           each device is stored in an array of 'struct pxa_device_desc' (see
           the sketch after this entry). Note that:

           a. this array of device descriptions is marked with __initdata and
              can be freed once the system is fully up

           b. this means board code has to add all needed devices early in
              its initialization function

           c. platform-specific data can now be marked as __initdata since
              it is allocated and copied by platform_device_add_data()

        4. Only the basic UART1/2/3 are added; more devices will come later.
      Signed-off-by: Jason Chagas <chagas@marvell.com>
      Signed-off-by: Eric Miao <eric.miao@marvell.com>
      49cbe786
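
      The registration sketch referenced in point 3 above, hedged and
      simplified (structure and helper names are modeled on the text, not the
      actual mach-mmp sources):

        /* pxa_register_demo.c: descriptions are static tables, devices are
         * allocated only on demand */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct pxa_device_desc {
                const char *name;  /* e.g. a UART device */
                int id;
        };

        /* in the kernel this table would be __initdata, freed after boot */
        static const struct pxa_device_desc pxa168_uart1 = { "pxa2xx-uart", 0 };

        static int pxa_register_device_model(const struct pxa_device_desc *d,
                                             const void *data, size_t size)
        {
                /* platform_device_add_data() copies the data, so the caller's
                 * copy may itself be __initdata */
                void *pdata = malloc(size);
                if (!pdata)
                        return -1;
                memcpy(pdata, data, size);
                printf("registered %s.%d\n", d->name, d->id);
                free(pdata);  /* the real code hands this to the device */
                return 0;
        }

        int main(void)
        {
                int baud = 115200;  /* hypothetical platform data */
                return pxa_register_device_model(&pxa168_uart1, &baud,
                                                 sizeof(baud));
        }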
  10. 21 Mar 2009: 1 commit
  11. 20 Mar 2009: 1 commit
    • [ARM] pass reboot command line to arch_reset() · be093beb
      Committed by Russell King
      OMAP wishes to pass state to the boot loader upon reboot in order to
      instruct it whether to wait for USB-based reflashing or not.  There is
      already a facility to do this via the reboot() syscall, except we ignore
      the string passed to machine_restart().
      
      This patch fixes things to pass this string to arch_reset().  This means
      that we keep the reboot mode limited to telling the kernel _how_ to
      perform the reboot, which should be independent of what we request the
      boot loader to do.
      Acked-by: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      be093beb
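
      A hedged sketch of what a machine class can now do with the string (the
      command name and hook body are illustrative, not OMAP's actual code):

        /* arch_reset_demo.c: model of the new arch_reset(mode, cmd) contract */
        #include <stdio.h>
        #include <string.h>

        static void arch_reset_model(char mode, const char *cmd)
        {
                /* 'mode' still tells the kernel how to reboot; 'cmd' is the
                 * string from the reboot() syscall, meant for the boot loader */
                if (cmd && strcmp(cmd, "usb-flash") == 0)  /* hypothetical */
                        printf("flagging boot loader: wait for USB reflash\n");
                else
                        printf("plain '%c'-mode reset\n", mode);
        }

        int main(void)
        {
                arch_reset_model('h', "usb-flash");
                arch_reset_model('h', NULL);
                return 0;
        }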
  12. 17 Mar 2009: 1 commit
  13. 16 Mar 2009: 5 commits
    • [ARM] Feroceon: add highmem support to L2 cache handling code · 1bb77267
      Committed by Nicolas Pitre
      The choice is between looping over the physical range and performing
      single cache line operations, or mapping highmem pages somewhere, as
      cache range ops are possible only on virtual addresses.

      Because L2 range ops are much faster, we go with the latter by factoring
      out the physical-to-virtual address conversion and using a fixmap entry
      for it in the HIGHMEM case.
      
      Possible future optimizations to avoid the pte setup cost:
      
       - do the pte setup for highmem pages only
      
       - determine a threshold for doing a line-by-line processing on physical
         addresses when the range is small
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      1bb77267
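
      A hedged sketch of the address selection being described (boundaries and
      helper names are illustrative; the real code is the Feroceon L2 driver):

        /* l2_virt_demo.c: pick a virtual address for a physically-addressed
         * L2 range op */
        #include <stdio.h>

        #define PAGE_OFFSET  0xc0000000UL  /* typical 3G/1G split */
        #define LOWMEM_LIMIT 0x30000000UL  /* illustrative highmem boundary */

        static unsigned long l2_virt_for(unsigned long phys)
        {
                if (phys < LOWMEM_LIMIT)  /* lowmem: permanent direct mapping */
                        return phys + PAGE_OFFSET;
                /* highmem: install a temporary fixmap PTE for this page and
                 * run the range op through that window instead */
                printf("kmapping 0x%08lx via fixmap\n", phys);
                return 0xfffe8000UL;  /* illustrative fixmap slot */
        }

        int main(void)
        {
                printf("virt=0x%08lx\n", l2_virt_for(0x10000000UL));
                printf("virt=0x%08lx\n", l2_virt_for(0x50000000UL));
                return 0;
        }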
    • [ARM] make page_to_dma() highmem aware · 58edb515
      Committed by Nicolas Pitre
      If a machine class has a custom __virt_to_bus() implementation then it
      must also provide a __arch_page_to_dma() implementation which is _not_
      based on page_address(), in order to support highmem.
      
      This patch fixes the existing __arch_page_to_dma() implementations and
      provides a default implementation otherwise.  The default implementation
      for highmem is based on __pfn_to_bus() which is defined only when no
      custom __virt_to_bus() is provided by the machine class.
      
      That leaves only ebsa110 and footbridge which cannot support highmem
      until they provide their own __arch_page_to_dma() implementation.
      But highmem support on those legacy platforms with limited memory is
      certainly not a priority.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      58edb515
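
      A hedged model of why the pfn-based default is highmem-safe while a
      page_address()-based conversion is not (offsets are illustrative):

        /* page_to_dma_demo.c */
        #include <stdio.h>

        #define PAGE_SHIFT  12
        #define PAGE_OFFSET 0xc0000000UL

        struct page_model {
                unsigned long pfn;
                void *virt;  /* NULL for an unmapped highmem page */
        };

        /* highmem-safe: goes through the pfn, like __pfn_to_bus() */
        static unsigned long dma_from_pfn(const struct page_model *p)
        {
                return p->pfn << PAGE_SHIFT;  /* bus == phys in this model */
        }

        /* broken for highmem: page_address() has nothing to return */
        static unsigned long dma_from_virt(const struct page_model *p)
        {
                return (unsigned long)p->virt - PAGE_OFFSET;
        }

        int main(void)
        {
                struct page_model high = { .pfn = 0x90000, .virt = NULL };
                printf("dma = 0x%lx\n", dma_from_pfn(&high));
                (void)dma_from_virt;  /* would compute garbage for 'high' */
                return 0;
        }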
    • [ARM] introduce dma_cache_maint_page() · 43377453
      Committed by Nicolas Pitre
      This is a helper to be used by the DMA mapping API to handle cache
      maintenance for memory identified by a page structure instead of a
      virtual address.  Those pages may or may not be highmem pages, and
      when they're highmem pages, they may or may not be virtually mapped.
      When they're not mapped then there is no L1 cache to worry about. But
      even in that case the L2 cache must be processed since unmapped highmem
      pages can still be L2 cached.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      43377453
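
      A hedged model of the per-page decision described above (the predicate
      names are illustrative):

        /* maint_page_demo.c: L1 maintenance only applies to pages with a
         * virtual mapping; L2 must be handled for every page */
        #include <stdbool.h>
        #include <stdio.h>

        struct page_model {
                bool highmem;
                bool mapped;  /* currently kmapped somewhere? */
        };

        static void dma_cache_maint_page_model(const struct page_model *p)
        {
                if (!p->highmem || p->mapped)
                        printf("L1 maintenance on the virtual address\n");
                /* even an unmapped highmem page may have lines in L2 */
                printf("L2 maintenance on the physical address\n");
        }

        int main(void)
        {
                struct page_model unmapped_high = { true, false };
                dma_cache_maint_page_model(&unmapped_high);  /* L2 only */
                return 0;
        }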
    • [ARM] kmap support · d73cd428
      Committed by Nicolas Pitre
      The kmap virtual area borrows a 2MB range at the top of the 16MB area
      below PAGE_OFFSET currently reserved for kernel modules and/or the
      XIP kernel.  This 2MB corresponds to the range covered by 2 consecutive
      second-level page tables, or a single pmd entry as seen by the Linux
      page table abstraction.  Because XIP kernels are unlikely to be seen
      on systems needing highmem support, there shouldn't be any shortage of
      VM space for modules (14 MB for modules is still way more than twice the
      typical usage).
      
      Because the virtual mapping of highmem pages can go away at any moment
      after kunmap() is called on them, we need to bypass the delayed cache
      flushing provided by flush_dcache_page() in that case.
      
      The atomic kmap versions are based on fixmaps, and
      __cpuc_flush_dcache_page() is used directly in that case.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      d73cd428
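
      For a sense of the numbers, a small sketch of the window layout (this
      assumes the common 0xc0000000 PAGE_OFFSET; the exact constants live in
      the ARM headers):

        /* kmap_layout_demo.c: 2MB borrowed below PAGE_OFFSET = 512 page slots */
        #include <stdio.h>

        #define PAGE_OFFSET 0xc0000000UL
        #define PMD_SIZE    0x00200000UL  /* 2MB: two second-level page tables */
        #define PAGE_SIZE   0x1000UL

        int main(void)
        {
                unsigned long pkmap_base = PAGE_OFFSET - PMD_SIZE;
                printf("kmap window 0x%lx-0x%lx, %lu slots\n",
                       pkmap_base, PAGE_OFFSET - 1, PMD_SIZE / PAGE_SIZE);
                return 0;
        }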
    • [ARM] fixmap support · 5f0fbf9e
      Committed by Nicolas Pitre
      This is the minimum fixmap interface expected to be implemented by
      architectures supporting highmem.
      
      We have a second level page table already allocated and covering
      0xfff00000-0xffffffff because the exception vector page is located
      at 0xffff0000, and various cache tricks already use some entries above
      0xffff0000.  Therefore the PTEs covering 0xfff00000-0xfffeffff are free
      to be used.
      
      However the XScale cache flushing code already uses virtual addresses
      between 0xfffe0000 and 0xfffeffff.
      
      So this reserves the 0xfff00000-0xfffdffff range for fixmap stuff.
      
      The Documentation/arm/memory.txt information is updated accordingly,
      including the information about the actual top of the DMA memory mapping
      region, which didn't match the code.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      5f0fbf9e
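
      A minimal sketch of the resulting address arithmetic, using the range
      reserved above (the index-to-address mapping is the usual fixmap idea;
      the macro layout in the real header may differ):

        /* fixmap_demo.c */
        #include <stdio.h>

        #define FIXADDR_START 0xfff00000UL
        #define FIXADDR_END   0xfffe0000UL  /* clear of the XScale flush area */
        #define PAGE_SHIFT    12

        static unsigned long fix_to_virt(unsigned int idx)
        {
                return FIXADDR_START + ((unsigned long)idx << PAGE_SHIFT);
        }

        int main(void)
        {
                unsigned int slots = (FIXADDR_END - FIXADDR_START) >> PAGE_SHIFT;
                printf("%u slots, first 0x%lx, last 0x%lx\n",
                       slots, fix_to_virt(0), fix_to_virt(slots - 1));
                return 0;
        }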
  14. 13 Mar 2009: 1 commit
    • [ARM] 5422/1: ARM: MMU: add a Non-cacheable Normal executable memory type · e4707dd3
      Committed by Paul Walmsley
      This patch adds a Non-cacheable Normal ARM executable memory type,
      MT_MEMORY_NONCACHED.
      
      On OMAP3, this is used for rapid dynamic voltage/frequency scaling in
      the VDD2 voltage domain. OMAP3's SDRAM controller (SDRC) is in the
      VDD2 voltage domain, and its clock frequency must change along with
      voltage. The SDRC clock change code cannot run from SDRAM itself,
      since SDRAM accesses are paused during the clock change. So the
      current implementation of the DVFS code executes from OMAP on-chip
      SRAM, aka "OCM RAM."
      
      If the OCM RAM pages are marked as Cacheable, the ARM cache controller
      will attempt to flush dirty cache lines to the SDRC, so it can fill
      those lines with OCM RAM instruction code. The problem is that the
      SDRC is paused during DVFS, and so any SDRAM access causes the ARM MPU
      subsystem to hang.
      
      TI's original solution to this problem was to mark the OCM RAM
      sections as Strongly Ordered memory, thus preventing caching. This is
      overkill: since the memory is marked as non-bufferable, OCM RAM writes
      become needlessly slow. The idea of "Strongly Ordered SRAM" is also
      conceptually disturbing. Previous LAKML list discussion is here:
      
      http://www.spinics.net/lists/arm-kernel/msg54312.html
      
      This memory type MT_MEMORY_NONCACHED is used for OCM RAM by a future
      patch.
      
      Cc: Richard Woodruff <r-woodruff2@ti.com>
      Signed-off-by: Paul Walmsley <paul@pwsan.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      e4707dd3
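
      A simplified view of the attribute distinction the patch relies on (real
      encodings use the TEX/C/B bits in arch/arm/mm/mmu.c and vary by
      architecture revision; this table only captures the reasoning above):

        /* memtype_demo.c: non-cacheable need not mean non-bufferable */
        #include <stdio.h>

        struct memtype { const char *name; int cacheable, bufferable; };

        static const struct memtype t[] = {
                { "MT_MEMORY (Normal)",  1, 1 },
                { "MT_MEMORY_NONCACHED", 0, 1 },  /* new: writes still buffered */
                { "Strongly Ordered",    0, 0 },  /* the overkill: slow writes */
        };

        int main(void)
        {
                for (unsigned int i = 0; i < sizeof(t) / sizeof(t[0]); i++)
                        printf("%-20s C=%d B=%d\n",
                               t[i].name, t[i].cacheable, t[i].bufferable);
                return 0;
        }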
  15. 22 Feb 2009: 1 commit
  16. 19 Feb 2009: 2 commits
  17. 16 Feb 2009: 1 commit
    • net: new user space API for time stamping of incoming and outgoing packets · cb9eff09
      Committed by Patrick Ohly
      User space can request hardware and/or software time stamping.
      Reporting of the result(s) via a new control message is enabled
      separately for each field in the message because some of the
      fields may require additional computation and thus cause overhead.
      User space can tell the different kinds of time stamps apart
      and choose what suits its needs.
      
      When a TX timestamp operation is requested, the TX skb will be cloned
      and the clone will be time stamped (in hardware or software) and added
      to the error queue of the socket associated with the skb, if there is
      one.
      
      The actual TX timestamp will reach userspace as an RX timestamp on the
      cloned packet. If timestamping is requested and no timestamping is
      done in the device driver (potentially this may use hardware
      timestamping), it will be done in software after the device's
      hard_start_xmit routine.
      Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cb9eff09
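
      A hedged userspace sketch of the API (flag and cmsg names are those the
      interface introduces; header locations vary across libc versions, hence
      the fallback defines; bind/connect are omitted for brevity):

        /* rx_tstamp_demo.c: ask for software RX timestamps on a UDP socket */
        #include <linux/errqueue.h>    /* struct scm_timestamping */
        #include <linux/net_tstamp.h>  /* SOF_TIMESTAMPING_* */
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>

        #ifndef SO_TIMESTAMPING
        # define SO_TIMESTAMPING 37    /* asm-generic/socket.h value */
        #endif
        #ifndef SCM_TIMESTAMPING
        # define SCM_TIMESTAMPING SO_TIMESTAMPING
        #endif

        int main(void)
        {
                int fd = socket(AF_INET, SOCK_DGRAM, 0);
                int flags = SOF_TIMESTAMPING_RX_SOFTWARE |
                            SOF_TIMESTAMPING_SOFTWARE;
                if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
                                         &flags, sizeof(flags)) < 0)
                        return 1;

                char data[2048], ctrl[512];
                struct iovec iov = { data, sizeof(data) };
                struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                                      .msg_control = ctrl,
                                      .msg_controllen = sizeof(ctrl) };
                if (recvmsg(fd, &msg, 0) < 0)  /* blocks until a packet */
                        return 1;

                for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c;
                     c = CMSG_NXTHDR(&msg, c)) {
                        if (c->cmsg_level != SOL_SOCKET ||
                            c->cmsg_type != SCM_TIMESTAMPING)
                                continue;
                        struct scm_timestamping ts;
                        memcpy(&ts, CMSG_DATA(c), sizeof(ts));
                        /* ts.ts[0] = software, ts.ts[2] = raw hardware */
                        printf("sw stamp %ld.%09ld\n",
                               (long)ts.ts[0].tv_sec, ts.ts[0].tv_nsec);
                }
                return 0;
        }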
  18. 12 Feb 2009: 4 commits
  19. 01 Feb 2009: 3 commits
  20. 15 Jan 2009: 1 commit
  21. 08 Jan 2009: 2 commits
    • NOMMU: Make VMAs per MM as for MMU-mode linux · 8feae131
      Committed by David Howells
      Make VMAs per mm_struct as for MMU-mode linux.  This solves two problems:
      
       (1) In SYSV SHM where nattch for a segment does not reflect the number of
           shmat's (and forks) done.
      
       (2) In mmap() where the VMA's vm_mm is set to point to the parent mm by an
           exec'ing process when VM_EXECUTABLE is specified, regardless of the fact
           that a VMA might be shared and already have its vm_mm assigned to another
           process or a dead process.
      
      A new struct (vm_region) is introduced to track a mapped region and to
      remember the circumstances under which it may be shared, and the
      vm_list_struct structure is discarded as it is no longer required.
      
      This patch makes the following additional changes:
      
       (1) Regions are now allocated with alloc_pages() rather than kmalloc() and
           with no recourse to __GFP_COMP, so the pages are not composite.  Instead,
           each page has a reference on it held by the region.  Anything else that is
           interested in such a page will have to get a reference on it to retain it.
           When the pages are released due to unmapping, each page is passed to
           put_page() and will be freed when the page usage count reaches zero.
      
       (2) Excess pages are trimmed after an allocation as the allocation must be
           made as a power-of-2 quantity of pages.
      
       (3) VMAs are added to the parent MM's R/B tree and mmap lists.  As an MM may
           end up with overlapping VMAs within the tree, the VMA struct address is
           appended to the sort key.
      
       (4) Non-anonymous VMAs are now added to the backing inode's prio list.
      
       (5) Holes may be punched in anonymous VMAs with munmap(), releasing parts of
           the backing region.  The VMA and region structs will be split if
           necessary.
      
       (6) sys_shmdt() only releases one attachment to a SYSV IPC shared memory
           segment instead of all the attachments at that address.  Multiple
           shmat()'s return the same address under NOMMU-mode instead of different
           virtual addresses as under MMU-mode.
      
       (7) Core dumping for ELF-FDPIC requires fewer exceptions for NOMMU-mode.
      
       (8) /proc/maps is now the global list of mapped regions, and may list bits
           that aren't actually mapped anywhere.
      
       (9) /proc/meminfo gains a line (tagged "MmapCopy") that indicates the amount
           of RAM currently allocated by mmap to hold mappable regions that can't be
           mapped directly.  These are copies of the backing device or file if not
           anonymous.
      
      These changes make NOMMU mode more similar to MMU mode.  The downside is
      that NOMMU mode now requires some extra memory for this tracking compared
      to NOMMU without this patch (VMAs are no longer shared, and there are now
      region structs).
      Signed-off-by: David Howells <dhowells@redhat.com>
      Tested-by: Mike Frysinger <vapier.adi@gmail.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      8feae131
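
      A hedged sketch of the new shape (field names are modeled on the text;
      the real struct vm_region lives in the NOMMU mm code):

        /* vm_region_demo.c: many per-mm VMAs can share one backing region */
        #include <stdio.h>

        struct vm_region_model {
                unsigned long start, end;  /* bounds of the backing allocation */
                int usage;                 /* VMAs currently attached */
        };

        struct vma_model {
                struct vm_region_model *region;  /* shared, refcounted */
        };

        int main(void)
        {
                struct vm_region_model shm = { 0x100000, 0x108000, 2 };
                struct vma_model a = { &shm }, b = { &shm };  /* two shmat()s */
                printf("region 0x%lx-0x%lx shared by %d VMAs\n",
                       shm.start, shm.end, shm.usage);
                return (a.region == b.region) ? 0 : 1;
        }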
    • PCI: arm: use generic INTx swizzle from PCI core · 06df6993
      Committed by Bjorn Helgaas
      Use the generic pci_common_swizzle() instead of arch-specific code.
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      06df6993
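
      For reference, the standard swizzle that pci_common_swizzle() applies at
      each bridge on the way to the root (the formula is the usual PCI
      convention; the demo values are arbitrary):

        /* swizzle_demo.c */
        #include <stdio.h>

        /* INTx pins are 1..4 (INTA..INTD); one bridge hop rotates the pin
         * by the device's slot number */
        static unsigned char pci_swizzle(unsigned int slot, unsigned char pin)
        {
                return ((pin - 1 + slot) % 4) + 1;
        }

        int main(void)
        {
                /* a device in slot 2 raising INTA appears as INTC upstream */
                printf("INT%c\n", 'A' + pci_swizzle(2, 1) - 1);
                return 0;
        }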
  22. 07 Jan 2009: 2 commits
  23. 02 Jan 2009: 1 commit