1. 16 Mar 2009, 9 commits
    • [ARM] add CONFIG_HIGHMEM option · 053a96ca
      Nicolas Pitre authored
      Here it is... HIGHMEM for the ARM architecture.  :-)
      
      If you don't have enough RAM for highmem pages to be allocated and still
      want to test this, then the "vmalloc=" command line option can be used
      with a value large enough to force the highmem threshold down.
      
      Successfully tested on a Marvell DB-78x00-BP Development Board with
      2 GB of RAM.
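      For instance, a boot command line along these lines (all values here are
      illustrative, not from the patch) would shrink the directly mapped region
      enough that highmem pages exist even on a machine with modest RAM:

```
# hypothetical kernel command line: a large vmalloc= reservation
# forces the lowmem/highmem threshold down for testing
console=ttyS0,115200 root=/dev/nfs vmalloc=512M
```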
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      053a96ca
    • [ARM] ignore high memory with VIPT aliasing caches · 3f973e22
      Nicolas Pitre authored
      VIPT aliasing caches have issues of their own which are not yet handled.
      Usage of discard_old_kernel_data() in copypage-v6.c is not highmem ready,
      kmap/fixmap stuff doesn't take cache colouring into account, etc.
      If/when those issues are handled then this could be reverted.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      3f973e22
    • [ARM] xsc3: add highmem support to L2 cache handling code · 3902a15e
      Nicolas Pitre authored
      On xsc3, L2 cache ops are possible only on virtual addresses.  The code
      is rearranged so as to have a linear progression requiring the least
      number of pte setups in the highmem case.  To protect the virtual
      mapping so created, interrupts are currently disabled for up to a page
      worth of address range.
      
      The interrupt disabling is done in a way that minimizes the overhead
      within the inner loop.  The alternative would be separate code paths for
      the highmem and non-highmem cases, which is less preferable.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      3902a15e
    • [ARM] Feroceon: add highmem support to L2 cache handling code · 1bb77267
      Nicolas Pitre authored
      The choice is between looping over the physical range and performing
      single cache line operations, or mapping highmem pages somewhere, as
      cache range ops are possible only on virtual addresses.
      
      Because L2 range ops are much faster, we go with the latter by factoring
      out the physical-to-virtual address conversion and using a fixmap entry
      for it in the HIGHMEM case.
      
      Possible future optimizations to avoid the pte setup cost:
      
       - do the pte setup for highmem pages only
      
       - determine a threshold for doing line-by-line processing on physical
         addresses when the range is small
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      1bb77267
    • [ARM] make page_to_dma() highmem aware · 58edb515
      Nicolas Pitre authored
      If a machine class has a custom __virt_to_bus() implementation then it
      must provide a __arch_page_to_dma() implementation as well which is
      _not_ based on page_address() to support highmem.
      
      This patch fixes the existing __arch_page_to_dma() implementations and
      provides a default implementation otherwise.  The default implementation
      for highmem is based on __pfn_to_bus() which is defined only when no
      custom __virt_to_bus() is provided by the machine class.
      
      That leaves only ebsa110 and footbridge which cannot support highmem
      until they provide their own __arch_page_to_dma() implementation.
      But highmem support on those legacy platforms with limited memory is
      certainly not a priority.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      58edb515
    • [ARM] introduce dma_cache_maint_page() · 43377453
      Nicolas Pitre authored
      This is a helper to be used by the DMA mapping API to handle cache
      maintenance for memory identified by a page structure instead of a
      virtual address.  Those pages may or may not be highmem pages, and
      when they're highmem pages, they may or may not be virtually mapped.
      When they're not mapped then there is no L1 cache to worry about. But
      even in that case the L2 cache must be processed since unmapped highmem
      pages can still be L2 cached.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      43377453
    • 3835f6cb
    • [ARM] kmap support · d73cd428
      Nicolas Pitre authored
      The kmap virtual area borrows a 2MB range at the top of the 16MB area
      below PAGE_OFFSET currently reserved for kernel modules and/or the
      XIP kernel.  This 2MB corresponds to the range covered by 2 consecutive
      second-level page tables, or a single pmd entry as seen by the Linux
      page table abstraction.  Because XIP kernels are unlikely to be seen
      on systems needing highmem support, there shouldn't be any shortage of
      VM space for modules (14 MB for modules is still way more than twice the
      typical usage).
      
      Because the virtual mapping of highmem pages can go away at any moment
      after kunmap() is called on them, we need to bypass the delayed cache
      flushing provided by flush_dcache_page() in that case.
      
      The atomic kmap versions are based on fixmaps, and
      __cpuc_flush_dcache_page() is used directly in that case.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      d73cd428
    • [ARM] fixmap support · 5f0fbf9e
      Nicolas Pitre authored
      This is the minimum fixmap interface expected to be implemented by
      architectures supporting highmem.
      
      We have a second level page table already allocated and covering
      0xfff00000-0xffffffff because the exception vector page is located
      at 0xffff0000, and various cache tricks already use some entries above
      0xffff0000.  Therefore the PTEs covering 0xfff00000-0xfffeffff are free
      to be used.
      
      However the XScale cache flushing code already uses virtual addresses
      between 0xfffe0000 and 0xfffeffff.
      
      So this reserves the 0xfff00000-0xfffdffff range for fixmap use.
      
      The Documentation/arm/memory.txt information is updated accordingly,
      including the actual top of the DMA memory mapping region, which
      previously didn't match the code.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      5f0fbf9e
  2. 13 Mar 2009, 3 commits
  3. 07 Mar 2009, 1 commit
  4. 06 Mar 2009, 1 commit
  5. 05 Mar 2009, 5 commits
  6. 04 Mar 2009, 1 commit
  7. 03 Mar 2009, 2 commits
  8. 28 Feb 2009, 1 commit
    • usb: musb: make Davinci *work* in mainline · 34f32c97
      David Brownell authored
      Now that the musb build fixes for DaVinci got merged (RC3?), kick in
      the other bits needed to get it finally *working* in mainline:
      
       - Use clk_enable()/clk_disable() ... the "always enable USB clocks"
         code this originally relied on has since been removed.
      
       - Initialize the USB device only after the relevant I2C GPIOs are
         available, so the host side can properly enable VBUS.
      
       - Tweak init sequencing to cope with mainline's relatively late init
         of the I2C system bus for power switches, transceivers, and so on.
      
      Sanity tested on DM6664 EVM for host and peripheral modes; that system
      won't boot with CONFIG_PM enabled, so OTG can't yet be tested.  Also
      verified on OMAP3.
      
      (Unrelated:  correct the MODULE_PARM_DESC spelling of musb_debug.)
      Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
      Cc: Felipe Balbi <me@felipebalbi.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      34f32c97
  9. 27 Feb 2009, 12 commits
  10. 25 Feb 2009, 2 commits
  11. 23 Feb 2009, 1 commit
  12. 20 Feb 2009, 1 commit
  13. 19 Feb 2009, 1 commit