1. 29 November 2008 (1 commit)
  2. 07 November 2008 (1 commit)
  3. 05 November 2008 (1 commit)
  4. 28 October 2008 (2 commits)
  5. 23 October 2008 (6 commits)
  6. 11 October 2008 (1 commit)
  7. 25 September 2008 (1 commit)
    • drivers/net/mlx4/alloc.c needs mm.h · 6526128c
      Authored by Andrew Morton
      sparc32 allmodconfig with linux-next:
      
      drivers/net/mlx4/alloc.c: In function 'mlx4_buf_alloc':
      drivers/net/mlx4/alloc.c:164: error: 'PAGE_KERNEL' undeclared (first use in this function)
      drivers/net/mlx4/alloc.c:164: error: (Each undeclared identifier is reported only once
      drivers/net/mlx4/alloc.c:164: error: for each function it appears in.)
      
      This is due to some header shuffle in linux-next.  I didn't look to see what
      it was.  I'd suggest that this patch be merged ahead of a linux-next merge to
      avoid bisection breaks.
      
      We strictly only need asm/pgtable.h, but going direct to asm includes always
      seems grubby.
      
      Cc: Jeff Garzik <jeff@garzik.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
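      A minimal sketch of the kind of change this entry describes; the surrounding include list is assumed rather than copied from the real file:

      /* drivers/net/mlx4/alloc.c -- illustrative excerpt, not the exact upstream hunk */
      #include <linux/errno.h>
      #include <linux/slab.h>
      #include <linux/mm.h>           /* pulls in asm/pgtable.h, where PAGE_KERNEL lives */

      #include "mlx4.h"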
  8. 16 September 2008 (1 commit)
  9. 03 September 2008 (1 commit)
  10. 27 July 2008 (1 commit)
    • dma-mapping: add the device argument to dma_mapping_error() · 8d8bb39b
      Authored by FUJITA Tomonori
      Add per-device dma_mapping_ops support for CONFIG_X86_64, as the POWER
      architecture does:
      
      This enables us to cleanly fix the Calgary IOMMU issue that some devices
      are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).
      
      I think that per-device dma_mapping_ops support would also be helpful for
      KVM people to support PCI passthrough, but Andi thinks that this makes it
      difficult to support PCI passthrough (see the above thread).  So I
      CC'ed this to the KVM camp.  Comments are appreciated.
      
      A pointer to dma_mapping_ops is added to struct dev_archdata.  If the
      pointer is non-NULL, DMA operations in asm/dma-mapping.h use it.  If it's
      NULL, the system-wide dma_ops pointer is used as before.
      
      If it's useful for KVM people, I plan to implement a mechanism to register
      a hook called when a new pci (or dma capable) device is created (it works
      with hot plugging).  It enables IOMMUs to set up an appropriate
      dma_mapping_ops per device.
      
      The major obstacle is that, unlike other DMA operations, dma_mapping_error
      doesn't take a pointer to the device, so x86 can't have dma_mapping_ops per
      device.  Note that all the POWER IOMMUs use the same dma_mapping_error
      function, so this is not a problem for POWER, but x86 IOMMUs use different
      dma_mapping_error functions.
      
      The first patch adds the device argument to dma_mapping_error.  The patch
      is trivial but large since it touches lots of drivers and dma-mapping.h in
      all the architectures.
      
      This patch:
      
      dma_mapping_error() doesn't take a pointer to the device, unlike other DMA
      operations, so we can't have dma_mapping_ops per device.

      Note that POWER already has dma_mapping_ops per device, but all the POWER
      IOMMUs use the same dma_mapping_error function.  x86 IOMMUs use the device
      argument.
      
      [akpm@linux-foundation.org: fix sge]
      [akpm@linux-foundation.org: fix svc_rdma]
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: fix bnx2x]
      [akpm@linux-foundation.org: fix s2io]
      [akpm@linux-foundation.org: fix pasemi_mac]
      [akpm@linux-foundation.org: fix sdhci]
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: fix sparc]
      [akpm@linux-foundation.org: fix ibmvscsi]
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Muli Ben-Yehuda <muli@il.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Avi Kivity <avi@qumranet.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
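      A minimal sketch of the per-device dispatch this entry describes, once dma_mapping_error() takes a struct device argument.  It is written from the commit's own description rather than copied from the tree, so treat the exact helper and field names as assumptions.

      /* Header-style sketch (asm/dma-mapping.h flavour), illustrative only. */
      static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
      {
              if (dev && dev->archdata.dma_ops)
                      return dev->archdata.dma_ops;   /* per-device ops, e.g. set up by an IOMMU */
              return dma_ops;                         /* system-wide default, as before */
      }

      static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
      {
              struct dma_mapping_ops *ops = get_dma_ops(dev);

              if (ops->mapping_error)
                      return ops->mapping_error(dev, dma_addr);
              return dma_addr == bad_dma_address;
      }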
  11. 26 July 2008 (1 commit)
  12. 25 July 2008 (1 commit)
    • PAGE_ALIGN(): correctly handle 64-bit values on 32-bit architectures · 27ac792c
      Authored by Andrea Righi
      On 32-bit architectures PAGE_ALIGN() truncates 64-bit values to the 32-bit
      boundary. For example:
      
      	u64 val = PAGE_ALIGN(size);
      
      always returns a value < 4GB even if size is greater than 4GB.
      
      The problem resides in the PAGE_MASK definition (from include/asm-x86/page.h,
      for example):
      
      #define PAGE_SHIFT      12
      #define PAGE_SIZE       (_AC(1,UL) << PAGE_SHIFT)
      #define PAGE_MASK       (~(PAGE_SIZE-1))
      ...
      #define PAGE_ALIGN(addr)       (((addr)+PAGE_SIZE-1)&PAGE_MASK)
      
      The "~" is performed on a 32-bit value, so anything greater than 4GB that is
      ANDed with PAGE_MASK will be truncated to the 32-bit boundary.
      Using the ALIGN() macro is the right way to go, because it uses
      typeof(addr) for the mask.
      
      Also move the PAGE_ALIGN() definitions out of include/asm-*/page.h into
      include/linux/mm.h.
      
      See also lkml discussion: http://lkml.org/lkml/2008/6/11/237
      
      [akpm@linux-foundation.org: fix drivers/media/video/uvc/uvc_queue.c]
      [akpm@linux-foundation.org: fix v850]
      [akpm@linux-foundation.org: fix powerpc]
      [akpm@linux-foundation.org: fix arm]
      [akpm@linux-foundation.org: fix mips]
      [akpm@linux-foundation.org: fix drivers/media/video/pvrusb2/pvrusb2-dvb.c]
      [akpm@linux-foundation.org: fix drivers/mtd/maps/uclinux.c]
      [akpm@linux-foundation.org: fix powerpc]
      Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
      Cc: <linux-arch@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
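      A short sketch of why the ALIGN()-based definition keeps the full 64-bit width; the macro bodies below follow that era's include/linux/kernel.h from memory, so treat them as illustrative:

      /* The mask is built in typeof(x), so for a u64 input the "~" is a
         64-bit operation and nothing above 4GB is lost. */
      #define __ALIGN_MASK(x, mask)   (((x) + (mask)) & ~(mask))
      #define ALIGN(x, a)             __ALIGN_MASK(x, (typeof(x))(a) - 1)
      #define PAGE_ALIGN(addr)        ALIGN(addr, PAGE_SIZE)

      /* Example: the old 32-bit PAGE_MASK reduces this to 0x1000, while the
         ALIGN()-based PAGE_ALIGN() yields 0x100001000 as expected. */
      u64 val = PAGE_ALIGN((u64)0x100000001ULL);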
  13. 23 July 2008 (5 commits)
  14. 15 July 2008 (2 commits)
  15. 06 May 2008 (1 commit)
  16. 30 April 2008 (2 commits)
    • mlx4_core: Avoid recycling old FMR R_Keys too soon · bbdc2821
      Authored by Olaf Kirch
      When an FMR is unmapped, mlx4 resets the map count to 0 and clears the
      upper part of the R_Key, which is used as the sequence counter.
      
      This poses a problem for RDS, which uses ib_fmr_unmap as a fence
      operation.  RDS assumes that after issuing an unmap, the old R_Keys
      will be invalid for a "reasonable" period of time.  For instance,
      Oracle processes use shared memory buffers allocated from a pool of
      buffers.  When a process dies, we want to reclaim these buffers -- but
      we must make sure there are no pending RDMA operations to/from those
      buffers.  The only way to achieve that is to unmap and sync the TPT.
      
      However, when the sequence count is reset on unmap, there is a high
      likelihood that a new mapping will be given the same R_Key that was
      issued a few milliseconds ago.
      
      To prevent this, don't reset the sequence count when unmapping an FMR.

      Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
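      A sketch of the idea above; the identifiers and the 24/8 bit split are hypothetical, chosen only to illustrate "keep advancing the sequence bits across unmap/remap instead of resetting them on unmap":

      /* Hypothetical R_Key layout: low bits select the MPT entry, high bits
         are a sequence counter.  The counter is bumped on every remap and is
         never reset on unmap, so a just-invalidated R_Key is not reissued
         milliseconds later. */
      #define FMR_KEY_INDEX_MASK      0x00ffffffu
      #define FMR_KEY_SEQ_SHIFT       24

      static u32 fmr_next_key(u32 old_key)
      {
              u32 index = old_key & FMR_KEY_INDEX_MASK;
              u32 seq   = (old_key >> FMR_KEY_SEQ_SHIFT) + 1;  /* bump, never zero */

              return (seq << FMR_KEY_SEQ_SHIFT) | index;
      }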
    • mlx4_core: Add a way to set the "collapsed" CQ flag · e463c7b1
      Authored by Yevgeny Petrilin
      Extend the mlx4_cq_resize() API with a way to set the "collapsed" flag
      for the CQ being created.

      Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
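      A purely hypothetical sketch of the shape of such an API change (none of the names below are real mlx4 identifiers): the caller passes a "collapsed" flag at CQ creation time and the core sets the corresponding bit in the CQ context it programs into the HCA.

      struct my_cq_context {
              __be32 flags;
              /* ... MTT address, doorbell record, EQ number, ... */
      };

      #define MY_CQ_FLAG_COLLAPSED    (1u << 18)      /* hypothetical bit position */

      static void my_cq_fill_context(struct my_cq_context *ctx, int collapsed)
      {
              ctx->flags = cpu_to_be32(collapsed ? MY_CQ_FLAG_COLLAPSED : 0);
              /* ... fill in the rest of the context before handing it to the HW ... */
      }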
  17. 26 April 2008 (2 commits)
  18. 24 April 2008 (2 commits)
  19. 17 April 2008 (8 commits)