- 06 Sep 2009, 1 commit
Committed by Roland Dreier:
Lots of mlx4 files with no function annotations (__init/__exit) included <linux/init.h> for no reason.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

- 23 Jun 2009, 1 commit
Committed by Roland Dreier:
Commit 5d23a1d2 ("net: replace dma_sync_single with dma_sync_single_for_cpu") replaced uses of the deprecated function dma_sync_single() with calls to dma_sync_single_for_cpu(). However, to be correct, the code should call dma_sync_single_for_cpu() before the CPU touches the memory and dma_sync_single_for_device() after it is done.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
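
A minimal sketch of the ownership rule this fix follows, assuming a streaming DMA mapping already exists; everything except the dma_sync_single_for_cpu()/dma_sync_single_for_device() calls and DMA_BIDIRECTIONAL is a hypothetical name used only for illustration:

    #include <linux/dma-mapping.h>
    #include <linux/string.h>

    /* Hypothetical helper: the CPU wants to read/update a buffer that is
     * currently mapped for streaming DMA. */
    static void touch_dma_buffer(struct device *dev, dma_addr_t dma,
                                 void *vaddr, size_t size)
    {
        /* Hand the buffer to the CPU before touching it ... */
        dma_sync_single_for_cpu(dev, dma, size, DMA_BIDIRECTIONAL);

        /* ... CPU accesses go here ... */
        memset(vaddr, 0, size);

        /* ... and give it back to the device when the CPU is done. */
        dma_sync_single_for_device(dev, dma, size, DMA_BIDIRECTIONAL);
    }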

- 29 May 2009, 1 commit
Committed by FUJITA Tomonori:
This replaces dma_sync_single() with dma_sync_single_for_cpu() because dma_sync_single() is an obsolete API; include/linux/dma-mapping.h says:

    /* Backwards compat, remove in 2.7.x */
    #define dma_sync_single dma_sync_single_for_cpu
    #define dma_sync_sg dma_sync_sg_for_cpu

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 28 May 2008, 1 commit
Committed by Eli Cohen:
The current MTT allocator uses kmalloc() to allocate a buffer for its buddy allocator, and is therefore limited in the number of MTT segments that it can control. As a result, the size of memory that can be registered is limited too. This patch uses a module parameter to control the number of MTT entries that each segment represents, allowing more memory to be registered with the same number of segments.

Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
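
A minimal sketch of declaring such a parameter; the name matches the log_mtts_per_seg parameter that mlx4_core exposes for this purpose, while the default value and description text below are illustrative rather than copied from the driver:

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    static int log_mtts_per_seg = 3;    /* 2^3 = 8 MTT entries per segment */
    module_param(log_mtts_per_seg, int, 0444);
    MODULE_PARM_DESC(log_mtts_per_seg,
                     "Log2 number of MTT entries per memory segment");

Expressing the parameter as a log2 value keeps the segment size a power of two; doubling the number of entries per segment doubles the amount of memory that a fixed number of segments can cover.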

- 23 Oct 2008, 1 commit
Committed by Yevgeny Petrilin:
For Ethernet support, we need to reserve QPs for the Ethernet and Fibre Channel drivers. The QPs are reserved at the end of the QP table (this way we ensure that they are aligned to their size). We need to take these reserved ranges into account in bitmap creation, so we extend the mlx4 bitmap utility functions to allow reserved ranges at both the bottom and the top of the range.

Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
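
A simplified sketch of a bitmap allocator with reserved ranges at both ends, in the spirit of the extended mlx4 bitmap utilities described above; the struct and function names are hypothetical, and the real driver tracks more state (masks, counters, locking):

    #include <linux/bitmap.h>
    #include <linux/bitops.h>
    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/types.h>

    struct simple_bitmap {
        u32            max;    /* usable entries (top reserve excluded) */
        u32            last;   /* where the next search starts */
        unsigned long *table;
    };

    static int simple_bitmap_init(struct simple_bitmap *b, u32 num,
                                  u32 reserved_bot, u32 reserved_top)
    {
        b->max   = num - reserved_top;
        b->last  = 0;
        b->table = bitmap_zalloc(num, GFP_KERNEL);
        if (!b->table)
            return -ENOMEM;
        /* The bottom reserve is marked as in use up front; the top reserve
         * is excluded simply by capping b->max. */
        bitmap_set(b->table, 0, reserved_bot);
        return 0;
    }

    static u32 simple_bitmap_alloc(struct simple_bitmap *b)
    {
        u32 obj = find_next_zero_bit(b->table, b->max, b->last);

        if (obj >= b->max)              /* wrap around once */
            obj = find_next_zero_bit(b->table, b->max, 0);
        if (obj >= b->max)
            return (u32)-1;             /* nothing free */

        set_bit(obj, b->table);
        b->last = obj + 1;
        return obj;
    }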

- 16 Sep 2008, 1 commit
Committed by Vladimir Sokolovsky:
Byte swap the addresses in the page list for fast register work requests to big endian, to match what the HCA expects. Also, the addresses must have the "present" bit set so that the HCA knows it can access them. Otherwise the HCA will fault the first time it accesses the memory region.

Signed-off-by: Vladimir Sokolovsky <vlad@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
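
A minimal sketch of the conversion described above; the helper name and the flag macro are made up for illustration (the "present" bit is bit 0 of an MTT entry in mlx4), and only cpu_to_be64() is a real kernel API here:

    #include <linux/types.h>
    #include <asm/byteorder.h>

    #define EXAMPLE_MTT_FLAG_PRESENT 1ULL   /* bit 0: entry is valid/present */

    /* Convert a CPU-endian page list into the big-endian form the HCA
     * reads, with the "present" bit set in every address. */
    static void build_fastreg_page_list(__be64 *hw_list, const u64 *pages,
                                        int npages)
    {
        int i;

        for (i = 0; i < npages; ++i)
            hw_list[i] = cpu_to_be64(pages[i] | EXAMPLE_MTT_FLAG_PRESENT);
    }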

- 03 Sep 2008, 1 commit
Committed by Vladimir Sokolovsky:
Set the RAE (remote access enable) bit and correctly initialize the MTT size in MPT entries being set up for fast register memory regions. Otherwise the callers can't enable remote access and in fact can't fast register at all (since the HCA will think no MTT entries are allocated).

Signed-off-by: Vladimir Sokolovsky <vlad@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

- 26 Jul 2008, 1 commit
Committed by Jack Morgenstein:
Update existing Mellanox copyright lines to 2008, and add such lines to files where they are missing.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

- 23 Jul 2008, 2 commits
Committed by Roland Dreier:
Add support for the following operations to mlx4 when device firmware supports them:

- Send with invalidate and local invalidate send queue work requests;
- Allocate/free fast register MRs;
- Allocate/free fast register MR page lists;
- Fast register MR send queue work requests;
- Local DMA L_Key.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

Committed by Roland Dreier:
MTT entries are allocated with a buddy allocator, which just keeps bitmaps for each level of the buddy table. However, all free space starts out at the highest order, and small allocations start scanning from the lowest order. When the lowest order tables have no free space, this can lead to scanning potentially millions of bits before finding a free entry at a higher order. We can avoid this by just keeping a count of how many free entries each order has, and skipping the bitmap scan when an order is completely empty. This provides a nice performance boost for a negligible increase in memory usage.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
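
A simplified sketch of that idea, assuming the usual buddy layout of one bitmap per order plus the new per-order free count; the names are hypothetical, and initialization, freeing and locking are omitted:

    #include <linux/bitops.h>

    struct simple_buddy {
        unsigned long **bits;      /* bits[o]: bitmap of free blocks of order o */
        unsigned int   *num_free;  /* how many free blocks each order has */
        int             max_order;
    };

    /* Return the first index of a free block of 2^order entries, or -1. */
    static int simple_buddy_alloc(struct simple_buddy *b, int order)
    {
        int o, seg;

        for (o = order; o <= b->max_order; ++o) {
            if (!b->num_free[o])    /* the new shortcut: skip empty orders */
                continue;
            seg = find_first_bit(b->bits[o], 1 << (b->max_order - o));
            if (seg < 1 << (b->max_order - o))
                goto found;
        }
        return -1;

    found:
        clear_bit(seg, b->bits[o]);
        --b->num_free[o];

        /* Split larger blocks down to the requested order, returning each
         * unused half to its free bitmap (and to its free count). */
        while (o > order) {
            --o;
            seg <<= 1;
            set_bit(seg ^ 1, b->bits[o]);
            ++b->num_free[o];
        }
        return seg << order;
    }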

- 06 May 2008, 1 commit
Committed by Oren Duer:
Don't hard code a test against a minimum page shift of 12, since the device may support smaller pages. Test against the actual smallest page size from the device capabilities.

Signed-off-by: Oren Duer <oren@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
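
A tiny sketch of the before/after check; the capability struct and its min_page_shift field are hypothetical names standing in for whatever the device capabilities actually report:

    #include <linux/errno.h>

    struct example_caps {
        int min_page_shift;     /* smallest page shift the device supports */
    };

    static int check_page_shift(const struct example_caps *caps, int page_shift)
    {
        /* was: if (page_shift < 12) return -EINVAL; */
        if (page_shift < caps->min_page_shift)
            return -EINVAL;
        return 0;
    }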

- 30 Apr 2008, 1 commit
Committed by Olaf Kirch:
When an FMR is unmapped, mlx4 resets the map count to 0 and clears the upper part of the R_Key, which is used as the sequence counter.

This poses a problem for RDS, which uses ib_fmr_unmap as a fence operation. RDS assumes that after issuing an unmap, the old R_Keys will be invalid for a "reasonable" period of time. For instance, Oracle processes use shared memory buffers allocated from a pool of buffers. When a process dies, we want to reclaim these buffers -- but we must make sure there are no pending RDMA operations to/from those buffers. The only way to achieve that is by using unmap and syncing the TPT.

However, when the sequence count is reset on unmap, there is a high likelihood that a new mapping will be given the same R_Key that was issued a few milliseconds ago. To prevent this, don't reset the sequence count when unmapping an FMR.

Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
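
A minimal sketch of the idea, assuming the usual layout in which the low 24 bits of the R_Key select the MPT entry and the top byte acts as the sequence counter; the function name and exact bit layout are illustrative:

    #include <linux/types.h>

    /* Produce the key for the next remap: keep the MPT index bits, but keep
     * counting in the top byte instead of resetting it to zero on unmap. */
    static u32 next_fmr_key(u32 old_key)
    {
        u8 seq = (u8)(old_key >> 24) + 1;

        return ((u32)seq << 24) | (old_key & 0x00ffffff);
    }

With the counter preserved across unmaps, an R_Key handed out before the unmap will not collide with a freshly issued one until the counter wraps.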

- 15 Feb 2008, 1 commit
Committed by Jack Morgenstein:
mlx4_table_find (for FMR MPTs) requires that ICM memory already be mapped. Before this fix, FMR allocation depended on ICM memory already being mapped for the MPT entry. If all currently mapped entries are taken, the find operation fails (even if the MPT ICM table still had more entries, which were just not mapped yet). This fix moves the MPT find operation to fmr_enable, to guarantee that any required ICM memory mapping has already occurred. Found by Oren Duer of Mellanox.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

- 07 Feb 2008, 1 commit
Committed by Roland Dreier:
Now that struct mlx4_buf.u is a struct instead of a union because of the vmap() changes, there's no point in having a struct at all. So move .direct and .page_list directly into struct mlx4_buf and get rid of a bunch of unnecessary ".u"s.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
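
A before/after sketch of the struct change; the field set is abbreviated and the exact layout may differ from the real mlx4 headers of that era:

    #include <linux/types.h>

    struct mlx4_buf_list {
        void       *buf;
        dma_addr_t  map;
    };

    /* Before: .direct and .page_list hidden behind a pointless wrapper. */
    struct mlx4_buf_old {
        struct {
            struct mlx4_buf_list  direct;
            struct mlx4_buf_list *page_list;
        } u;
        int nbufs;
        int npages;
        int page_shift;
    };

    /* After: callers write buf->direct.buf instead of buf->u.direct.buf. */
    struct mlx4_buf_new {
        struct mlx4_buf_list  direct;
        struct mlx4_buf_list *page_list;
        int nbufs;
        int npages;
        int page_shift;
    };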

- 05 Feb 2008, 1 commit
Committed by Roland Dreier:
Commit 3d73c288 ("mlx4_core: Fix section mismatches") fixed some of the section mismatches introduced when error recovery was added, but there were still more cases of error recovery code calling into __devinit code from regular .text. Fix this by getting rid of the now-incorrect __devinit annotations.

Signed-off-by: Roland Dreier <rolandd@cisco.com>

- 11 Oct 2007, 1 commit
Committed by Roland Dreier:
Commit ee49bd93 ("mlx4_core: Reset device when internal error is detected") introduced some section mismatch problems when CONFIG_HOTPLUG=n, because the error recovery code tears down and reinitializes the device after everything is loaded, which ends up calling into lots of code marked __devinit and __devexit from regular .text. Fix this by getting rid of these now-incorrect section markers.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
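
A minimal illustration of the kind of mismatch being removed, using made-up function names; with CONFIG_HOTPLUG=n, the (since-removed) __devinit marker used to place code in .init.text, which is discarded after boot, so a call from regular .text such as an error-recovery path is exactly what modpost warns about:

    #include <linux/init.h>

    static int __devinit example_setup_hca(void)    /* lands in .init.text */
    {
        return 0;
    }

    static int example_error_recovery(void)         /* regular .text */
    {
        /* Section mismatch: .text calling into .init.text, which may
         * already have been freed by the time recovery runs. */
        return example_setup_hca();
    }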

- 10 Oct 2007, 3 commits
Committed by Jack Morgenstein:
Implement FMRs for mlx4. This is an adaptation of code from mthca.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

Committed by Jack Morgenstein:
Write MTT entries directly to ICM from the driver (eliminating use of the WRITE_MTT command). This reduces the number of FW commands needed to register an MR by at least a factor of 2 and speeds up memory registration significantly. This code will also be used to implement FMRs.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
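
A sketch of what writing MTT entries directly into ICM looks like; only cpu_to_be64() and dma_sync_single_for_device() are real kernel APIs here, while icm_table_addr() and the present-bit macro stand in for the driver's own table lookup and flag definitions:

    #include <linux/dma-mapping.h>
    #include <linux/types.h>
    #include <asm/byteorder.h>

    #define EXAMPLE_MTT_FLAG_PRESENT 1ULL

    /* Hypothetical: return the CPU mapping (and bus address) of the ICM
     * chunk that backs MTT entry start_index. */
    __be64 *icm_table_addr(u32 start_index, dma_addr_t *dma);

    static void write_mtt_direct(struct device *dev, u32 start_index,
                                 const u64 *pages, int npages)
    {
        dma_addr_t dma;
        __be64 *mtts = icm_table_addr(start_index, &dma);
        int i;

        /* Fill the entries in place instead of posting a WRITE_MTT
         * firmware command. */
        for (i = 0; i < npages; ++i)
            mtts[i] = cpu_to_be64(pages[i] | EXAMPLE_MTT_FLAG_PRESENT);

        /* Make sure the HCA sees the updated entries. */
        dma_sync_single_for_device(dev, dma, npages * sizeof(__be64),
                                   DMA_TO_DEVICE);
    }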

Committed by Roland Dreier:
Taking ilog2(dev->caps.reserved_mtts) to find out the order to pass to the MTT buddy allocator will do the wrong thing if reserved_mtts is ever not a power of 2. Be safe and use fls(dev->caps.reserved_mtts - 1).

Signed-off-by: Roland Dreier <rolandd@cisco.com>
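
A small worked example of why fls(n - 1) is the safe choice; ilog2() and fls() have their usual kernel semantics here (fls() returns the 1-based index of the highest set bit), and the wrapper function name is illustrative:

    #include <linux/bitops.h>
    #include <linux/types.h>

    /* Order that covers at least `reserved_mtts` entries:
     *   reserved_mtts = 12: ilog2(12)   = 3 -> only 8 entries (too small)
     *                       fls(12 - 1) = 4 -> 16 entries (enough)
     *   reserved_mtts = 16: ilog2(16)   = 4, fls(16 - 1) = 4 (same answer)
     */
    static int reserved_mtt_order(u32 reserved_mtts)
    {
        return fls(reserved_mtts - 1);
    }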

- 28 Jul 2007, 1 commit
Committed by Jack Morgenstein:
mlx4_mr_alloc() doesn't actually allocate mr (it just initializes the pointer that the caller passes in), so it shouldn't free it if an error occurs.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

- 08 Jun 2007, 1 commit
Committed by Jack Morgenstein:
If a dMPT entry has the PA flag (direct physical address) set, then the (unused) MTT base address field has to be set to 0.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>

- 09 May 2007, 1 commit
Committed by Roland Dreier:
Add an InfiniBand driver for Mellanox ConnectX adapters. Because these adapters can also be used as Ethernet NICs and Fibre Channel HBAs, the driver is split into two modules:

- mlx4_core: Handles low-level things like device initialization and processing firmware commands. Also controls resource allocation so that the InfiniBand, Ethernet and FC functions can share a device without stepping on each other.
- mlx4_ib: Handles InfiniBand-specific things; plugs into the InfiniBand midlayer.

Signed-off-by: Roland Dreier <rolandd@cisco.com>