- 08 Sep 2012, 1 commit
-
-
By Hadar Hen Zion
To allow use of the flow steering firmware structures in more places in the driver, such as the resource tracker, move them from mcg.c to common header files.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 Aug 2012, 3 commits
-
-
By Roland Dreier
 - Use kcalloc() / vzalloc() instead of an extra bitmap_zero().
 - Add __GFP_NOWARN to kcalloc() since we'll try vzalloc() if it fails.

Signed-off-by: Roland Dreier <roland@purestorage.com>
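A minimal sketch of the allocation pattern described above; the helper name and bitmap sizing are illustrative, not the driver's actual code:

#include <linux/bitops.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Try a physically contiguous, zeroed allocation first; fall back to
 * vzalloc() when it fails.  __GFP_NOWARN suppresses the allocation-failure
 * warning, since a failure here is expected and handled. */
static unsigned long *alloc_bits(unsigned int nbits)
{
	size_t words = BITS_TO_LONGS(nbits);
	unsigned long *bits;

	bits = kcalloc(words, sizeof(long), GFP_KERNEL | __GFP_NOWARN);
	if (!bits)
		bits = vzalloc(words * sizeof(long));
	return bits;	/* both paths return zeroed memory */
}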
-
By Yishai Hadas
Fix some issues around int variables used in data structures related to memory registration. Handle int overflow in mlx4_init_icm_table by using a u64 intermediate variable and changing the struct mlx4_icm_table num_obj field to be u32. Change some more fields/variables to use u32 instead of int to prevent a case where the variable becomes negative when bit 31 is set. Also subtract log_mtts_per_seg from the exponent when computing num_mtt, since it's added later on in that very same code area. This and the previous commit fix some issues which actually prevent commit db5a7a65 ("mlx4_core: Scale size of MTT table with system RAM") from working. Now, when the number of MTTs is scaled with the size of the RAM, we can map up to 8TB.

Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
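A sketch of the overflow-avoidance idea with illustrative names: keep the object count unsigned and widen the multiplication to 64 bits before it happens:

#include <linux/types.h>

/* num_obj was previously a signed int; as a u32 it cannot go negative when
 * bit 31 is set, and casting before the multiply keeps the product in u64
 * so it cannot wrap. */
static u64 icm_table_bytes(u32 num_obj, u32 obj_size)
{
	return (u64)num_obj * obj_size;
}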
-
By Yishai Hadas
mlx4_buddy_init uses kmalloc() to allocate bitmaps, which fails when the required size is beyond the max supported value (or when memory is too fragmented to handle a huge allocation). Extend this to use vmalloc() if kmalloc() fails, and take that into account when freeing the bitmaps as well. This fixes a driver load failure when log num mtt is 26 or higher, and is a step in the direction of allowing huge amounts of memory to be registered on large-memory systems.

Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
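A sketch of the kmalloc()-then-vmalloc() fallback and its matching free path; the helper names are illustrative, and is_vmalloc_addr() is used to pick the right free routine:

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static unsigned long *alloc_bitmap(size_t nwords)
{
	unsigned long *bm = kmalloc(nwords * sizeof(long),
				    GFP_KERNEL | __GFP_NOWARN);

	if (!bm)
		bm = vmalloc(nwords * sizeof(long));	/* huge/fragmented case */
	return bm;
}

static void free_bitmap(unsigned long *bm)
{
	if (is_vmalloc_addr(bm))
		vfree(bm);
	else
		kfree(bm);
}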
-
- 15 Aug 2012, 1 commit
-
-
By Julia Lawall
Convert a 0 error return code to a negative one, as returned elsewhere in the function. A simplified version of the semantic match that finds this problem is as follows (http://coccinelle.lip6.fr/):

// <smpl>
@@
identifier ret; expression e,e1,e2,e3,e4,x;
@@

(
if (\(ret != 0\|ret < 0\) || ...) { ... return ...; }
|
ret = 0
)
... when != ret = e1
*x = \(kmalloc\|kzalloc\|kcalloc\|devm_kzalloc\|ioremap\|ioremap_nocache\|devm_ioremap\|devm_ioremap_nocache\)(...);
... when != x = e2
    when != ret = e3
*if (x == NULL || ...)
{
  ... when != ret = e4
*  return ret;
}
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 Aug 2012, 3 commits
-
-
By Yevgeny Petrilin
The Port1=Eth, Port2=IB restriction is no longer required. With RoCE, there will always be an RDMA port initialized over each ConnectX physical port, no matter whether the link layer is IB or Ethernet, so we always have a dual-port IB device.

Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Yevgeny Petrilin
Remove the ring->blocked flag; it is redundant and leads to a race: we close the TX queue and then set the "blocked" flag. Between those two operations the completion function can check the "blocked" flag, see that it is 0, and not reopen the TX queue. Use netif_tx_queue_stopped to check the state of the queue instead, which avoids this race.

Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
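A sketch of the race-free pattern; the threshold and helper name are illustrative:

#include <linux/netdevice.h>

#define WAKE_THRESHOLD 64	/* illustrative value */

/* The completion handler asks the stack whether the queue is actually
 * stopped instead of consulting a driver-private "blocked" flag, so the
 * flag and the real queue state can never disagree. */
static void tx_completion_done(struct netdev_queue *txq, int free_slots)
{
	if (netif_tx_queue_stopped(txq) && free_slots > WAKE_THRESHOLD)
		netif_tx_wake_queue(txq);
}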
-
By Amir Vadai
Should NOT check SMAC=DMAC when:
1. loopback is turned on
2. validate_loopback is true
Fixed it accordingly.

Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Jul 2012, 2 commits
-
-
By Amir Vadai
An RFS filter id can't have the special value RPS_NO_FILTER, so it needs to be skipped when allocating ids.

CC: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
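A sketch of one way to hand out ids that can never collide with the reserved value; the helper is illustrative, not the driver's exact allocation code:

#include <linux/netdevice.h>	/* RPS_NO_FILTER, with CONFIG_RFS_ACCEL */

/* Wrapping the counter modulo RPS_NO_FILTER (0xffff) means the returned
 * id is always in 0..0xfffe, so the reserved value is skipped. */
static u16 next_filter_id(u16 *last_id)
{
	*last_id = (*last_id + 1) % RPS_NO_FILTER;
	return *last_id;
}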
-
By Kleber Sacilotto de Souza
Currently the mlx4 drivers don't have the necessary callbacks to implement EEH error detection and recovery, so the PCI layer uses the probe and remove callbacks to try to recover the device after an error on the bus. However, these callbacks race with the internal catastrophic error recovery functions, which will also detect the error, and this can cause the system to crash if both the EEH and catas functions try to reset the device. This patch adds the necessary error recovery callbacks and makes sure that the internal catastrophic error functions will not try to reset the device in such scenarios. It also adds some calls to pci_channel_offline() to suppress reads/writes on the bus when the slot cannot accept I/O operations, so we prevent unnecessary accesses to the bus and speed up device removal.

Signed-off-by: Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com>
Acked-by: Shlomo Pongratz <shlomop@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
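A sketch of the shape of such error-recovery hooks; the callback names and return policy here are illustrative, not the exact mlx4 implementation:

#include <linux/io.h>
#include <linux/pci.h>

/* EEH/AER hooks: report that a reset is needed, then declare recovery done
 * after the slot reset.  The internal catas code must check the same state
 * so that only one path resets the device. */
static pci_ers_result_t sketch_err_detected(struct pci_dev *pdev,
					    pci_channel_state_t state)
{
	return PCI_ERS_RESULT_NEED_RESET;
}

static pci_ers_result_t sketch_slot_reset(struct pci_dev *pdev)
{
	return PCI_ERS_RESULT_RECOVERED;
}

static const struct pci_error_handlers sketch_err_handlers = {
	.error_detected	= sketch_err_detected,
	.slot_reset	= sketch_slot_reset,
};

/* I/O paths can skip bus accesses while the slot is offline: */
static u32 sketch_safe_readl(struct pci_dev *pdev, void __iomem *addr)
{
	if (pci_channel_offline(pdev))
		return (u32)~0;
	return readl(addr);
}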
-
- 20 Jul 2012, 1 commit
-
-
By Thadeu Lima de Souza Cascardo
In its receive path, the mlx4_en driver maps each page chunk that it pushes to the hardware and unmaps it when pushing it up the stack. This limits throughput to about 3Gbps on a Power7 8-core machine. One solution is to map the entire allocated page at once. However, this requires that we keep track of every page fragment we give to a descriptor. We also need to work with the discipline that all fragments will be released (in the sense that they will not be reused by the driver anymore) in the order they are allocated to the driver. This requires that we don't reuse any fragments: every single one of them must be reallocated. We do that by releasing all the fragments that are processed, and only after we have finished processing the descriptors do we start the refill. We also must somehow guarantee that we either refill all fragments in a descriptor or none at all, without resorting to giving up a page fragment that we would have already given. Otherwise, we would break the discipline of only releasing the fragments in the order they were allocated. This has passed page allocation fault injection (restricted to the driver by using required-start and required-end) and device hotplug while 16 TCP streams were able to deliver more than 9Gbps.

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 Jul 2012, 3 commits
-
-
By Amir Vadai
Use the RFS infrastructure and flow steering in HW to keep CPU affinity of rx interrupts and application per TCP stream. A flow steering filter is added to the HW whenever the RFS ndo callback is invoked by core networking code. Because the invocation takes place in interrupt context, the actual setup of HW is done using a workqueue. Whenever a new filter is added, the driver checks for expiry of existing filters. Since there's a window in time between the point where the core RFS code invoked the ndo callback and the point where the HW is configured from the workqueue context, the 2nd, 3rd etc. packets from that stream will cause the net core to invoke the callback again and again. To prevent inefficient/double configuration of the HW, the filters are kept in a database which is indexed using a hash function to enable fast access.

Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
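A rough sketch of the split between the atomic ndo callback and the deferred HW programming; structure and function names are illustrative, and the real driver additionally keeps the filters in a hashed database and returns the allocated filter id:

#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct sketch_filter {
	struct work_struct work;
	u16 rxq_index;
	u32 flow_id;
	/* the parsed 5-tuple would be stored here as well */
};

static void sketch_filter_work(struct work_struct *work)
{
	struct sketch_filter *f = container_of(work, struct sketch_filter, work);

	/* program the HW steering rule for f->flow_id from process context,
	 * and expire stale filters while we are at it */
	kfree(f);
}

/* ndo_rx_flow_steer() runs in interrupt context, so it only records the
 * request and queues the work. */
static int sketch_rx_flow_steer(struct net_device *dev,
				const struct sk_buff *skb,
				u16 rxq_index, u32 flow_id)
{
	struct sketch_filter *f = kzalloc(sizeof(*f), GFP_ATOMIC);

	if (!f)
		return -ENOMEM;
	f->rxq_index = rxq_index;
	f->flow_id = flow_id;
	INIT_WORK(&f->work, sketch_filter_work);
	schedule_work(&f->work);
	return 0;	/* the real callback returns the new filter id */
}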
-
By Amir Vadai
Enable callers of mlx4_assign_eq to supply a pointer to a cpu_rmap. If supplied, the assigned IRQ is tracked using the rmap infrastructure.

Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
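A sketch of the optional-rmap idea, assuming the cpu_rmap API from <linux/cpu_rmap.h>; the helper name is illustrative:

#include <linux/cpu_rmap.h>

/* When the caller supplies a cpu_rmap, register the newly assigned IRQ with
 * it so the reverse map tracks that IRQ's CPU affinity (used by accelerated
 * RFS).  Passing NULL keeps the old behaviour. */
static int track_assigned_irq(int irq, struct cpu_rmap *rmap)
{
	if (!rmap)
		return 0;
	return irq_cpu_rmap_add(rmap, irq);
}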
-
By Amir Vadai
Define this macro in one common place instead of duplicating it across the code.

Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 Jul 2012, 2 commits
-
-
By Dan Carpenter
We dereferenced "mclist" after the kfree().

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
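A sketch of the general fix pattern for this class of bug, with illustrative types: the _safe list iterator reads the next pointer before the current entry is freed:

#include <linux/list.h>
#include <linux/slab.h>

struct mc_entry {
	struct list_head list;
};

static void purge_list(struct list_head *head)
{
	struct mc_entry *e, *tmp;

	/* list_for_each_entry_safe() caches e->list.next in tmp, so freeing
	 * e inside the loop no longer dereferences freed memory */
	list_for_each_entry_safe(e, tmp, head, list) {
		list_del(&e->list);
		kfree(e);
	}
}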
-
By Dan Carpenter
This should be ">=" here instead of ">". MLX4_NET_TRANS_RULE_NUM is 6. We use "spec->id" as an array offset into the __rule_hw_sz[] and __sw_id_hw[] arrays, which have 6 elements.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
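A minimal sketch of the corrected bounds check; the constant value is taken from the commit text and the function is illustrative:

#include <linux/errno.h>

#define MLX4_NET_TRANS_RULE_NUM 6	/* size of __rule_hw_sz[] / __sw_id_hw[] */

/* An id used to index a 6-element array is only valid for 0..5, so the
 * check must reject the boundary value too (">=", not ">"). */
static int check_rule_id(unsigned int id)
{
	if (id >= MLX4_NET_TRANS_RULE_NUM)
		return -EINVAL;
	return 0;
}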
-
- 12 Jul 2012, 5 commits
-
-
By Jack Morgenstein
To allow easy paravirtualization of P_Key and GID table sizes, keep paravirtualized sizes in mlx4_dev->caps, but save the actual physical sizes from FW in struct mlx4_dev->phys_cap.

In addition, in SR-IOV mode, do the following:

1. Reduce the reported P_Key table size by 1. This is done to reserve the highest P_Key index for internal use, for declaring an invalid P_Key in P_Key paravirtualization. We require a P_Key index which always contains an invalid P_Key value for this purpose (i.e., one which cannot be modified by the subnet manager). The way to do this is to reduce the P_Key table size reported to the subnet manager by 1, so that it will not attempt to access the P_Key at index #127.

2. Paravirtualize the GID table size to 1. Thus, each guest sees only a single GID (at its paravirtualized index 0). In addition, since we are paravirtualizing the GID table size to 1, we add paravirtualization of the master GID event here (i.e., we do not do ib_dispatch_event() for the GUID change event on the master, since its (only) GUID never changes).

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Jack Morgenstein
Modify mlx4_dev_cap to allow IB support when SR-IOV is active. Modify mlx4_slave_cap to set the "rdma-supported" bit in its flags area, and pass that to the guests (this is done in QUERY_FUNC_CAP and its wrapper). However, we don't activate IB support quite yet -- we leave the error return at the start of mlx4_ib_add in the mlx4_ib driver. In addition, set the "protected fmr supported" bit to zero in the QUERY_FUNC_CAP wrapper. Finally, in the QUERY_FUNC_CAP wrapper, we needed to add code which checks for the port type (IB or Ethernet). Previously, this was not an issue, since only Ethernet ports were supported.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Jack Morgenstein
The SR-IOV special QP tunneling mechanism uses proxy special QPs (instead of the real special QPs) for MADs on guests. These proxy QPs send their packets to a "tunnel" QP owned by the master. The master then forwards the MAD (after any required paravirtualization) to the real special QP, which sends out the MAD.

For security reasons (i.e., to prevent guests from sending MADs to tunnel QPs belonging to other guests), each proxy-tunnel QP pair is assigned a unique, reserved, Q_Key. These Q_Keys are available only for proxy and tunnel QPs -- if the guest tries to use these Q_Keys with other QPs, it will fail.

This patch introduces a mechanism for reserving a block of 64K Q_Keys for proxy/tunneling use. The patch also introduces two new fields in mlx4_dev: base_sqpn and base_tunnel_sqpn.

In SR-IOV mode, the QP numbers for the "real," proxy, and tunnel sqps are added to the reserved QPN area (so that they will not change). There are 8 special QPs per port in the HCA, and each of them is assigned both a proxy and a tunnel QP, for each VF and for the PF as well in SR-IOV mode. The QPNs for these QPs are arranged as follows:

1. The real SQP numbers (8)
2. The proxy SQPs (8 * (max number of VFs + max number of PFs))
3. The tunnel SQPs (8 * (max number of VFs + max number of PFs))

To support these QPs, two new fields are added to struct mlx4_dev:

base_sqp: this is the QP number of the first of the real SQPs
base_tunnel_sqp: this is the QP number of the first QP in the tunnel SQP region. (On guests, this is the first tunnel SQP of the 8 which are assigned to that guest.)

In addition, in SR-IOV mode, sqp_start is the number of the first proxy SQP in the proxy SQP region. (In guests, this is the first proxy SQP of the 8 which are assigned to that guest.)

Note that in non-SR-IOV mode, there are no proxies and no tunnels. In this case, sqp_start is set to sqp_base -- which minimizes code changes.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Dotan Barak
In mlx4_init_icm_table(), free the allocated table if we failed to allocate memory for its entries.

Signed-off-by: Dotan Barak <dotanb@dev.mellanox.co.il>
Reviewed-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Dotan Barak
Spotted four duplicate declarations in icm.h; remove them.

Signed-off-by: Dotan Barak <dotanb@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
- 11 Jul 2012, 2 commits
-
-
By Jack Morgenstein
With IB SR-IOV, each slave has its own separate copy of the port capabilities flags. For example, the master can run a subnet manager (which causes the IsSM bit to be set in the master's port capabilities) without affecting the port capabilities seen by the slaves (the IsSM bit will be seen as cleared in the slaves). Also add a static inline mlx4_master_func_num() to enhance readability of the code.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Jack Morgenstein
The port management change event can replace smp_snoop. If the capability bit for this event is set in dev-caps, the event is used (by the driver setting the PORT_MNG_CHG_EVENT bit in the async event mask in the MAP_EQ fw command). In this case, when the driver passes incoming SMP PORT_INFO SET mads to the FW, the FW generates port management change events to signal any changes to the driver. If the FW generates these events, smp_snoop shouldn't be invoked in ib_process_mad(), or duplicate events will occur (once from the FW-generated event, and once from smp_snoop). In the case where the FW does not generate port management change events, smp_snoop needs to be invoked to create these events. The flow in smp_snoop has been modified to make use of the same procedures as in the fw-generated-event case to generate the port management events (LID change, Client-rereg, Pkey change, and/or GID change). Port management change event handling required changing the mlx4_ib_event and mlx4_dispatch_event prototypes; the "param" argument (last argument) had to be changed to unsigned long in order to accommodate passing the EQE pointer. We also needed to move the definition of struct mlx4_eqe from net/mlx4.h to file device.h -- to make it available to the IB driver, to handle port management change events.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
- 09 Jul 2012, 1 commit
-
-
By Jack Morgenstein
Currently, VFs have 0 in their dev->caps.function field. This is a valid pci id (usually of the PF). Instead, pass an invalid PCI id to the VF via QUERY_FW, so that if the value gets accessed in the VF driver, we'll catch the problem.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
- 08 Jul 2012, 10 commits
-
-
By Hadar Hen Zion
The drop action is implemented by allocating a QP and keeping it in a reset state such that the HW drops any packets which are steered to that QP. When a drop action is requested, we attach the relevant flow to that QP.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Hadar Hen Zion
Implement the ethtool APIs for attaching L2/L3/L4 based flow steering rules to the netdevice RX rings. Added a set_rxnfc callback and enhanced the existing get_rxnfc callback.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Hadar Hen Zion
The device managed flow steering API has three promiscuous modes:

1. Uplink - captures all the packets that arrive to the port.
2. Allmulti - captures all multicast packets arriving to the port.
3. Function port - for future use; this mode is not implemented yet.

Use these modes with the flow_attach and flow_detach firmware commands according to the promiscuous state of the netdevice.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Hadar Hen Zion
As with other device resources, the resource tracker is needed for supporting device managed flow steering rules under SR-IOV: make sure virtual functions delete only rules created by them, and clean up all rules attached by a crashed VF.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Hadar Hen Zion
The driver is modified to support three operation modes. If supported by firmware, use the device managed flow steering API, which we call device managed steering mode. Else, if the firmware supports the B0 steering mode, use it; and finally, if none of the above, use the A0 steering mode. When the steering mode is device managed, the code is modified such that L2 based rules set by the mlx4_en driver for Ethernet unicast and multicast, and the IB stack multicast attach calls done through the mlx4_ib driver, are all routed to use the device managed API. When attaching a rule using the device managed flow steering API, the firmware returns a 64 bit registration id, which is to be provided during detach. Currently the firmware is always programmed during HCA initialization to use standard L2 hashing. Future work should be done to allow configuring the flow-steering hash function with common, non proprietary means.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Hadar Hen Zion
Add support for firmware commands to attach/detach a new device managed steering mode. Such network steering rules allow the user to provide an L2/L3/L4 flow specification to the firmware and have the device steer traffic that matches that specification to the provided QP.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Hadar Hen Zion
Instead of checking the firmware supported steering mode in various places in the code, add a dedicated field in the mlx4 device capabilities structure which is written once during the initialization flow and read across the code. This also sets the grounds for adding new steering modes. Currently two modes are supported, named after the ConnectX HW versions A0 and B0. A0 steering uses mac_index, vlan_index and priority to steer traffic into a pre-defined range of QPs. B0 steering uses Ethernet L2 hashing rules and is enabled only if the firmware supports both unicast and multicast B0 steering. The current steering modes are relevant for Ethernet traffic only, such that InfiniBand steering remains untouched.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Yevgeny Petrilin
Currently, for every change in the net device multicast list, the driver detaches all the addresses from the HW device and then attaches the updated list. This behavior is wrong in two aspects: first, it causes a load of firmware commands, and second, there is a period of time where the correct addresses are not attached, which turns into packet loss. To improve this, a copy of the multicast list is saved by the driver. For every change in the multicast list, the copy is used to find the delta between the two lists and add or remove multicast addresses as needed.

Reported-by: Shawn Bohrer <sbohrer@rgmadvisors.com>
Cc: Shawn Bohrer <sbohrer@rgmadvisors.com>
Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
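A sketch of the delta idea with illustrative types; attach()/detach() stand in for the firmware commands, and updating the saved copy afterwards is omitted for brevity:

#include <linux/etherdevice.h>
#include <linux/list.h>
#include <linux/types.h>

struct mc_addr {
	struct list_head list;
	unsigned char addr[ETH_ALEN];
};

static bool addr_on_list(struct list_head *head, const unsigned char *addr)
{
	struct mc_addr *e;

	list_for_each_entry(e, head, list)
		if (ether_addr_equal(e->addr, addr))
			return true;
	return false;
}

/* Compare the saved copy with the new list and only act on the differences,
 * instead of detaching everything and re-attaching the whole list. */
static void sync_mc_lists(struct list_head *saved, struct list_head *wanted,
			  void (*attach)(const unsigned char *),
			  void (*detach)(const unsigned char *))
{
	struct mc_addr *e;

	list_for_each_entry(e, wanted, list)
		if (!addr_on_list(saved, e->addr))
			attach(e->addr);	/* newly added address */

	list_for_each_entry(e, saved, list)
		if (!addr_on_list(wanted, e->addr))
			detach(e->addr);	/* address that was removed */
}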
-
By Hadar Hen Zion
Currently the IDs used by the resource tracker are of type u32; so far this was OK since all the different resources we were tracking could be encoded in 32 bits. As a preparation step for tracking of resources whose IDs need more than 32 bits, such as network flow steering rules, which are 64 bits in size, move to using 64-bit resource IDs.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Hadar Hen Zion
Change the data structure used for managing the SR-IOV resource tracking mechanism from a radix tree to a red-black tree. This is a preparation step for supporting resource IDs which are 64 bits long, such as network flow steering rules. Such IDs can't be used as radix-tree keys on 32-bit architectures, hence the reason for the change.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
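A sketch of why the switch helps: a red-black tree node can be keyed by a full 64-bit id, whereas a radix-tree index is an unsigned long (only 32 bits on 32-bit architectures). Illustrative lookup and insert:

#include <linux/rbtree.h>
#include <linux/types.h>

struct res_node {
	struct rb_node node;
	u64 res_id;		/* full 64-bit key, fine on 32-bit kernels */
};

static struct res_node *res_find(struct rb_root *root, u64 id)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct res_node *r = rb_entry(n, struct res_node, node);

		if (id < r->res_id)
			n = n->rb_left;
		else if (id > r->res_id)
			n = n->rb_right;
		else
			return r;
	}
	return NULL;
}

static void res_insert(struct rb_root *root, struct res_node *new)
{
	struct rb_node **p = &root->rb_node, *parent = NULL;

	while (*p) {
		struct res_node *r = rb_entry(*p, struct res_node, node);

		parent = *p;
		if (new->res_id < r->res_id)
			p = &(*p)->rb_left;
		else
			p = &(*p)->rb_right;
	}
	rb_link_node(&new->node, parent, p);
	rb_insert_color(&new->node, root);
}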
-
- 05 Jul 2012, 1 commit
-
-
By Yuval Mintz
Signed-off-by: Yuval Mintz <yuvalmin@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Jun 2012, 3 commits
-
-
By Yevgeny Petrilin
Add a missing resource release in ring cleanup. Not doing this leaves a range of QPs reserved that no one can use.

Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Yevgeny Petrilin
Fix a crash in the error flow of the NOP command, which caused the driver to try to use a completion vector which wasn't allocated.

Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Yevgeny Petrilin
Set valid port parameters (MTU and flow control configuration) when configuring the port during HW device initialization, prior to the net device open() being called. Using invalid parameters (such as all zeros) could lead to bad firmware behavior.

Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 Jun 2012, 2 commits
-
-
By Jack Morgenstein
Commit 096335b3 ("mlx4_core: Allow dynamic MTU configuration for IB ports") modifies the port VL setting. This exposes a bug in mlx4_common_set_port(), where the VL cap value passed in (inside the command mailbox) is incorrectly zeroed out: mlx4_SET_PORT modifies the VL_cap field (byte 3 of the mailbox). Since the SET_PORT command is paravirtualized on the master as well as on the slaves, mlx4_SET_PORT_wrapper() is invoked on the master. This calls mlx4_common_set_port(), where mailbox byte 3 gets overwritten by code which should only set a single bit in that byte (for the reset qkey counter flag) -- but instead overwrites the entire byte. The result is that when running in SR-IOV mode, the VL_cap will be set to zero -- fix this.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
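A sketch of the bug class; the bit value is illustrative, not the actual mailbox layout:

#include <linux/types.h>

#define RESET_QKEY_COUNTER_BIT	0x40	/* illustrative bit position */

/* Byte 3 of the mailbox holds both the VL cap and the reset-qkey-counter
 * flag, so setting the flag must OR in a single bit rather than assigning
 * the whole byte (which would zero the VL cap). */
static void set_reset_qkey_flag(u8 *mailbox_byte3)
{
	*mailbox_byte3 |= RESET_QKEY_COUNTER_BIT;
	/* wrong: *mailbox_byte3 = RESET_QKEY_COUNTER_BIT; clobbers VL cap */
}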
-
By Joe Perches
Adding casts of objects to the same type is unnecessary and confusing for a human reader. For example, this cast:

	int y;
	int *p = (int *)&y;

I used the coccinelle script below to find and remove these unnecessary casts. I manually removed the conversions this script produces of casts with __force, __iomem and __user.

@@
type T;
T *p;
@@

-	(T *)p
+	p

A function in atl1e_main.c was passed a const pointer when it actually modified elements of the structure. Change the argument to a non-const pointer. A function in stmmac needed a __force to avoid a sparse warning. Added it.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-