- 09 January 2017, 18 commits
-
-
Submitted by David Howells
A static checker warning occurs in the AFS filesystem: fs/afs/cmservice.c:155 SRXAFSCB_CallBack() error: dereferencing freed memory 'call' due to the reply being sent before we access the server it points to. The act of sending the reply causes the call to be freed if an error occurs (but not if it doesn't). On top of this, the lifetime handling of afs_call structs is fragile because they get passed around through workqueues without any sort of refcounting. Deal with the issues by:
(1) Fix the maybe/maybe-not nature of the reply-sending functions with regard to whether they release the call struct.
(2) Refcount the afs_call struct and sort out places that need to get/put references.
(3) Pass a ref through the work queue and release (or pass on) that ref in the work function. Care has to be taken because a work queue may already own a ref to the call.
(4) Do the cleaning up in the put function only.
(5) Simplify module cleanup by always incrementing afs_outstanding_calls whenever a call is allocated.
(6) Set the backlog to 0 with kernel_listen() at the beginning of the process of closing the socket, to prevent new incoming calls from occurring and to remove the contribution of preallocated calls from afs_outstanding_calls before we wait on it.
A tracepoint is also added to monitor the afs_call refcount and lifetime. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David Howells <dhowells@redhat.com> Fixes: 08e0e7c8: "[AF_RXRPC]: Make the in-kernel AFS filesystem use AF_RXRPC."
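A minimal sketch of the refcount-through-workqueue pattern described above; the names and layout are hypothetical stand-ins, not the actual fs/afs code, and INIT_WORK() is assumed to have been done at allocation time.

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_call {
	atomic_t usage;			/* refcount on the call */
	struct work_struct work;	/* work item that processes it */
};

static void example_call_put(struct example_call *call)
{
	if (atomic_dec_and_test(&call->usage))
		kfree(call);		/* cleanup happens here and only here */
}

static void example_call_worker(struct work_struct *work)
{
	struct example_call *call =
		container_of(work, struct example_call, work);

	/* ... process the call ... */
	example_call_put(call);		/* drop the ref passed via the queue */
}

static void example_call_queue(struct example_call *call)
{
	atomic_inc(&call->usage);	/* a ref travels with the work item */
	if (!queue_work(system_wq, &call->work))
		example_call_put(call);	/* already queued: drop our extra ref */
}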
-
Submitted by David Howells
Allow listen() with a backlog of 0 to be used to disable listening on an AF_RXRPC socket. This also releases any preallocation, thereby making it easier for a kernel service to account for all allocated call structures when shutting down the service. The socket cannot thereafter have listening re-enabled, but must rather be closed and reopened. Signed-off-by: David Howells <dhowells@redhat.com>
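For the kernel-service side, a rough sketch of how listening might be shut off, assuming an already-established rxrpc listen socket (this is an illustration, not the actual AFS shutdown code):

#include <linux/net.h>

/* Backlog 0 disables listening and releases preallocated call slots;
 * listening cannot be re-enabled on this socket afterwards, so it would
 * then be closed and reopened if the service needs to listen again.
 */
static void example_stop_listening(struct socket *rxrpc_sock)
{
	kernel_listen(rxrpc_sock, 0);
}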
-
Submitted by David Howells
The afs_wait_mode struct isn't really necessary. Client calls only use one of two modes (synchronous or asynchronous) and incoming calls don't use the wait at all. Replace it with a boolean parameter. Signed-off-by: David Howells <dhowells@redhat.com>
-
Submitted by David Howells
Add three tracepoints to the AFS filesystem:
(1) The afs_recv_data tracepoint logs data segments that are extracted from the data received from the peer through afs_extract_data().
(2) The afs_notify_call tracepoint logs notification from AF_RXRPC of data coming in to an asynchronous call.
(3) The afs_cb_call tracepoint logs incoming calls that have had their operation ID extracted and mapped into a supported cache manager service call.
To make (3) work, the name strings in the afs_call_type struct objects have to be annotated with __tracepoint_string. This is done with the CM_NAME() macro. Further, the AFS call state enum needs a name so that it can be used to declare parameter types. Signed-off-by: David Howells <dhowells@redhat.com>
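A hedged sketch of what such an annotation can look like; the macro name, the string prefix and the exact header providing __tracepoint_string are assumptions, not the real CM_NAME() definition:

#include <linux/ftrace.h>	/* assumed location of __tracepoint_string */

/* Place the operation-name string in the tracepoint string section so that
 * trace events can reference it by address.
 */
#define EXAMPLE_CM_NAME(op) \
	static const char example_##op##_name[] __tracepoint_string = "CB." #op

EXAMPLE_CM_NAME(CallBack);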
-
Submitted by David S. Miller
Willem de Bruijn says:
====================
convert tc_verd to integer bitfields
The skb tc_verd field takes up two bytes but uses far fewer bits. Convert the remaining use cases to bitfields that fit in existing holes (depending on config options) and potentially save the two bytes in struct sk_buff. This patchset is based on an earlier set by Florian Westphal and its discussion (http://www.spinics.net/lists/netdev/msg329181.html).
Patches 1 and 2 are low-hanging fruit: removing the last traces of data that are no longer stored in tc_verd.
Patches 3 and 4 convert tc_verd to individual bitfields (5 bits).
Patch 5 reduces TC_AT to a single bitfield, as AT_STACK is not valid here (unlike in the case of TC_FROM).
Patch 6 changes TC_FROM to two bitfields with clearly defined purpose.
It may be possible to reduce storage further after this initial round. If tc_skip_classify is set only by IFB, testing skb_iif may suffice. The L2 header pushing/popping logic can perhaps be shared with AF_PACKET, which currently uses pkt_type for the same purpose.
Changes RFC -> v1:
- (patch 3): remove no longer needed label in tcf_action_exec
- (patch 5): set tc_at_ingress at the same points as existing SET_TC_AT calls
Tested ingress mirred + netem + ifb:
ip link set dev ifb0 up
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: \
  u32 match ip dport 8000 0xffff \
  action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root netem delay 1000ms
nc -u -l 8000 &
ssh $otherhost nc -u $host 8000
Tested egress mirred:
ip link add veth1 type veth peer name veth2
ip link set dev veth1 up
ip link set dev veth2 up
tcpdump -n -i veth2 udp and dst port 8000 &
tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1:0 \
  u32 match ip dport 8000 0xffff \
  action mirred egress redirect dev veth1
tc qdisc add dev veth1 root netem delay 1000ms
nc -u $otherhost 8000
Tested ingress mirred:
ip link add veth1 type veth peer name veth2
ip link add veth3 type veth peer name veth4
ip netns add ns0
ip netns add ns1
for i in 1 2 3 4; do \
  NS=ns$((${i}%2)); \
  ip link set dev veth${i} netns ${NS}; \
  ip netns exec ${NS} \
    ip addr add dev veth${i} 192.168.1.${i}/24; \
  ip netns exec ${NS} \
    ip link set dev veth${i} up; \
done
ip netns exec ns0 tc qdisc add dev veth2 ingress
ip netns exec ns0 \
  tc filter add dev veth2 parent ffff: \
  u32 match ip dport 8000 0xffff \
  action mirred ingress redirect dev veth4
ip netns exec ns0 \
  tcpdump -n -i veth4 udp and dst port 8000 &
ip netns exec ns1 \
  nc -u 192.168.1.2 8000
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Willem de Bruijn
The tc_from field fulfills two roles. It encodes whether a packet was redirected by an act_mirred device and, if so, whether act_mirred was called on ingress or egress. Split it into separate fields. The information is needed by the special IFB loop, where packets are taken out of the normal path by act_mirred, forwarded to IFB, then reinjected at their original location (ingress or egress) by IFB. The IFB device cannot use skb->tc_at_ingress, because that may have been overwritten as the packet travels from act_mirred to ifb_xmit, when it passes through tc_classify on the IFB egress path. Cache this value in skb->tc_from_ingress. That field is valid only if a packet arriving at ifb_xmit came from act_mirred. Other packets can be crafted to reach ifb_xmit. These must be dropped. Set tc_redirected on redirection and drop all packets that do not have this bit set. Both fields are set only on cloned skbs in tc actions, so original packet sources do not have to clear the bit when reusing packets (notably, pktgen and octeon). Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
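A simplified sketch of the gate this enables in the IFB transmit path; the struct below is a stand-in for the two one-bit fields named above, not the real struct sk_buff layout:

#include <linux/types.h>

struct example_skb_tc_bits {
	unsigned int tc_redirected:1;	/* set when act_mirred redirects */
	unsigned int tc_from_ingress:1;	/* valid only if tc_redirected is set */
};

/* IFB accepts only packets that really came via act_mirred redirection;
 * crafted packets that reach its xmit handler lack the bit and are dropped.
 */
static bool example_ifb_accepts(const struct example_skb_tc_bits *bits)
{
	return bits->tc_redirected;
}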
-
Submitted by Willem de Bruijn
Field tc_at is used only within tc actions to distinguish ingress from egress processing. A single bit is sufficient for this purpose. Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Willem de Bruijn
Extract the remaining two fields from tc_verd and remove the __u16 completely. TC_AT and TC_FROM are converted to the equivalent two-bit integer fields tc_at and tc_from. Where possible, use the existing helper skb_at_tc_ingress when reading tc_at. Introduce the helper skb_reset_tc to clear the fields. Not documenting tc_from and tc_at, because they will be replaced with single-bit fields in follow-on patches. Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Willem de Bruijn
Packets sent by the IFB device skip subsequent tc classification. A single bit governs this state. Move it out of tc_verd in anticipation of removing that __u16 completely. The new bitfield tc_skip_classify temporarily uses one bit of a hole, until tc_verd is removed completely in a follow-up patch. Remove the bit-hole comment: it could be 2, 3, 4 or 5 bits long, and with that many options there is little value in documenting it. Introduce a helper function to deduplicate the logic in the two sites that check this bit. The field tc_skip_classify is set only in IFB on skbs cloned in act_mirred, so original packet sources do not have to clear the bit when reusing packets (notably, pktgen and octeon). Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
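A minimal sketch of the deduplicating helper described here, written against a hypothetical stand-in struct rather than the real struct sk_buff:

#include <linux/types.h>

struct example_skb {
	unsigned int tc_skip_classify:1;
};

/* Consume the skip marker set by IFB: report it once and clear it so that
 * later passes through the classifier behave normally.
 */
static bool example_skip_tc_classify(struct example_skb *skb)
{
	if (skb->tc_skip_classify) {
		skb->tc_skip_classify = 0;
		return true;
	}
	return false;
}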
-
Submitted by Willem de Bruijn
This field is no longer kept in tc_verd. Remove it from the global definition of that struct. Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Willem de Bruijn
Remove the last reference to tc_verd's munge and redirect ttl bits. These fields are no longer used. Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Florian Fainelli
While it is useful to know which MDIO device is being registered, demote the dev_info() to a dev_dbg(). Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by stephen hemminger
In dev_get_stats() the statistics structure storage has already been zeroed. Therefore network drivers do not need to call memset() again. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by stephen hemminger
The network device operation for reading statistics is only called in one place, and it ignores the return value. Having a structure return value is potentially confusing because some future driver could incorrectly assume that the return value was used. Fix all drivers with ndo_get_stats64 to have a void function. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
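Taken together with the memset note above, a driver's stats op might end up looking like this sketch (the device counters are placeholders; this is not any specific driver's code):

#include <linux/netdevice.h>

/* dev_get_stats() has already zeroed @stats and nobody consumed the old
 * return value, so the op is void and only fills in what it tracks.
 */
static void example_get_stats64(struct net_device *dev,
				struct rtnl_link_stats64 *stats)
{
	/* no memset(stats, 0, sizeof(*stats)) needed here */
	stats->rx_packets = 0;		/* placeholder: read from hw counters */
	stats->tx_packets = 0;		/* placeholder: read from hw counters */
}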
-
Submitted by David S. Miller (git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue)
Jeff Kirsher says:
====================
100GbE Intel Wired LAN Driver Updates 2017-01-08
This series contains updates to fm10k only. Ngai-Mint changes the driver to use the MAC pointer in the fm10k_mac_info structure for fm10k_get_host_state_generic(). Fixed a race condition where the mailbox interrupt request bits can be cleared before being handled, causing certain mailbox messages from the PF to go unhandled and the PF to enter an inactive state. Jake removes the typecast of u8 to char, and the extra variable that was created for the typecast. Bumps the driver version. Added back the receive descriptor timestamp value so that applications built on top of the IES API can function properly. Cleaned up the debug statistics flag, since debug statistics were removed and the flag was missed in the removal. Scott limits the DMA sync for CPU to the actual length of the packet, instead of the entire buffer, since the DMA sync occurs every time a packet is received.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David Ahern
The fl4 arg is not used; remove it. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David Ahern
ipmr_get_route has one caller and the nowait arg is 0. Remove the arg and simplify ipmr_get_route accordingly. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Derek Chickles
Because every call to octeon_flush_iq() has a hardcoded 1 for the pending_thresh argument, simplify that function by removing that argument. This avoids one atomic read as well. Signed-off-by: Derek Chickles <derek.chickles@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: Satanand Burla <satananda.burla@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 January 2017, 22 commits
-
-
Submitted by Jacob Keller
The debug statistics were removed due to complications with the ethtool statistics API which are not possible to resolve without a new statistics interface. The flag was left behind, but we no longer need it. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
This was accidentally removed when we defeatured the full 1588 clock support. We need to report the Rx descriptor timestamp value so that applications built on top of the IES API can function properly. Additionally, remove FM10K_FLAG_RX_TS_ENABLED, as it is not used now that the 1588 functionality has been removed. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Scott Peterson
On packet RX, we perform a DMA sync for CPU before passing the packet up. Here we limit that sync to the actual length of the incoming packet, rather than always syncing the entire buffer. Signed-off-by: Scott Peterson <scott.d.peterson@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
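A sketch of the narrower sync; the function and variable names are illustrative and the zero offset is an assumption about how the buffer was mapped:

#include <linux/dma-mapping.h>

/* Sync only the bytes the device actually wrote rather than the whole
 * receive buffer.
 */
static void example_sync_rx_for_cpu(struct device *dev, dma_addr_t buf_dma,
				    unsigned int pkt_len)
{
	dma_sync_single_range_for_cpu(dev, buf_dma, 0, pkt_len,
				      DMA_FROM_DEVICE);
}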
-
Submitted by Jacob Keller
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Ngai-Mint Kwan
Partially revert commit 5e93cbad ("fm10k: Reset mailbox global interrupts", 2016-06-07). The register bits related to this commit are now solely being handled by the IES API. Recent changes in the IES API will allow an automatic recovery from improper handling of these bits. Signed-off-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Ngai-Mint Kwan
Multiple IES API resets can cause a race condition where the mailbox interrupt request bits can be cleared before being handled. This can leave certain mailbox messages from the PF untreated, and the PF will enter an inactive state. If this situation occurs, the IES API will initiate a mailbox version reset, which then triggers a mailbox state change. Once this mailbox transition occurs (from OPEN to CONNECT state), a request for reset will be returned. This ensures that the PF will undergo a reset whenever the IES API encounters an unknown global mailbox interrupt event or whenever the IES API terminates. Signed-off-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
We don't need to typecast a u8 * into a char *, so just remove the extra variable. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Ngai-Mint Kwan
Since a pointer "mac" to the fm10k_mac_info structure already exists, use it to access the contents of its members. Signed-off-by: Ngai-Mint Kwan <ngai-mint.kwan@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Vivien Didelot
Isolate the HWMON support in DSA in its own file. Currently only the legacy DSA code is concerned. Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller
Murali Karicheri says:
====================
netcp: enhancements and minor fixes
This series is for net-next. It propagates enhancements and minor bug fixes from the internal version of the driver to keep the upstream in sync. Please review and apply if this looks good. Tested on all of the K2HK/E/L boards with an NFS rootfs. Test logs below:
K2HK-EVM: http://pastebin.ubuntu.com/23754106/
K2L-EVM: http://pastebin.ubuntu.com/23754143/
K2E-EVM: http://pastebin.ubuntu.com/23754159/
History:
v1 - dropped 1/10 and 2/10 of v0 based on comments from Rob, as they need more work before submission
v0 - initial version
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Karicheri, Muralidharan
For the NetCP NU switch ALE, some of the mask bits are different from the defaults used in the driver. Add a new macro, DEFINE_ALE_FIELD1, that uses a configurable number of mask bits, and use it in the driver. These bits are set to correct values by using the new variables added to the cpsw_ale structure and re-used in the macros. The parameter nu_switch_ale is configured by the caller driver to indicate the ALE is for that switch and is used in the ALE driver to do customization as needed. Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
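The general idea can be sketched as accessors whose field width is supplied at the call site instead of being baked into the macro; this is an illustration over a single 32-bit word, not the actual DEFINE_ALE_FIELD1 definition, which operates on multi-word ALE table entries:

#include <linux/bitops.h>
#include <linux/types.h>

/* Generate get/set helpers for a named field at a fixed shift; the width
 * in bits is passed in by the caller (valid for widths below 32).
 */
#define EXAMPLE_ALE_FIELD1(name, shift)					\
static inline u32 example_get_##name(u32 word, u32 bits)		\
{									\
	return (word >> (shift)) & (BIT(bits) - 1);			\
}									\
static inline void example_set_##name(u32 *word, u32 value, u32 bits)	\
{									\
	u32 mask = (BIT(bits) - 1) << (shift);				\
									\
	*word = (*word & ~mask) | ((value << (shift)) & mask);		\
}

EXAMPLE_ALE_FIELD1(port_num, 2);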
-
Submitted by Karicheri, Muralidharan
The ALE hardware on newer versions of NetCP (K2E/L/G) provides an ALE_STATUS register giving the size of the ALE table implemented in hardware. Currently, for example, we set the ALE table size to 1024 for the NetCP ALE on K2E even though the ALE status/documentation shows it has 8192 entries. So take advantage of this register to read the size of the ALE table supported and use that value in the driver for the newer versions of the NetCP ALE. For NetCP Lite, the ALE table size is much smaller (64) and is indicated by a size of zero in ALE_STATUS, so use that as a default for now. While at it, also fix the ALE table size on the 10G switch to 2048 per the user guide http://www.ti.com/lit/ug/spruhj5/spruhj5.pdf Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Karicheri, Muralidharan
In the NU Ethernet switch used on some of the Keystone SoCs, there are separate UNKNOWNVLAN registers for the membership, unreg mcast flood, reg mcast flood and force untag egress bits in the ALE. So control for these fields requires a different address offset, shift and field size. As this ALE has the same version number as the ALE in the CPSW found on other SoCs, customization based on version number is not possible. So use a configuration parameter, nu_switch_ale, to identify the ALE found in the NU switch. Different treatment is needed for the NU switch ALE due to differences in the ALE table bits, separate unknown VLAN registers, etc. The register information available in ale_controls needs to be updated to support the NetCP NU switch hardware, so it is no longer a constant array since it must be updated based on ALE type. The header of the file is also updated to indicate it supports N-port switch ALEs, not just 3-port. The version mask is 3 bits in the NU switch ALE vs 8 bits on other ALE types. While at it, change the debug print to an info print so that the ALE version gets displayed in the boot log. Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Karicheri, Muralidharan
Some of the newer Ethernet switch hardware (such as that on K2E/L/G) can strip the Ethernet FCS from the packet at the port 0 egress of the switch. So use this capability instead of doing it in software. Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Karicheri, Muralidharan
Currently, when parsing phy-handle, the driver doesn't check whether the interface is MAC-to-PHY. This patch adds this check for all MAC-to-PHY interface types supported by the driver. Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Michael Scherban
Previously the network statistics were stored in 32-bit variables, which can cause some stats to roll over after several minutes of high traffic. This implements 64-bit storage so larger numbers can be stored. Signed-off-by: Michael Scherban <m-scherban@ti.com> Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
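One common kernel pattern for such wider counters is the u64_stats_sync approach sketched below; this shows the general technique, not necessarily the exact mechanism used inside the netcp driver:

#include <linux/types.h>
#include <linux/u64_stats_sync.h>

struct example_rx_stats {
	u64 packets;
	u64 bytes;
	struct u64_stats_sync syncp;	/* makes 64-bit reads safe on 32-bit CPUs */
};

static void example_rx_account(struct example_rx_stats *s, unsigned int len)
{
	u64_stats_update_begin(&s->syncp);
	s->packets++;
	s->bytes += len;
	u64_stats_update_end(&s->syncp);
}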
-
Submitted by Karicheri, Muralidharan
The psdata is populated with command data by the netcp modules at the tail of the buffer, and set_words() copies the same to the front of the psdata. So remove the redundant memmove() call. Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Karicheri, Muralidharan
Extract the eflag bits from the received descriptor and pass them down the rx_hook chain to be available for the netcp modules. The psdata and epib data also have to be inspected by the netcp modules, so the descriptor can be freed only after returning from the rx_hook. So move knav_pool_desc_put() after the rx_hook processing. Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller
Grygorii Strashko says:
====================
net: ethernet: ti: cpsw: support placing CPDMA descriptors into DDR
This series is intended to add support for placing CPDMA descriptors in DDR by introducing a new module parameter "descs_pool_size" to specify the size of the descriptor pool. The "descs_pool_size" defines the total number of CPDMA CPPI descriptors to be used for both ingress and egress packet processing. If not specified, the default value of 256 will be used, which allows the descriptor pool to be placed in the internal CPPI RAM. In addition, this adds the ability to re-split the CPDMA pool of descriptors between the RX and TX paths via the ethtool '-G' command, which allows configuring and fixing the number of descriptors used by the RX and TX paths; these are then split between RX/TX channels proportionally, depending on the number of RX/TX channels and their weight. This significantly reduces the UDP packet drop rate for bandwidths >301 Mbit/s (am57x). Before enabling this feature, the am437x SoC has to be fixed, as it is proven not to work when CPDMA descriptors are placed in DDR. So patch 1 fixes this issue.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Grygorii Strashko
Even though the no_bd_ram property is described in the TI CPSW bindings, support for it was never introduced in the CPSW driver, so there are no real users of it. Hence, remove the no_bd_ram property from the documentation and DT files. Cc: 'Rob Herring <robh+dt@kernel.org>' Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Grygorii Strashko
The CPDMA uses one pool of descriptors for both RX and TX, which by default is split between all channels proportionally, depending on the total number of CPDMA channels and the number of TX and RX channels. As a result, more descriptors will be consumed by the TX path if there are more TX channels, and there is currently no way to dedicate more descriptors to the RX path. So, add the ability to re-split the CPDMA pool of descriptors between the RX and TX paths via the ethtool '-G' command, which allows configuring and fixing the number of descriptors used by the RX and TX paths; these are then split between RX/TX channels proportionally, depending on the number of RX/TX channels and their weight. The ethtool '-G' command will accept only the number of RX entries; the rest of the descriptors will be arranged for TX automatically.
Command: ethtool -G <devname> rx <number of descriptors>
Defaults and limitations:
- the minimum number of RX descriptors is 10% of the total number of descriptors in the CPDMA pool
- the maximum number of RX descriptors is 90% of the total number of descriptors in the CPDMA pool
- by default, descriptors will be split equally between the RX/TX paths
- any values passed in the "tx" parameter will be ignored
Usage:
# ethtool -g eth0
Pre-set maximums:
RX: 7372
RX Mini: 0
RX Jumbo: 0
TX: 0
Current hardware settings:
RX: 4096
RX Mini: 0
RX Jumbo: 0
TX: 4096
# ethtool -G eth0 rx 7372
# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX: 7372
RX Mini: 0
RX Jumbo: 0
TX: 0
Current hardware settings:
RX: 7372
RX Mini: 0
RX Jumbo: 0
TX: 820
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Grygorii Strashko
The CPSW CPDMA can process buffer descriptors placed either in internal CPPI RAM or in DDR. This patch adds support in CPSW and CPDMA for a descs_pool_size module parameter, which defines the total number of CPDMA CPPI descriptors to be used for both ingress and egress packet processing:
- the memory size required for the CPDMA descriptor pool is calculated based on the number of descriptors specified by the user in descs_pool_size and the CPDMA descriptor size, and is allocated from coherent memory (the CMA area);
- the CPDMA descriptor pool will be allocated in DDR if the pool memory size is greater than the internal CPPI RAM, and in the internal CPPI RAM otherwise;
- if descs_pool_size is not specified in DT, the default value of 256 will be used, which allows the CPDMA descriptor pool to be placed in the internal CPPI RAM (the current default behaviour);
- CPDMA will ignore descs_pool_size if descs_pool_size = 0, for backward compatibility with davinci_emac.
descs_pool_size is a boot-time setting and can't be changed once CPSW/CPDMA is initialized. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
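A sketch of how such a module parameter is commonly declared; the parameter name comes from the description above, while the type and permission bits shown are assumptions:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Total number of CPDMA CPPI descriptors shared by RX and TX; a default of
 * 256 keeps the pool small enough for the internal CPPI RAM.
 */
static int descs_pool_size = 256;
module_param(descs_pool_size, int, 0444);
MODULE_PARM_DESC(descs_pool_size, "Total number of CPDMA CPPI descriptors");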
-