- 16 May 2015: 21 commits
-
-
By Ying Xue
Once we get a neighbour by looking it up in the ARP cache or creating a new one in rocker_port_ipv4_resolve(), the neighbour's refcount is already taken. But as we never put that refcount after the neighbour is used, the neighbour entry is leaked. Suggested-by: Eric Dumazet <edumazet@google.com> Acked-by: Jiri Pirko <jiri@resnulli.us> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
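A minimal sketch of the refcount balance being described, assuming hypothetical function and variable names (this is not rocker's actual code):

```c
#include <linux/err.h>
#include <net/arp.h>
#include <net/neighbour.h>

/* Both __ipv4_neigh_lookup() and neigh_create() return a neighbour
 * with a reference held, so the caller must release it when done. */
static int ipv4_resolve_example(struct net_device *dev, __be32 ip_addr)
{
	struct neighbour *n = __ipv4_neigh_lookup(dev, (__force u32)ip_addr);

	if (!n) {
		n = neigh_create(&arp_tbl, &ip_addr, dev);
		if (IS_ERR(n))
			return PTR_ERR(n);
	}

	/* ... use n (e.g. n->ha) to resolve the next hop ... */

	neigh_release(n);	/* the put whose absence caused the leak */
	return 0;
}
```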
-
By David S. Miller
Tom Lendacky says: ==================== amd-xgbe: AMD XGBE driver updates 2015-05-12 The following series of patches includes functional updates and changes to the driver. - Add additional statistics to be collected and reported - Use the netif_* functions for issuing some debug and informational driver messages - Rx path SKB allocation cleanup/simplification - Remove the stand-alone phylib driver and incorporate its functionality into the NIC driver - Simplify device tree support while maintaining backwards compatibility - Fix the flow control negotiation logic to properly configure flow control - Remove the checking and setting of the device dma_mask field This patch series is based on net-next. Changes in v2: - Change from using the netif_msg_*/netdev_* combination for issuing messages to the more concise netif_* ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Lendacky, Thomas
The underlying device support will set the device dma_mask pointer if DMA is set up properly for the device. Remove the check for and assignment of dma_mask when it is null. Instead, just error out if the dma_set_mask_and_coherent function fails because dma_mask is null. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
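A minimal sketch of the pattern described above (not the driver's exact code): let dma_set_mask_and_coherent() fail on its own when dev->dma_mask is NULL instead of checking and assigning the pointer by hand.

```c
#include <linux/dma-mapping.h>

static int set_dma_mask_example(struct device *dev)
{
	int ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));

	if (ret) {
		dev_err(dev, "dma_set_mask_and_coherent failed\n");
		return ret;	/* a NULL dev->dma_mask ends up here */
	}
	return 0;
}
```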
-
By Lendacky, Thomas
The flow control negotiation logic is flawed and does not properly advertise and process auto-negotiation of the flow control settings. Update the flow control support to properly set the flow control auto-negotiation settings and process the results appropriately. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Lendacky, Thomas
Simplify the device tree support of the amd-xgbe driver by defining the PHY-related resources within the ethernet device node. The support retains backwards compatibility with the original binding. Update the driver version to 1.0.2. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Lendacky, Thomas
The AMD XGBE device is intended to work with a specific integrated PHY, and that PHY is not meant to be a standalone PHY for use by other devices. As such, this patch removes the phylib driver and implements the PHY support in the amd-xgbe driver (the majority of the logic from the phylib driver is moved into the amd-xgbe driver). Update the driver version to 1.0.1. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Lendacky, Thomas
Rework the SKB allocation so that all of the buffers of the first descriptor are handled in the SKB allocation routine. After copying the data in the header buffer (which can be just the header if split header processing succeeded, or header plus data if split header processing did not succeed) into the SKB, check for remaining data in the receive buffer. If there is data remaining in the receive buffer, add that as a frag to the SKB. Once an SKB has been allocated, all other descriptors are added as frags to the SKB. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
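A tiny sketch of the "add remaining data as a frag" step, with illustrative buffer names; skb_add_rx_frag() is the real helper:

```c
#include <linux/skbuff.h>

/* Attach leftover receive-buffer data (or a later descriptor's buffer)
 * to the SKB as a page fragment instead of copying it. */
static void rx_add_frag_example(struct sk_buff *skb, struct page *page,
				unsigned int offset, unsigned int len)
{
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset, len,
			len);	/* last arg: truesize charged to the socket */
}
```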
-
By Lendacky, Thomas
Add support for the network interface message level settings for determining whether to issue some of the driver messages. Make use of the netif_* interface where appropriate. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
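A hypothetical illustration of the message-level helpers this refers to; the private struct is a stand-in, but netif_msg_init() and netif_dbg() are the standard interfaces:

```c
#include <linux/netdevice.h>

struct example_priv {
	struct net_device *netdev;
	u32 msg_enable;			/* consulted by netif_dbg() et al. */
};

static void example_set_msglevel(struct example_priv *priv, int debug)
{
	priv->msg_enable = netif_msg_init(debug,
					  NETIF_MSG_DRV | NETIF_MSG_LINK);

	/* Emitted only when the NETIF_MSG_DRV bit is set: */
	netif_dbg(priv, drv, priv->netdev,
		  "msg_enable = %#x\n", priv->msg_enable);
}
```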
-
By Lendacky, Thomas
Add additional/extended statistics beyond what is provided by the hardware to be reported via ethtool. The new stats focus on the calls into ndo_start_xmit and the napi_poll routine. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
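A hypothetical shape for such software-side extended stats; the counter and string names below are invented for illustration, not the driver's:

```c
#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct example_ext_stats {
	u64 tx_xmit_calls;	/* incremented in ndo_start_xmit */
	u64 rx_napi_polls;	/* incremented in the napi poll routine */
};

static const char example_stats_strings[][ETH_GSTRING_LEN] = {
	"tx_xmit_calls",
	"rx_napi_polls",
};

static void example_get_ethtool_stats(struct net_device *netdev,
				      struct ethtool_stats *stats, u64 *data)
{
	struct example_ext_stats *s = netdev_priv(netdev);

	data[0] = s->tx_xmit_calls;
	data[1] = s->rx_napi_polls;
}
```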
-
By David S. Miller
Joachim Eastwood says: ==================== convert stmmac glue layers into platform drivers This patch set aims to convert the current dwmac glue layers into proper platform drivers as requested by Arnd[1]. These changes start from patch 3 and onwards. Overview: Platform driver functions like probe and remove are exported from the stmmac platform code and then used in subsequent glue layer conversions. The conversion involves adding the platform driver boilerplate code and adding it to the build system. The last patch removes the driver from the stmmac platform code, thus making it into a library for common platform driver functions. The first two patches add a glue layer for my platform. I chose to first add an old-style glue layer and then convert it. The churn this creates is just 3 lines. It would be very nice if people could test this patch set on their respective platforms. My testing has been limited to compiling and testing on my (LPC18xx) platform. Thanks! Next I will look into cleaning up the stmmac platform code. [1] http://marc.info/?l=linux-arm-kernel&m=143059524606459&w=2 ==================== Tested-by: Chen-Yu Tsai <wens@csie.org> Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
The dwmac-generic driver replaces the driver inside the stmmac platform code. This turns the stmmac platform code into a library used by drivers for common platform driver functions. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
Convert the platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
Convert the platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
Convert the platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
Convert the platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
Convert the platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
Convert the platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
Create a new driver around the generic device tree match strings in the stmmac platform code. This driver is intended to be used by all platforms that don't require any platform-specific code to function or that use platform data. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
Prepare the stmmac platform code to support standalone drivers by exporting the needed functions and having of_match_device use the match table reference already present in the driver struct. This will allow us to easily reuse the platform driver functions from this code in other standalone platform drivers. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
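A rough sketch of what a converted glue layer built on the exported helpers looks like (compare the dwmac-generic driver); the compatible string and driver names below are placeholders, not a real binding, and the stmmac_pltfr_* names are assumed from the series:

```c
#include <linux/module.h>
#include <linux/platform_device.h>
#include "stmmac_platform.h"

static const struct of_device_id example_dwmac_match[] = {
	{ .compatible = "example,example-dwmac" },
	{ }
};
MODULE_DEVICE_TABLE(of, example_dwmac_match);

static struct platform_driver example_dwmac_driver = {
	.probe  = stmmac_pltfr_probe,	/* exported by stmmac_platform */
	.remove = stmmac_pltfr_remove,
	.driver = {
		.name           = "example-dwmac",
		.pm             = &stmmac_pltfr_pm_ops,
		.of_match_table = example_dwmac_match,
	},
};
module_platform_driver(example_dwmac_driver);
```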
-
By Joachim Eastwood
Add device tree binding documentation for nxp,lpc1850-dwmac. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Joachim Eastwood
Add support for Ethernet on NXP LPC18xx and LPC43xx using the dwmac driver. This glue is required to set up the PHY interface mode, MII or RMII, on the SoC. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 May 2015: 19 commits
-
-
By sixiao@microsoft.com
The current code does not lock anything when calculating the TX and RX stats. As a result, the RX and TX data reported by ifconfig are not accurate in a system with high network throughput and multiple CPUs (in my test, RX/TX = 83% between 2 Hyper-V VM nodes which have 8 vCPUs and 40G Ethernet). This patch fixes the above issue by using per_cpu stats. netvsc_get_stats64() summarizes TX and RX data by iterating over all CPUs to get their respective stats. This v2 patch addresses David's comments on the cleanup path when netdev_alloc_pcpu_stats() failed. Signed-off-by: Simon Xiao <sixiao@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
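A minimal sketch of the per-cpu stats pattern being adopted; the struct and field names are illustrative, not netvsc's actual layout:

```c
#include <linux/netdevice.h>
#include <linux/u64_stats_sync.h>

struct example_pcpu_stats {
	u64 tx_packets;
	u64 tx_bytes;
	struct u64_stats_sync syncp;
};

/* Allocation, with the error path the v2 review was about: */
static int example_init_stats(struct example_pcpu_stats __percpu **statsp)
{
	*statsp = netdev_alloc_pcpu_stats(struct example_pcpu_stats);
	return *statsp ? 0 : -ENOMEM;
}

/* Hot path: each CPU updates only its own counters, no shared lock. */
static void example_count_tx(struct example_pcpu_stats __percpu *stats,
			     unsigned int bytes)
{
	struct example_pcpu_stats *s = this_cpu_ptr(stats);

	u64_stats_update_begin(&s->syncp);
	s->tx_packets++;
	s->tx_bytes += bytes;
	u64_stats_update_end(&s->syncp);
}
```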
-
By Michael Holzheu
Fix several sparse warnings like: lib/test_bpf.c:1824:25: sparse: constant 4294967295 is so big it is long lib/test_bpf.c:1878:25: sparse: constant 0x0000ffffffff0000 is so big it is long Fixes: cffc642d ("test_bpf: add 173 new testcases for eBPF") Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
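The usual fix for this class of sparse warning is an explicit ULL suffix so the constant has a 64-bit type everywhere (illustrative values, not the exact test_bpf.c lines):

```c
static const unsigned long long k_warns = 0x0000ffffffff0000;    /* sparse complains */
static const unsigned long long k_clean = 0x0000ffffffff0000ULL; /* quiet */
```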
-
By Florian Westphal
commit d2788d34 ("net: sched: further simplify handle_ing") removed the call to qdisc_enqueue_root(). However, after this removal we no longer set the qdisc pkt length. This breaks traffic policing on ingress. This is the minimum fix: set the qdisc pkt length before tc_classify. Only setting the length does remove support for 'stab' on ingress, but as Alexei pointed out: "Though it was allowed to add qdisc_size_table to ingress, it's useless. Nothing takes advantage of recomputed qdisc_pkt_len". Jamal suggested to use qdisc_pkt_len_init(), but as Eric mentioned, that would result in qdisc_pkt_len_init no longer being inlined due to the additional 2nd call site. Ingress policing is rare, and GRO doesn't really work that well with police on ingress, as we see packets > mtu and drop skbs that -- without aggregation -- would still have fit within the policer's budget. Thus to have reliable/smooth ingress policing, GRO has to be turned off. Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Jamal Hadi Salim <jhs@mojatatu.com> Fixes: d2788d34 ("net: sched: further simplify handle_ing") Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
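The minimum fix, schematically (the real handle_ing() in net/core/dev.c does more than this): restore qdisc_pkt_len before classification, since that is what ingress policing meters against.

```c
#include <linux/skbuff.h>
#include <net/sch_generic.h>
#include <net/pkt_sched.h>

static int ingress_classify_example(struct sk_buff *skb,
				    const struct tcf_proto *fl,
				    struct tcf_result *res)
{
	/* This assignment was lost when qdisc_enqueue_root() went away. */
	qdisc_skb_cb(skb)->pkt_len = skb->len;
	return tc_classify(skb, fl, res);
}
```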
-
By Nicolas Dichtel
Unlock was missing on the error path. Fixes: 95f38411 ("netns: use a spin_lock to protect nsid management") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
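A generic sketch of the bug class being fixed (not the netns code itself): every exit taken after the lock is acquired must release it.

```c
#include <linux/spinlock.h>

static int locked_op_example(spinlock_t *lock, int (*op)(void *), void *arg)
{
	int err;

	spin_lock(lock);
	err = op(arg);
	if (err) {
		spin_unlock(lock);	/* the unlock the error path was missing */
		return err;
	}
	spin_unlock(lock);
	return 0;
}
```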
-
By Bert Vermeulen
This also changes mii_bus.phy_mask to u32 for consistency. Signed-off-by: Bert Vermeulen <bert@biot.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Daniel Borkmann
A couple of torture test cases related to the bug fixed in 0b59d880 ("ARM: net: delegate filter to kernel interpreter when imm_offset() return value can't fit into 12bits."). I've added a helper to allocate and fill the insn space. Output on x86_64 from my laptop: test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:0 7 PASS test_bpf: #234 BPF_MAXINSNS: Single literal jited:0 8 PASS test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:0 11553 PASS test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS test_bpf: #237 BPF_MAXINSNS: Very long jump jited:0 9 PASS test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:0 20329 20398 PASS test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:0 32178 32475 PASS test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:0 10518 PASS test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:1 4 PASS test_bpf: #234 BPF_MAXINSNS: Single literal jited:1 4 PASS test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:1 1625 PASS test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS test_bpf: #237 BPF_MAXINSNS: Very long jump jited:1 8 PASS test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:1 3301 3174 PASS test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:1 24107 23491 PASS test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:1 8651 PASS Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Nicolas Schichan <nschichan@freebox.fr> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
Now that we allow storing more request socks per listener, we might hit syncookie mode less often and hit the following bug in our stack: When we send a burst of syncookies and then exit this mode, tcp_synq_no_recent_overflow() can return false if the ACK packets coming from clients arrive three seconds after the end of the syncookie episode. This is way too strong a requirement and conflicts with the rest of the syncookie code, which allows an ACK to be aged up to 2 minutes. Perfectly valid ACK packets are dropped just because clients might be in a crowded wifi environment or on another planet. So let's fix this, and also change tcp_synq_overflow() to not dirty a cache line for every syncookie we send while we are under attack. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Florian Westphal <fw@strlen.de> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
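A sketch of the cache-line write avoidance idea, with names simplified from the actual tcp code: refresh the overflow timestamp only when it is stale, so a SYN flood does not dirty the line on every syncookie sent.

```c
#include <linux/jiffies.h>

static unsigned long synq_overflow_ts;

static void synq_overflow_example(void)
{
	unsigned long now = jiffies;

	if (time_after(now, synq_overflow_ts + HZ))
		synq_overflow_ts = now;	/* at most ~one write per second */
}
```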
-
By Alexander Duyck
The rx_dropped stat wasn't being reported when ip_tunnel_get_stats64 was called. This was leading to some confusing results in my debugging, as I was seeing rx_errors increment but no other value which pointed me toward the type of error being seen. This change corrects that by using netdev_stats_to_stats64 to copy all available dev stats instead of just the few that were hand-picked. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
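The pattern described, schematically: copy every dev->stats field (rx_dropped included) instead of hand-picking a few.

```c
#include <linux/netdevice.h>

static void tunnel_get_stats64_example(struct net_device *dev,
				       struct rtnl_link_stats64 *tot)
{
	netdev_stats_to_stats64(tot, &dev->stats);
	/* ... then fold in the tunnel's per-cpu rx/tx counters ... */
}
```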
-
By Willem de Bruijn
Avoid two xchg calls whose return values were unused, causing a warning on some architectures. The relevant variable is a hint and is read without mutual exclusion. This fix makes all writers hold the receive_queue lock. Suggested-by: David S. Miller <davem@davemloft.net> Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
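The shape of the fix as described, with "hint" standing in for the af_packet field in question: writers take the receive queue lock rather than using xchg(); readers remain lockless.

```c
#include <linux/skbuff.h>
#include <linux/spinlock.h>

static void set_hint_example(struct sk_buff_head *rcvq,
			     unsigned int *hint, unsigned int val)
{
	spin_lock_bh(&rcvq->lock);
	*hint = val;		/* previously: (void)xchg(hint, val) */
	spin_unlock_bh(&rcvq->lock);
}
```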
-
By françois romieu
None of those drivers uses last_rx for its own needs. See 4dc89133 ("net: add a comment on netdev->last_rx") for reference. Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Zhangfei Gao <zhangfei.gao@linaro.org> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Wingman Kwok <w-kwok2@ti.com> Cc: Murali Karicheri <m-karicheri2@ti.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By David S. Miller
Florian Fainelli says: ==================== net: phy: broken turn-around support This is an attempt at solving the broken turn-around problem in a way that is not specific to the mdio-gpio driver, since it affects different kinds of platforms. We cannot keep this localized to PHY device drivers, because probing a PHY device which has a broken turn-around can fail as early as in get_phy_id(); therefore we need a bit of help from Device Tree/platform_data. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Florian Fainelli
Update mdiobb_read() to check whether the PHY has a broken turn-around, and if it does, ignore it to make the read succeed. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Florian Fainelli
Some Ethernet PHY devices/switches may not properly release the MDIO bus during turn-around time, and fail to drive it low, which can be seen by some controllers as a read failure, while the data clocked in is still correct. Add a boolean property "broken-turn-around" which is parsed by the generic MDIO bus probing code and will set the corresponding bit in the MDIO bus phy_ignore_ta_mask bitmask for MDIO bus drivers to utilize that information. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Florian Fainelli
Some PHY devices/switches will not release the turn-around line as they should at the end of an MDIO transaction. To help with such situations, allow MDIO bus drivers to be made aware of such restrictions. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
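A sketch of the policy this series adds: a failed turn-around (the PHY did not drive the line low) is tolerated for PHY addresses set in the new mii_bus.phy_ignore_ta_mask. The ta_bit argument is a hypothetical stand-in for the bit-banged turn-around sample.

```c
#include <linux/bitops.h>
#include <linux/phy.h>

static bool turnaround_ok(struct mii_bus *bus, int phy_addr, int ta_bit)
{
	if (ta_bit == 0)
		return true;	/* PHY released the bus correctly */

	/* Broken PHY: trust the clocked-in data anyway if whitelisted. */
	return !!(bus->phy_ignore_ta_mask & BIT(phy_addr));
}
```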
-
By Ying Xue
After commit eeb1bd5c ("net: Add a struct net parameter to sock_create_kern"), we should use sock_create_kern() to create kernel sockets, as the interface doesn't reference count struct net any more. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
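The call being switched to, schematically: sock_create_kern() now takes the netns directly and does not hold a reference on it (protocol choice here is illustrative).

```c
#include <linux/in.h>
#include <linux/net.h>
#include <net/net_namespace.h>

static int kernel_socket_example(struct net *net, struct socket **sockp)
{
	return sock_create_kern(net, PF_INET, SOCK_DGRAM, IPPROTO_UDP, sockp);
}
```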
-
By Brian Haley
Fix compile error in net/sched/cls_flower.c: net/sched/cls_flower.c: In function ‘fl_set_key’: net/sched/cls_flower.c:240:3: error: implicit declaration of function ‘tcf_change_indev’ [-Werror=implicit-function-declaration] err = tcf_change_indev(net, tb[TCA_FLOWER_INDEV]); Introduced in 77b9900e Fixes: 77b9900e ("tc: introduce Flower classifier") Signed-off-by: Brian Haley <brian.haley@hp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By David S. Miller
Jon Maloy says: ==================== tipc: some link layer improvements We continue eliminating redundant complexity at the link layer, and add a couple of improvements to the packet sending functionality. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jon Paul Maloy
Currently, the packet sequence number is updated and added to each packet at the moment a packet is added to the link backlog queue. This is wasteful, since it forces the code to traverse the send packet list packet by packet when adding them to the backlog queue. It would be better to just splice the whole packet list into the backlog queue when that is the right action to do. In this commit, we make that change. Also, since the sequence numbers can now no longer be assigned to the packets at the moment they are added to the backlog queue, we instead calculate and add them at the moment of transmission, when the backlog queue has to be traversed anyway. We do this in the function tipc_link_push_packet(). Reviewed-by: Erik Hugne <erik.hugne@ericsson.com> Reviewed-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
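A schematic of the splice step, assuming illustrative queue names (not tipc's code); sequence numbers are then stamped during the transmit-time walk that must happen anyway.

```c
#include <linux/skbuff.h>

static void backlog_splice_example(struct sk_buff_head *sendq,
				   struct sk_buff_head *backlog)
{
	/* One list operation instead of a per-skb loop. */
	skb_queue_splice_tail_init(sendq, backlog);
}
```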
-
By Jon Paul Maloy
The link congestion algorithm used until now implies two problems. - It is too generous towards lower-level messages in situations of high load by giving "absolute" bandwidth guarantees to the different priority levels. LOW traffic is guaranteed 10%, MEDIUM is guaranteed 20%, HIGH is guaranteed 30%, and CRITICAL is guaranteed 40% of the available bandwidth. But, in the absence of higher-level traffic, the ratio between two distinct levels becomes unreasonable. E.g. if there is only LOW and MEDIUM traffic on a system, the former is guaranteed 1/3 of the bandwidth, and the latter 2/3. This again means that if there is e.g. one LOW user and 10 MEDIUM users, the former will have 33.3% of the bandwidth, and the others will have to compete for the remainder, i.e. each will end up with 6.7% of the capacity. - Packets of type MSG_BUNDLER are created at SYSTEM importance level, but only after the packets bundled into it have passed the congestion test for their own respective levels. Since bundled packets don't result in incrementing the level counter for their own importance, only occasionally for the SYSTEM level counter, they do in practice obtain SYSTEM level importance. Hence, the current implementation provides a gap in the congestion algorithm that in the worst case may lead to a link reset. We now refine the congestion algorithm as follows: - A message is accepted to the link backlog only if its own level counter, and all superior level counters, permit it. - The importance of a created bundle packet is set according to its contents. A bundle packet created from messages at levels LOW to CRITICAL is given importance level CRITICAL, while a bundle created from a SYSTEM level message is given importance SYSTEM. In the latter case only subsequent SYSTEM level messages are allowed to be bundled into it. This solves the first problem described above, by making the bandwidth guarantee relative to the total number of users at all levels; only the upper limit for each level remains absolute. In the example described above, the single LOW user would use 1/11th of the bandwidth, the same as each of the ten MEDIUM users, but he still has the same guarantee against starvation as the latter ones. The fix also solves the second problem. If the CRITICAL level is filled up by bundle packets of that level, no lower-level packets will be accepted any more. Suggested-by: Gergely Kiss <gergely.kiss@ericsson.com> Reviewed-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
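A sketch of the refined admission rule: a message at importance "imp" is accepted onto the backlog only if its own level counter and every superior level counter are under their limits. The array layout is illustrative, not tipc's actual struct.

```c
#include <linux/types.h>

enum { LOW, MEDIUM, HIGH, CRITICAL, SYSTEM, LEVELS };

struct backlog_level { int len, limit; };

static bool backlog_permits(const struct backlog_level bl[LEVELS], int imp)
{
	int i;

	/* Check this level and all more-important ones above it. */
	for (i = imp; i < LEVELS; i++)
		if (bl[i].len >= bl[i].limit)
			return false;
	return true;
}
```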
-