1. 11 Feb 2017 (1 commit)
    • xprtrdma: Fix Read chunk padding · 24abdf1b
      By Chuck Lever
      When pad optimization is disabled, rpcrdma_convert_iovs still
      does not add explicit XDR round-up padding to a Read chunk.
      
      Commit 677eb17e ("xprtrdma: Fix XDR tail buffer marshalling")
      incorrectly short-circuited the test for whether round-up padding
      is needed that appears later in rpcrdma_convert_iovs.
      
      However, if this is indeed a regular Read chunk (and not a
      Position-Zero Read chunk), the tail iovec _always_ contains the
      chunk's padding, and never anything else.
      
      So it is straightforward to simply skip the tail when padding
      optimization is enabled, and to add the tail as a subsequent Read
      chunk segment when it is disabled (see the sketch at the end of
      this entry).
      
      Fixes: 677eb17e ("xprtrdma: Fix XDR tail buffer marshalling")
      Cc: stable@vger.kernel.org # v4.9+
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
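      As background, here is a minimal user-space sketch of the XDR
      round-up rule involved (illustrative only; xdr_pad_size here is a
      local helper written for this example, not the kernel code the
      patch touches):

        #include <stdio.h>

        /* XDR encodes opaque data in 4-byte units: a chunk whose length
         * is not a multiple of 4 must be followed by explicit zero
         * padding to reach the next 4-byte boundary. */
        static size_t xdr_pad_size(size_t len)
        {
            return (4 - (len & 3)) & 3;
        }

        int main(void)
        {
            const size_t lens[] = { 13, 16, 1023 };
            for (int i = 0; i < 3; i++)
                printf("payload %zu -> %zu pad byte(s)\n",
                       lens[i], xdr_pad_size(lens[i]));
            return 0;
        }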
2. 10 Feb 2017 (4 commits)
3. 09 Feb 2017 (7 commits)
4. 31 Jan 2017 (1 commit)
    • SUNRPC: two small improvements to rpcauth shrinker. · 4c3ffd05
      By NeilBrown
      1/ If we find an entry that is too young to be pruned, return
         SHRINK_STOP to ensure we don't get called again. This is more
         correct, and avoids wasting a little CPU time. On kernels prior
         to 3.12, it can also prevent drop_slab() from spinning
         indefinitely.
      
      2/ Return a precise number from rpcauth_cache_shrink_count(),
         rather than rounding down to a multiple of 100 (or whatever
         sysctl_vfs_cache_pressure is). This ensures that when we
         "echo 3 > /proc/sys/vm/drop_caches", this cache is still
         purged, even if it has fewer than 100 entries.
      
      Neither of these changes is really important; they just make the
      behaviour more predictable, which can be helpful when debugging
      related issues. Both behaviours are modelled in the sketch at the
      end of this entry.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
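      A small user-space model of the two behaviours above (the shapes
      are assumed for illustration; the real code is the rpcauth
      shrinker in net/sunrpc/auth.c, and SHRINK_STOP comes from
      linux/shrinker.h):

        #include <stdio.h>

        #define SHRINK_STOP (~0UL)  /* kernel's "do not call me again" */

        struct entry { long age; }; /* stand-in for an rpc credential */

        /* 1/ Stop the walk at the first entry too young to prune; if
         *    nothing was freed, report SHRINK_STOP so the caller gives
         *    up instead of retrying. */
        static unsigned long scan_model(const struct entry *e, int n,
                                        long min_age)
        {
            unsigned long freed = 0;

            for (int i = 0; i < n; i++) {
                if (e[i].age < min_age)
                    return freed ? freed : SHRINK_STOP;
                freed++;    /* this entry would be pruned */
            }
            return freed;
        }

        /* 2/ Report the exact entry count, not a value that later
         *    scaling rounds down to a multiple of 100. */
        static unsigned long count_model(unsigned long nr_entries)
        {
            return nr_entries;
        }

        int main(void)
        {
            const struct entry list[] = { {10}, {7}, {2} };

            printf("count=%lu freed=%lu\n",
                   count_model(3), scan_model(list, 3, 5));
            return 0;
        }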
5. 27 Jan 2017 (1 commit)
6. 26 Jan 2017 (4 commits)
7. 25 Jan 2017 (16 commits)
8. 24 Jan 2017 (4 commits)
    • mac80211: don't try to sleep in rate_control_rate_init() · 115865fa
      By Johannes Berg
      In my previous patch, I missed that rate_control_rate_init() is
      called from some places that cannot sleep, so it cannot call
      ieee80211_recalc_min_chandef(). Remove that call for now to fix
      the context bug; we'll have to find a different way to fix the
      minimum channel width issue. The sketch at the end of this entry
      models the constraint.
      
      Fixes: 96aa2e7c ("mac80211: calculate min channel width correctly")
      Reported-by: Xiaolong Ye (via lkp-robot) <xiaolong.ye@intel.com>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
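      A user-space model of the context rule involved (names and shapes
      are invented for illustration; this is not the mac80211 code):

        #include <assert.h>
        #include <stdbool.h>
        #include <stdio.h>

        static bool in_atomic_context;  /* models "caller holds a lock" */

        /* Models a function that may sleep, like
         * ieee80211_recalc_min_chandef(): calling it from atomic
         * context would be a "sleeping while atomic" bug. */
        static void recalc_model(void)
        {
            assert(!in_atomic_context);
            puts("recalculated min channel width");
        }

        /* Models rate_control_rate_init(): reachable from atomic
         * context, so it must not call recalc_model() (the fix removes
         * exactly such a call). */
        static void rate_init_model(void)
        {
            puts("rate control initialized");
        }

        int main(void)
        {
            in_atomic_context = true;   /* e.g. a spinlock is held */
            rate_init_model();          /* safe: never sleeps */
            in_atomic_context = false;
            recalc_model();             /* safe only in process context */
            return 0;
        }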
    • netfilter: nf_tables: validate the name size when possible · b2fbd044
      By Liping Zhang
      Currently, if the user adds a stateful object whose name size
      exceeds NFT_OBJ_MAXNAMELEN - 1 (i.e. 31), we silently truncate it
      down to 31 characters. This is unfriendly; furthermore, it will
      cause duplicate stateful objects when two names share the same
      first 31 characters. So limit the stateful object's name size to
      NFT_OBJ_MAXNAMELEN - 1.
      
      After applying this patch, an error message will be printed like
      this:
        # name_32=$(printf "%0.sQ" {1..32})
        # nft add counter filter $name_32
        <cmdline>:1:1-52: Error: Could not process rule: Numerical result out
        of range
        add counter filter QQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQ
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      
      This patch also cleans up code in nftables that was missing the
      name size limit validation (a sketch of the check appears at the
      end of this entry).
      
      Fixes: e5009240 ("netfilter: nf_tables: add stateful objects")
      Signed-off-by: Liping Zhang <zlpnobody@gmail.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
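      A user-space sketch of the length rule being enforced (assumed
      shape for illustration; in the kernel the limit is expressed
      through the netlink attribute policy for the object name rather
      than an explicit strlen() check):

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        #define NFT_OBJ_MAXNAMELEN 32   /* buffer size: 31 chars + NUL */

        /* Reject, instead of silently truncating, any name longer than
         * NFT_OBJ_MAXNAMELEN - 1.  -ERANGE is the errno behind the
         * quoted "Numerical result out of range" message. */
        static int validate_obj_name(const char *name)
        {
            if (strlen(name) > NFT_OBJ_MAXNAMELEN - 1)
                return -ERANGE;
            return 0;
        }

        int main(void)
        {
            char name32[33];

            memset(name32, 'Q', 32);
            name32[32] = '\0';
            printf("short name: %d, 32-char name: %d\n",
                   validate_obj_name("counter1"),
                   validate_obj_name(name32));
            return 0;
        }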
    • net: dsa: Check return value of phy_connect_direct() · 4078b76c
      By Florian Fainelli
      We need to check the return value of phy_connect_direct() in
      dsa_slave_phy_connect(); otherwise we may continue initializing a
      slave network device with a PHY that is already attached somewhere
      else, and which will soon fail because the PHY device is in an
      error state.
      
      The conditions for such an error to occur are that we have a port
      of our switch that is not disabled, and that has the same port
      number as a PHY address (say both 5) that can be probed using the
      DSA slave MII bus. We end up having this slave network device find
      a PHY at the same address as our port number, and we try to attach
      to it.
      
      A slave network device (e.g. port 0) has already attached to our
      PHY device, and we try to re-attach it to a different network
      device; but since we ignore the error, we end up initializing
      incorrect device references by the time the slave network
      interface is opened.
      
      The code has been (re)organized several times, making it hard to
      provide an exact Fixes tag; this is a bugfix nonetheless. The
      sketch at the end of this entry models the bug class.
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
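      A user-space model of the bug class (names are invented for the
      example; this is not the DSA code itself):

        #include <errno.h>
        #include <stdio.h>

        /* Models phy_connect_direct(): fails when the PHY is already
         * attached to another network device. */
        static int phy_connect_model(int already_attached)
        {
            return already_attached ? -EBUSY : 0;
        }

        static int slave_setup_model(int already_attached)
        {
            int ret = phy_connect_model(already_attached);

            /* The fix: propagate the error instead of ignoring it and
             * continuing initialization with a bad PHY reference. */
            if (ret)
                return ret;

            puts("slave initialized with a valid PHY");
            return 0;
        }

        int main(void)
        {
            printf("free PHY: %d, busy PHY: %d\n",
                   slave_setup_model(0), slave_setup_model(1));
            return 0;
        }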
    • net: mpls: Fix multipath selection for LSR use case · 9f427a0e
      By David Ahern
      MPLS multipath for LSR is broken -- it always selects the first
      nexthop in the one-label case. For example:
      
          $ ip -f mpls ro ls
          100
                  nexthop as to 200 via inet 172.16.2.2  dev virt12
                  nexthop as to 300 via inet 172.16.3.2  dev virt13
          101
                  nexthop as to 201 via inet6 2000:2::2  dev virt12
                  nexthop as to 301 via inet6 2000:3::2  dev virt13
      
      In this example incoming packets have a single MPLS label, which
      means the BOS (bottom of stack) bit is set. The BOS bit is passed
      from mpls_forward down to mpls_multipath_hash, which never runs
      the hash loop because BOS is 1.
      
      Update mpls_multipath_hash to process the entire label stack.
      mpls_hdr_len tracks the total MPLS header length on each pass (on
      pass N, mpls_hdr_len is N * sizeof(mpls_shim_hdr)). When the label
      with the BOS set is found, the code verifies the skb has a
      sufficient header for IPv4 or IPv6, and finds the IPv4 or IPv6
      header by taking the last mpls_hdr pointer and adding 1 to advance
      past it.
      
      With these changes I have verified that the code correctly sees
      the label, BOS, and IPv4 and IPv6 addresses in the network header,
      and that ICMP/TCP/UDP traffic for IPv4 and IPv6 is distributed
      across the nexthops. A sketch of the label-stack walk appears at
      the end of this entry.
      
      Fixes: 1c78efa8 ("mpls: flow-based multipath selection")
      Acked-by: Robert Shearman <rshearma@brocade.com>
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
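      A self-contained sketch of the label-stack walk described above
      (host byte order and a toy XOR fold stand in for the real wire
      format and hash function; the real loop is mpls_multipath_hash()
      in net/mpls/af_mpls.c):

        #include <stdint.h>
        #include <stdio.h>

        /* An MPLS shim header is 4 bytes:
         * label(20) | TC(3) | BOS(1) | TTL(8). */
        static uint32_t shim_label(uint32_t shim) { return shim >> 12; }
        static int      shim_bos(uint32_t shim)   { return (shim >> 8) & 1; }

        int main(void)
        {
            /* Two-label stack; BOS set only on the last entry. */
            const uint32_t stack[] = {
                100u << 12,
                (200u << 12) | (1u << 8),
            };
            size_t mpls_hdr_len = 0;
            uint32_t hash = 0;

            for (size_t i = 0; i < sizeof(stack) / sizeof(stack[0]); i++) {
                mpls_hdr_len += sizeof(stack[0]);   /* N * shim size */
                hash ^= shim_label(stack[i]);       /* fold in this label */
                if (shim_bos(stack[i])) {
                    /* The payload (IPv4/IPv6 header) begins right past
                     * this shim; the kernel advances the last mpls_hdr
                     * pointer by 1 to reach it. */
                    printf("BOS at label %zu, hdr len %zu, hash %u\n",
                           i, mpls_hdr_len, hash);
                    break;
                }
            }
            return 0;
        }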
9. 21 Jan 2017 (2 commits)