1. 13 January 2018, 3 commits
  2. 10 January 2018, 11 commits
  3. 06 January 2018, 1 commit
  4. 07 December 2017, 1 commit
  5. 22 November 2017, 1 commit
    • ixgbe: Fix skb list corruption on Power systems · 0a9a17e3
      Brian King authored
      This patch fixes an issue seen on Power systems with ixgbe which results
      in skb list corruption and an eventual kernel oops. The following is what
      was observed:
      
      CPU 1                                   CPU 2
      ============================            ============================
      1: ixgbe_xmit_frame_ring                ixgbe_clean_tx_irq
      2:  first->skb = skb                     eop_desc = tx_buffer->next_to_watch
      3:  ixgbe_tx_map                         read_barrier_depends()
      4:   wmb                                 check adapter written status bit
      5:   first->next_to_watch = tx_desc      napi_consume_skb(tx_buffer->skb ..);
      6:   writel(i, tx_ring->tail);
      
      The read_barrier_depends() barrier is insufficient to ensure that
      tx_buffer->skb is not loaded before tx_buffer->next_to_watch, which can
      result in loading a stale skb pointer. This patch replaces
      read_barrier_depends() with smp_rmb() to order the load of
      tx_buffer->skb after the load of tx_buffer->next_to_watch.
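      
      A minimal sketch of the fixed cleanup-side ordering follows; the
      structure and the eop_desc_written_back() helper are simplified
      stand-ins for illustration, not the actual ixgbe code:
      
      ----
      struct tx_buffer {
              struct sk_buff *skb;
              union ixgbe_adv_tx_desc *next_to_watch;
      };
      
      static void clean_tx_irq(struct tx_buffer *tx_buffer)
      {
              union ixgbe_adv_tx_desc *eop_desc = tx_buffer->next_to_watch;
      
              if (!eop_desc)
                      return;
      
              /*
               * Order the load of next_to_watch above against the load of
               * tx_buffer->skb below; a data-dependency barrier does not
               * order these two independent loads, but smp_rmb() does.
               */
              smp_rmb();
      
              if (eop_desc_written_back(eop_desc))    /* hypothetical helper */
                      napi_consume_skb(tx_buffer->skb, 64);
      }
      ----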
      
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      0a9a17e3
  6. 08 November 2017, 1 commit
  7. 05 November 2017, 1 commit
  8. 02 November 2017, 1 commit
  9. 26 October 2017, 1 commit
  10. 25 October 2017, 1 commit
    • locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE() · 6aa7de05
      Mark Rutland authored
      
      Please do not apply this to mainline directly; instead, please re-run the
      coccinelle script shown below and apply its output.
      
      For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
      preference to ACCESS_ONCE(), and new code is expected to use one of the
      former. So far, there's been no reason to change most existing uses of
      ACCESS_ONCE(), as these aren't harmful, and changing them results in
      churn.
      
      However, for some features, the read/write distinction is critical to
      correct operation. To distinguish these cases, separate read/write
      accessors must be used. This patch migrates (most) remaining
      ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
      coccinelle script:
      
      ----
      // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
      // WRITE_ONCE()
      
      // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
      
      virtual patch
      
      @ depends on patch @
      expression E1, E2;
      @@
      
      - ACCESS_ONCE(E1) = E2
      + WRITE_ONCE(E1, E2)
      
      @ depends on patch @
      expression E;
      @@
      
      - ACCESS_ONCE(E)
      + READ_ONCE(E)
      ----
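      
      Applied to a typical line of driver code, the script makes the
      read/write distinction explicit; the fragment below is illustrative,
      not taken from any particular file:
      
      ----
      /* Before: both the load and the store go through ACCESS_ONCE(). */
      tail = ACCESS_ONCE(ring->tail);
      ACCESS_ONCE(ring->head) = head;
      
      /* After: reads use READ_ONCE(), writes use WRITE_ONCE(). */
      tail = READ_ONCE(ring->tail);
      WRITE_ONCE(ring->head, head);
      ----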
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: linux-arch@vger.kernel.org
      Cc: mpe@ellerman.id.au
      Cc: shuah@kernel.org
      Cc: snitzer@redhat.com
      Cc: thor.thayer@linux.intel.com
      Cc: tj@kernel.org
      Cc: viro@zeniv.linux.org.uk
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6aa7de05
  11. 21 October 2017, 2 commits
  12. 18 October 2017, 1 commit
  13. 10 October 2017, 4 commits
  14. 09 October 2017, 2 commits
  15. 27 September 2017, 2 commits
    • bpf, ixgbe: add meta data support · 366a88fe
      Daniel Borkmann authored
      Implement support for transferring XDP meta data into the skb for the
      ixgbe driver; before calling into the program, xdp.data_meta points
      to xdp.data, and on program return with a pass verdict, we call
      into skb_metadata_set().
      
      We implement this for the default ixgbe_build_skb() variant. For
      ixgbe_construct_skb(), which is used when legacy-rx buffer management
      mode is turned on via ethtool, I found that XDP gets 0 headroom, so
      neither xdp_adjust_head() nor xdp_adjust_meta() can be used with it.
      Just add a comment explaining this operating mode.
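      
      A simplified sketch of the build-skb path follows; the local names
      (hard_start, headroom, size, truesize, xdp_prog) are invented for
      illustration, and the real ixgbe code differs in detail:
      
      ----
      struct xdp_buff xdp;
      u32 act;
      
      xdp.data_hard_start = hard_start;
      xdp.data = hard_start + headroom;
      xdp.data_meta = xdp.data;               /* empty meta area to start */
      xdp.data_end = xdp.data + size;
      
      act = bpf_prog_run_xdp(xdp_prog, &xdp);
      if (act == XDP_PASS) {
              /* Any room the program opened between data_meta and data
               * is carried over into the skb via skb_metadata_set().
               */
              unsigned int metasize = xdp.data - xdp.data_meta;
              struct sk_buff *skb = build_skb(xdp.data_hard_start, truesize);
      
              skb_reserve(skb, xdp.data - xdp.data_hard_start);
              skb_put(skb, xdp.data_end - xdp.data);
              if (metasize)
                      skb_metadata_set(skb, metasize);
      }
      ----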
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      366a88fe
    • bpf: add meta pointer for direct access · de8f3a83
      Daniel Borkmann authored
      This work enables generic transfer of metadata from XDP into skb. The
      basic idea is that we can make use of the fact that the resulting skb
      must be linear and already comes with a larger headroom for supporting
      bpf_xdp_adjust_head(), which mangles xdp->data. Here, we base our work
      on a similar principle and introduce a small helper bpf_xdp_adjust_meta()
      for adjusting a new pointer called xdp->data_meta. Thus, the packet has
      a flexible and programmable room for meta data, followed by the actual
      packet data. struct xdp_buff is therefore laid out such that we first
      point to data_hard_start, then to data_meta directly prepended to data,
      followed by data_end marking the end of the packet. bpf_xdp_adjust_head()
      takes into account whether we have meta data already prepended and, if
      so, memmove()s it along with the given offset, provided there's enough room.
      
      xdp->data_meta is optional and programs are not required to use it. The
      rationale is that when we process the packet in XDP (e.g. as a DoS
      filter), we can push further meta data along with it for the XDP_PASS
      case, and give the guarantee that a clsact ingress BPF program on the
      same device can pick this up for further post-processing. Since we work
      with an skb there, we can also set skb->mark, skb->priority or other skb
      meta data out of BPF; keeping this scratch space generic and programmable
      allows for more flexibility than defining a direct 1:1 transfer of
      potentially new XDP members into the skb (it is also more efficient, as
      we don't need to initialize/handle each such new member). The facility
      also works together with GRO aggregation. The scratch space at the head
      of the packet can be any multiple of 4 bytes, up to 32 bytes in size.
      Drivers not yet supporting xdp->data_meta can simply set up
      xdp->data_meta as xdp->data + 1, as bpf_xdp_adjust_meta() will detect
      this and bail out, such that a subsequent match against xdp->data for
      later access is guaranteed to fail.
      
      The verifier treats xdp->data_meta/xdp->data the same way as it treats
      xdp->data/xdp->data_end pointer comparisons. The requirement for doing
      the compare against xdp->data is that the pointer hasn't been modified
      from the original address we got from ctx access. It may, however,
      already carry a range marking from prior successful
      xdp->data/xdp->data_end pointer comparisons.
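      
      A hedged sketch of an XDP program using the new helper follows; the
      4-byte scratch value and all names are illustrative, and the headers
      assume a current libbpf toolchain rather than the 2017 tree:
      
      ----
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      SEC("xdp")
      int xdp_store_meta(struct xdp_md *ctx)
      {
              __u32 *meta;
              void *data;
      
              /* Grow the meta area by 4 bytes; the size must stay a
               * multiple of 4, up to 32 bytes.
               */
              if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
                      return XDP_PASS;
      
              meta = (void *)(long)ctx->data_meta;
              data = (void *)(long)ctx->data;
      
              /* The verifier requires this bounds check against
               * xdp->data before the meta area may be dereferenced.
               */
              if ((void *)(meta + 1) > data)
                      return XDP_PASS;
      
              *meta = 0xcafe;   /* scratch value for clsact ingress */
              return XDP_PASS;
      }
      
      char _license[] SEC("license") = "GPL";
      ----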
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      de8f3a83
  16. 25 August 2017, 1 commit
  17. 19 August 2017, 2 commits
  18. 12 August 2017, 1 commit
  19. 08 August 2017, 3 commits