1. 29 Mar 2018 (1 commit)
  2. 07 Feb 2018 (1 commit)
  3. 23 Jan 2018 (1 commit)
  4. 17 Jan 2018 (1 commit)
  5. 15 Jan 2018 (6 commits)
  6. 10 Jan 2018 (4 commits)
  7. 31 Dec 2017 (1 commit)
  8. 21 Dec 2017 (1 commit)
  9. 15 Dec 2017 (3 commits)
  10. 02 Dec 2017 (5 commits)
    • nfp: bpf: implement memory bulk copy for length within 32-bytes · 9879a381
      Authored by Jiong Wang
      For NFP, we want to re-group a sequence of load/store pairs lowered from
      memcpy/memmove into a single memory bulk operation which can then be
      accelerated using the NFP CPP bus.
      
      This patch extends the existing load/store auxiliary information by adding
      two new fields:
      
      	struct bpf_insn *paired_st;
      	s16 ldst_gather_len;
      
      Both fields are supposed to be carried by the load instruction at the
      head of the sequence. "paired_st" is the corresponding store instruction
      at the head and "ldst_gather_len" is the gathered length.
      
      If "ldst_gather_len" is negative, then the sequence is doing memory
      load/store in descending order, otherwise it is in ascending order. We need
      this information to detect overlapped memory access.
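      
      As a minimal, standalone sketch of how these two fields could sit in the
      JIT's per-insn metadata (the driver's real structure is larger and uses
      kernel types such as s16; the struct name here is a stand-in):
      
        #include <stdint.h>
        
        struct bpf_insn;                    /* opaque for this sketch */
        
        /* simplified stand-in for the driver's per-insn metadata */
        struct insn_meta {
                struct bpf_insn *paired_st; /* store paired with this head load,
                                             * NULL when no bulk copy starts here */
                int16_t ldst_gather_len;    /* total bytes gathered by the sequence;
                                             * negative means the copy walks memory
                                             * in descending order */
        };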
      
      This patch then optimizes memory bulk copy when the copy length is within
      32 bytes.
      
      The read/write strategy used is as follows (a small sketch of the write
      selection follows the list):
      
        * Read.
          Use read32 (direct_ref), always.
      
        * Write.
          - length <= 8-bytes
            write8 (direct_ref).
          - length <= 32-bytes and is 4-byte aligned
            write32 (direct_ref).
          - length <= 32-bytes but is not 4-byte aligned
            write8 (indirect_ref).
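      
      A hedged illustration of that write-method selection as standalone C
      (the enum and function names are illustrative, not the driver's):
      
        #include <stdint.h>
        
        enum wr_method { WRITE8_DIRECT, WRITE32_DIRECT, WRITE8_INDIRECT };
        
        /* choose the store flavour for a bulk copy of len bytes, len <= 32 */
        static enum wr_method pick_write_method(uint16_t len)
        {
                if (len <= 8)
                        return WRITE8_DIRECT;   /* write8 (direct_ref) */
                if (!(len % 4))
                        return WRITE32_DIRECT;  /* 4-byte aligned: write32 (direct_ref) */
                return WRITE8_INDIRECT;         /* unaligned: write8 (indirect_ref) */
        }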
      
      NOTE: the optimization should not change program semantics. The destination
      register of the last load instruction should contain the same value before
      and after this optimization.
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • nfp: bpf: factor out is_mbpf_load & is_mbpf_store · 5e4d6d20
      Authored by Jiong Wang
      It is common that we need to check whether a BPF insn loads data from or
      stores data to memory.
      
      Therefore, it makes sense to factor the related code out into common
      helper functions.
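      
      A hedged sketch of what such helpers boil down to, written here against a
      raw opcode byte using the standard eBPF encoding masks (the driver's
      versions operate on its per-insn metadata instead):
      
        #include <stdbool.h>
        #include <stdint.h>
        
        #define EBPF_CLASS(code)  ((code) & 0x07)
        #define EBPF_MODE(code)   ((code) & 0xe0)
        #define EBPF_LDX          0x01
        #define EBPF_STX          0x03
        #define EBPF_MEM          0x60
        
        /* true if the insn loads from memory (LDX class, MEM mode) */
        static inline bool is_mbpf_load(uint8_t code)
        {
                return EBPF_CLASS(code) == EBPF_LDX && EBPF_MODE(code) == EBPF_MEM;
        }
        
        /* true if the insn stores to memory (STX class, MEM mode) */
        static inline bool is_mbpf_store(uint8_t code)
        {
                return EBPF_CLASS(code) == EBPF_STX && EBPF_MODE(code) == EBPF_MEM;
        }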
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • nfp: bpf: flag jump destination to guide insn combine optimizations · a09d5c52
      Authored by Jiong Wang
      The NFP eBPF offload JIT engine performs some instruction-combine based
      optimizations which, however, are not safe if the combined sequences
      cross basic block borders.
      
      Currently, there are post checks while fixing up jump destinations. If a
      jump destination is found to be an eBPF insn that has been combined into
      another one, the JIT engine raises an error and aborts.
      
      This is not optimal. The JIT engine ought to disable the optimization on
      such cross-bb-border sequences instead of aborting.
      
      As there is no control flow information in the eBPF infrastructure, we
      can't do basic-block based optimizations. Instead, this patch extends the
      existing jump destination record pass to also flag the jump destinations,
      so that the instruction combine passes can skip the optimization whenever
      an insn in the sequence is a jump target.
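      
      A hedged sketch of the resulting check in a combine pass; the flag name
      and the struct are illustrative stand-ins for the driver's metadata:
      
        #include <stdbool.h>
        #include <stdint.h>
        
        #define FLAG_INSN_IS_JUMP_DST 0x1   /* set by the jump-destination pass */
        
        struct insn_meta {
                uint32_t flags;
        };
        
        /* Combining is only safe if no insn after the head of the sequence is
         * a jump target; otherwise a jump could land in the middle of the
         * combined, cross-basic-block sequence. */
        static bool can_combine_with_prev(const struct insn_meta *meta)
        {
                return !(meta->flags & FLAG_INSN_IS_JUMP_DST);
        }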
      Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • nfp: bpf: record jump destination to simplify jump fixup · 5b674140
      Authored by Jiong Wang
      eBPF insns are internally organized as a doubly linked list inside the
      NFP offload JIT. Random access to an insn has to be done by either
      forward or backward traversal along the list.
      
      One place we need to do such traversal is in nfp_fixup_branches, where
      one traversal is needed for each jump insn to find the destination. Such
      traversals could be avoided if jump destinations were collected through a
      single traversal in a pre-scan pass, and that information could also be
      useful in other places where jump destination info is needed.
      
      This patch adds such jump destination collection in nfp_prog_prepare.
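      
      A hedged, standalone sketch of such a pre-scan over a plain insn array
      (the driver walks its own metadata list in nfp_prog_prepare; the struct
      below keeps only the fields used here):
      
        #include <stddef.h>
        #include <stdint.h>
        
        /* minimal stand-in for struct bpf_insn */
        struct insn {
                uint8_t code;
                int16_t off;    /* jump offset, relative to the next insn */
        };
        
        #define EBPF_CLASS(code)  ((code) & 0x07)
        #define EBPF_OP(code)     ((code) & 0xf0)
        #define EBPF_JMP          0x05
        #define EBPF_CALL         0x80
        #define EBPF_EXIT         0x90
        
        /* mark is_jump_dst[i] for every insn i that some jump can land on */
        static void collect_jump_dsts(const struct insn *insns, size_t cnt,
                                      uint8_t *is_jump_dst)
        {
                for (size_t i = 0; i < cnt; i++) {
                        long long dst;
        
                        if (EBPF_CLASS(insns[i].code) != EBPF_JMP)
                                continue;
                        if (EBPF_OP(insns[i].code) == EBPF_CALL ||
                            EBPF_OP(insns[i].code) == EBPF_EXIT)
                                continue;
                        dst = (long long)i + insns[i].off + 1;
                        if (dst >= 0 && (size_t)dst < cnt)
                                is_jump_dst[dst] = 1;
                }
        }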
      Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • nfp: bpf: support backward jump · 854dc87d
      Authored by Jiong Wang
      This patch adds support for backward jump on NFP.
      
        - restrictions on backward jump in various functions have been removed.
        - nfp_fixup_branches now supports backward jump.
      
      There is one thing to note: currently, an input eBPF JMP insn may
      generate several NFP insns, for example,
      
        NFP imm move insn A \
        NFP compare insn  B  --> 3 NFP insn jited from eBPF JMP insn M
        NFP branch insn   C /
        ---
        NFP insn X           --> 1 NFP insn jited from eBPF insn N
        ---
        ...
      
      therefore, we do a sanity check to make sure the last jited insn from an
      eBPF JMP is an NFP branch instruction.
      
      Once backward jumps are allowed, it is possible for an eBPF JMP insn to
      be at the end of the program. This, however, causes trouble for the
      sanity check, because the check needs the end index of the NFP insns
      jited from one eBPF insn, while before this patch only the start index is
      recorded, so the end index can only be obtained as:
      
        start_index_of_the_next_eBPF_insn - 1
      
      or for the above example:
      
        start_index_of_eBPF_insn_N (which is the index of NFP insn X) - 1
      
      nfp_fixup_branches was using nfp_for_each_insn_walk2 to expose the *next*
      insn to each iteration of the traversal, so the last index could be
      calculated from it. Now, it would need some extra code to handle the last
      insn. Meanwhile, the use of walk2 is actually unnecessary: we could
      simply use the generic single-instruction walk, as the next insn can
      easily be obtained using list_next_entry.
      
      So, this patch migrates the jump fixup traversal to *list_for_each_entry*,
      which simplifies the code logic a little.
      
      The other thing to note is that a new state variable, "last_bpf_off", is
      introduced to track the index of the last NFP insn jited from the eBPF
      program. This is necessary because the NFP JIT generates special-purpose
      epilogue sequences, so the index of the last jited NFP insn is *not*
      always nfp_prog->prog_len - 1.
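      
      A hedged sketch of the resulting single-insn fixup walk, over a stand-in
      singly linked metadata list; the names, the list shape and the omitted
      patching body are all placeholders:
      
        #include <stdint.h>
        
        struct insn_meta {
                struct insn_meta *next; /* stand-in for the driver's list linkage */
                uint32_t off;           /* index of the first NFP insn jited from
                                         * this eBPF insn */
                int is_jump;
        };
        
        static void fixup_branches(struct insn_meta *head, uint32_t last_bpf_off)
        {
                for (struct insn_meta *meta = head; meta; meta = meta->next) {
                        uint32_t br_idx;
        
                        if (!meta->is_jump)
                                continue;
                        /* end index of the NFP insns jited from this eBPF insn:
                         * one before the start of the next eBPF insn, or
                         * last_bpf_off if this is the final eBPF insn */
                        br_idx = meta->next ? meta->next->off - 1 : last_bpf_off;
        
                        /* ... sanity-check that the insn at br_idx is an NFP
                         * branch and patch its destination here ... */
                        (void)br_idx;
                }
        }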
      Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  11. 05 Nov 2017 (8 commits)
  12. 27 Oct 2017 (1 commit)
  13. 24 Oct 2017 (3 commits)
  14. 15 Oct 2017 (1 commit)
  15. 10 Oct 2017 (3 commits)