1. 10 Jan, 2018 (1 commit)
  2. 15 Dec, 2017 (3 commits)
  3. 02 Dec, 2017 (9 commits)
    • nfp: bpf: detect load/store sequences lowered from memory copy · 6bc7103c
      Jiong Wang committed
      This patch adds the optimization frontend by adding a new eBPF IR scan
      pass "nfp_bpf_opt_ldst_gather".
      
      The pass traverses the IR to recognize the load/store pair sequences
      that come from the lowering of memory copy builtins.
      
      The gathered memory copy information will be kept in the meta info
      structure of the first load instruction in the sequence and will be
      consumed by the optimization backend added in the previous patches.
      
      NOTE: a sequence with cross memory access doesn't qualify for this
      optimization, i.e. if one load in the sequence loads from a place that
      has been written by a previous store. This is because when we turn the
      sequence into a single CPP operation, we read all contents at once into
      NFP transfer registers, then write them out as a whole. This is not
      identical to what the original load/store sequence does.
      
      Detecting cross memory access for two arbitrary pointers is difficult.
      Fortunately, under XDP/eBPF's restricted runtime environment, the copy
      normally happens among map, packet data and stack, which do not overlap
      with each other.
      
      And for the cases supported by NFP, cross memory access will only
      happen on PTR_TO_PACKET. Fortunately, there is ID information with
      which we can do an accurate memory alias check.
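The pairing and overlap rule described above can be sketched as follows (an illustrative Python model of the pass's logic, not the driver's actual C; the instruction representation and helper names are invented for the sketch):

```python
# Illustrative model of the load/store gathering disqualification rule.
# An "insn" is (kind, addr, size): kind is "load" or "store", addr the
# start offset within its memory region, size in bytes. The real pass
# (nfp_bpf_opt_ldst_gather) works on eBPF instructions in C.

def overlaps(a_start, a_size, b_start, b_size):
    """True if the byte ranges [start, start+size) intersect."""
    return a_start < b_start + b_size and b_start < a_start + a_size

def gather_ok(seq):
    """A lowered memcpy is a list of (load, store) pairs. The sequence
    qualifies only if no load reads bytes written by an earlier store
    (cross memory access), because the single CPP operation reads
    everything first, then writes everything out as a whole."""
    written = []  # (addr, size) ranges already stored to
    for load, store in seq:
        _, ld_addr, ld_size = load
        _, st_addr, st_size = store
        if any(overlaps(ld_addr, ld_size, a, s) for a, s in written):
            return False  # a load observes an earlier store: disqualify
        written.append((st_addr, st_size))
    return True
```

In this model, a copy whose source and destination ranges do not intersect qualifies, while one where a later load reads a previously stored byte does not; the real check only matters when both pointers may alias, e.g. two PTR_TO_PACKET accesses.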
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      6bc7103c
    • nfp: bpf: implement memory bulk copy for length bigger than 32-bytes · 8c900538
      Jiong Wang committed
      When the gathered copy length is bigger than 32 bytes and within 128
      bytes (the maximum length a single CPP Pull/Push request can finish),
      the read/write strategy is changed to:
      
        * Read.
            - use direct reference mode when the length is within 32 bytes.
            - use indirect mode when the length is bigger than 32 bytes.

        * Write.
            - length <= 8 bytes:
              use write8 (direct_ref).
            - length <= 32 bytes and 4-byte aligned:
              use write32 (direct_ref).
            - length <= 32 bytes but not 4-byte aligned:
              use write8 (indirect_ref).
            - length > 32 bytes and 4-byte aligned:
              use write32 (indirect_ref).
            - length > 32 bytes, not 4-byte aligned, and <= 40 bytes:
              use write32 (direct_ref) to finish the first 32 bytes, then
              use write8 (direct_ref) to finish the remaining trailing part.
            - length > 32 bytes and not 4-byte aligned:
              use write32 (indirect_ref) to finish the 4-byte-aligned part,
              then use write8 (direct_ref) to finish the remaining trailing part.
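The write-mode selection above can be sketched as a small decision function (illustrative Python mirroring the commit-message strategy; the mode names follow the text, not the driver's actual C identifiers):

```python
# Illustrative selection of CPP write steps for a gathered copy of
# `length` bytes (assumed 1..128, the maximum a single CPP Pull/Push
# request can finish), following the strategy in the commit message.
# Reads are direct_ref when length <= 32, indirect otherwise.

def pick_write_modes(length):
    """Return the ordered list of (op, ref_mode) write steps."""
    aligned = length % 4 == 0
    if length <= 8:
        return [("write8", "direct_ref")]
    if length <= 32:
        if aligned:
            return [("write32", "direct_ref")]
        return [("write8", "indirect_ref")]
    if aligned:
        return [("write32", "indirect_ref")]
    if length <= 40:
        # first 32 bytes with one direct write32, tail with direct write8
        return [("write32", "direct_ref"), ("write8", "direct_ref")]
    # 4-byte-aligned body via indirect write32, tail via direct write8
    return [("write32", "indirect_ref"), ("write8", "direct_ref")]
```

For example, a 35-byte copy falls in the special unaligned <= 40 case and takes a direct write32 followed by a direct write8, while a 50-byte unaligned copy needs the indirect write32 for its aligned body.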
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      8c900538
    • nfp: bpf: implement memory bulk copy for length within 32-bytes · 9879a381
      Jiong Wang committed
      For NFP, we want to re-group a sequence of load/store pairs lowered from
      memcpy/memmove into a single memory bulk operation, which can then be
      accelerated using the NFP CPP bus.
      
      This patch extends the existing load/store auxiliary information by adding
      two new fields:
      
      	struct bpf_insn *paired_st;
      	s16 ldst_gather_len;
      
      Both fields are supposed to be carried by the load instruction at the
      head of the sequence. "paired_st" is the corresponding store instruction
      and "ldst_gather_len" is the gathered length.
      
      If "ldst_gather_len" is negative, then the sequence is doing memory
      load/store in descending order, otherwise it is in ascending order. We need
      this information to detect overlapped memory access.
      
      This patch then optimizes memory bulk copy when the copy length is
      within 32 bytes.
      
      The strategy of read/write used is:
      
        * Read.
          Use read32 (direct_ref), always.
      
        * Write.
          - length <= 8-bytes
            write8 (direct_ref).
          - length <= 32-bytes and is 4-byte aligned
            write32 (direct_ref).
          - length <= 32-bytes but is not 4-byte aligned
            write8 (indirect_ref).
      
      NOTE: the optimization should not change program semantics. The destination
      register of the last load instruction should contain the same value before
      and after this optimization.
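A minimal sketch of the new metadata and the sign convention of "ldst_gather_len" (illustrative Python; only the two field names come from the commit, the surrounding structure is invented for the sketch):

```python
# Illustrative model of the per-insn auxiliary info added by this
# patch. In the driver these are C fields on the insn meta structure.
from dataclasses import dataclass

@dataclass
class InsnMeta:
    paired_st: object = None   # store insn paired with the head load
    ldst_gather_len: int = 0   # signed gathered copy length in bytes

def copy_direction(meta):
    """Negative ldst_gather_len means the lowered copy walks memory in
    descending address order; positive means ascending. This direction
    info is what the overlap detection needs."""
    if meta.ldst_gather_len == 0:
        return None  # not the head of a gathered sequence
    return "descending" if meta.ldst_gather_len < 0 else "ascending"
```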
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      9879a381
    • nfp: bpf: encode indirect commands · 5468a8b9
      Jakub Kicinski committed
      Add support for emitting commands with field overwrites.
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      5468a8b9
    • nfp: bpf: correct the encoding for No-Dest immed · 3239e7bb
      Jiong Wang committed
      When immed is used with No-Dest, the emitter should use reg.dst instead
      of reg.areg for the destination; using the latter will actually encode
      register zero.
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      3239e7bb
    • nfp: bpf: don't do ld/shifts combination if shifts are jump destination · 29fe46ef
      Jiong Wang committed
      If any of the shift insns in the ld/shift sequence is a jump
      destination, don't do the combination.
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      29fe46ef
    • nfp: bpf: don't do ld/mask combination if mask is jump destination · 1266f5d6
      Jiong Wang committed
      If the mask insn in the ld/mask pair is a jump destination, don't do the
      combination.
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      1266f5d6
    • nfp: bpf: record jump destination to simplify jump fixup · 5b674140
      Jiong Wang committed
      eBPF insns are organized internally as a dual list inside the NFP
      offload JIT. Random access to an insn needs to be done by either
      forward or backward traversal along the list.
      
      One place we need such a traversal is nfp_fixup_branches, where one
      traversal is needed for each jump insn to find its destination. These
      traversals could be avoided if jump destinations were collected through
      a single traversal in a pre-scan pass, and such information could also
      be useful in other places where jump destination info is needed.
      
      This patch adds such jump destination collection in nfp_prog_prepare.
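The pre-scan collection can be sketched like this (an illustrative Python model with a simplified instruction form; the real pass works on eBPF encodings in C inside nfp_prog_prepare):

```python
# Illustrative single pre-scan pass collecting jump destinations.
# Instructions are a simplified (opcode, jump_offset) form; real eBPF
# encoding differs, but the idea is the same: one forward walk marks
# every destination so later passes (e.g. branch fixup) can look them
# up directly instead of re-traversing the list once per jump.

def collect_jump_destinations(insns):
    """Return the set of instruction indices that are jump targets."""
    dests = set()
    for i, (opcode, off) in enumerate(insns):
        if opcode == "jmp":
            dests.add(i + off + 1)  # eBPF jump offsets are relative to pc+1
    return dests
```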
      Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      5b674140
    • nfp: bpf: support backward jump · 854dc87d
      Jiong Wang committed
      This patch adds support for backward jump on NFP.
      
        - restrictions on backward jump in various functions have been removed.
        - nfp_fixup_branches now supports backward jump.
      
      There is one thing to note: currently, an input eBPF JMP insn may
      generate several NFP insns, for example,
      
        NFP imm move insn A \
        NFP compare insn  B  --> 3 NFP insn jited from eBPF JMP insn M
        NFP branch insn   C /
        ---
        NFP insn X           --> 1 NFP insn jited from eBPF insn N
        ---
        ...
      
      therefore, we do a sanity check to make sure the last jited insn from an
      eBPF JMP is an NFP branch instruction.
      
      Once backward jump is allowed, it is possible for an eBPF JMP insn to be
      at the end of the program. This however causes trouble for the sanity
      check, because the check requires the end index of the NFP insns jited
      from one eBPF insn, while only the start index was recorded before this
      patch, so the end index could only be obtained as:
      
        start_index_of_the_next_eBPF_insn - 1
      
      or for the above example:
      
        start_index_of_eBPF_insn_N (which is the index of NFP insn X) - 1
      
      nfp_fixup_branches was using nfp_for_each_insn_walk2 to expose the
      *next* insn to each iteration of the traversal, so the last index could
      be calculated from it. Now, some extra code is needed to handle the last
      insn. Meanwhile, the use of walk2 is actually unnecessary; we could
      simply use a generic single-instruction walk, as the next insn can
      easily be obtained using list_next_entry.

      So, this patch migrates the jump fixup traversal to
      *list_for_each_entry*, which simplifies the code logic a little.
      
      The other thing to note is that a new state variable "last_bpf_off" is
      introduced to track the index of the last jited NFP insn. This is
      necessary because NFP generates special-purpose epilogue sequences, so
      the index of the last jited NFP insn is *not* always
      nfp_prog->prog_len - 1.
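The end-index calculation described above can be sketched as (illustrative Python; "last_bpf_off" is modeled on the commit, the rest of the names are invented):

```python
# Illustrative end-index computation: with only start offsets recorded,
# the last NFP insn jited from eBPF insn i ends right before the start
# of insn i+1; for the final eBPF insn, the new last_bpf_off state is
# used instead, since the program's last jited insn is not simply
# prog_len - 1 once the epilogue sequence is emitted.

def jited_end_off(starts, i, last_bpf_off):
    """starts[i] is the NFP offset of the first insn jited from eBPF
    insn i; return the offset of the last insn jited from it."""
    if i + 1 < len(starts):
        return starts[i + 1] - 1
    return last_bpf_off
```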
      Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      854dc87d
  4. 05 Nov, 2017 (5 commits)
  5. 02 Nov, 2017 (2 commits)
  6. 24 Oct, 2017 (8 commits)
  7. 15 Oct, 2017 (10 commits)
  8. 10 Oct, 2017 (2 commits)