1. 07 Jul 2017: 1 commit
  2. 05 Jul 2017: 25 commits
  3. 04 Jul 2017: 7 commits
  4. 03 Jul 2017: 7 commits
    • E
      net: make sk_ehashfn() static · 784c372a
      Authored by Eric Dumazet
      sk_ehashfn() is only used from a single file.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      784c372a
    • J
      vxlan: fix hlist corruption · 69e76661
      Authored by Jiri Benc
      It's not a good idea to add the same hlist_node to two different
      hash lists; doing so leads to various hard-to-debug memory
      corruptions.
      
      Fixes: b1be00a6 ("vxlan: support both IPv4 and IPv6 sockets in a single vxlan device")
      Signed-off-by: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      69e76661
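The bug class fixed here can be reproduced in a few lines. The sketch below is a userspace re-creation of the kernel's hlist primitives (not the kernel code itself): adding one hlist_node to a second list silently leaves the first list's forward pointer intact while the node's back-pointer now aims into the second list, so a later deletion splices the wrong list.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal userspace model of the kernel's hlist types. */
struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };

static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
    struct hlist_node *first = h->first;

    n->next = first;
    if (first)
        first->pprev = &n->next;
    h->first = n;
    /* The node can only remember ONE back-pointer; a second
     * hlist_add_head() overwrites it, corrupting the first list. */
    n->pprev = &h->first;
}
```

A node linked into list `a` and then erroneously into list `b` is still reachable from `a`, but its `pprev` points into `b`, which is exactly the inconsistent state behind the corruption.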
    • D
      bpf: simplify narrower ctx access · f96da094
      Authored by Daniel Borkmann
      This work tries to make the semantics and the code around
      narrower ctx access easier to follow. Right now everything is
      done inside .is_valid_access(). Offset matching is done
      differently for read and write types: writes don't support
      narrower access, so matching only on offsetof(struct foo, bar)
      is enough, whereas for reads that do support narrower access we
      must check the whole range from offsetof(struct foo, bar) to
      offsetof(struct foo, bar) + sizeof(<bar>) - 1 for each of the
      cases. For reads of individual members that don't support
      narrower access (like packet pointers, or the skb->cb[] case
      which has its own narrow-access logic), we check only
      offsetof(struct foo, bar), as in the write case. Then, for the
      case where narrower access is allowed, we also need to set the
      aux info for the access, meaning ctx_field_size and
      converted_op_size have to be set. The former is the original
      field size, e.g. sizeof(<bar>) as in the example above, from the
      user-facing ctx; the latter is the target size after the actual
      rewrite happened, thus for the kernel-facing ctx. Here, too, we
      need the range match, and we must keep convert_ctx_access() and
      the converted_op_size set in is_valid_access() in sync, as the
      two are not at the same location.
      
      We can simplify the code a bit: check_ctx_access() becomes
      simpler in that we only store ctx_field_size as meta data, and
      later in convert_ctx_accesses() we fetch the target_size right
      at the location where the conversion is done. Should the
      verifier be misconfigured, we reject BPF_WRITE cases or a
      target_size that is not provided. For the subsystems, we always
      work on ranges in is_valid_access() with small helpers for
      ranges and narrow access, and convert_ctx_accesses() sets
      target_size for the relevant instruction.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Cc: Yonghong Song <yhs@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f96da094
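The read-side range check described above can be sketched as a small predicate. `struct foo` and `narrow_access_ok()` are hypothetical stand-ins, not the verifier's actual code: a narrow read at `[off, off + size)` is valid when it stays inside the field's byte range.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a subsystem ctx struct. */
struct foo {
    unsigned int a;
    unsigned int bar;
};

/* A narrow read at [off, off + size) is allowed when it lies fully
 * within [field_off, field_off + field_size - 1] -- the range the
 * commit message describes for is_valid_access(). */
static bool narrow_access_ok(size_t off, size_t size,
                             size_t field_off, size_t field_size)
{
    return off >= field_off && off + size <= field_off + field_size;
}
```

A full-width read of `bar` and a one-byte read of its last byte both pass, while a two-byte read starting at the last byte crosses the field boundary and is rejected.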
    • D
      bpf: add bpf_skb_adjust_room helper · 2be7e212
      Authored by Daniel Borkmann
      This work adds a helper that can be used to adjust the net room
      of an skb. The helper is generic and can be extended further in
      future. The main use case is having a programmatic way to add or
      remove room for v4/v6 header options along with cls_bpf on the
      egress and ingress hooks of the data path. It reuses most of the
      infrastructure that we added for the bpf_skb_change_proto()
      helper, which can be used in nat64 translations. Similarly, the
      helper only takes care of adjusting the room; populating the
      related data and adapting the csum is left to the BPF program
      using it.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2be7e212
    • D
      bpf, net: add skb_mac_header_len helper · 0daf4349
      Authored by Daniel Borkmann
      Add a small skb_mac_header_len() helper, similar to the existing
      skb_network_header_len(), and replace open-coded places in BPF's
      bpf_skb_change_proto() helper. It will also be used in upcoming
      work.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0daf4349
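The helper's arithmetic is simple enough to model in userspace: the MAC header length is the distance from the MAC header start to the network header start. `struct skb_model` and its fields are stand-ins for the real sk_buff header offsets, not kernel code.

```c
#include <assert.h>

/* Stand-ins for sk_buff's mac_header / network_header offsets. */
struct skb_model {
    unsigned short mac_header;
    unsigned short network_header;
};

/* Mirrors the idea of the new helper: MAC header length is the gap
 * between the MAC header start and the network header start. */
static unsigned int skb_mac_header_len(const struct skb_model *skb)
{
    return (unsigned int)(skb->network_header - skb->mac_header);
}
```

For a plain Ethernet frame this yields 14 bytes; with a single VLAN tag, 18.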
    • L
      net: phy: Add phy loopback support in net phy framework · f0f9b4ed
      Authored by Lin Yun Sheng
      This patch adds set_loopback to phy_driver, which is used by the
      MAC driver to enable or disable phy loopback. It also adds a
      generic genphy_loopback function, which uses the BMCR loopback
      bit to enable or disable loopback.
      Signed-off-by: Lin Yun Sheng <linyunsheng@huawei.com>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f0f9b4ed
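What a genphy_loopback-style helper does to the BMCR value can be sketched as a pure function on the register word. The MDIO register read/write is omitted and `set_bmcr_loopback()` is a hypothetical name; BMCR_LOOPBACK (0x4000, bit 14) matches the IEEE 802.3 basic mode control register layout.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit 14 of the PHY's Basic Mode Control Register (IEEE 802.3). */
#define BMCR_LOOPBACK 0x4000

/* Sketch of the register-value manipulation behind genphy_loopback:
 * set or clear the loopback bit, leaving all other bits untouched. */
static uint16_t set_bmcr_loopback(uint16_t bmcr, bool enable)
{
    return enable ? (uint16_t)(bmcr | BMCR_LOOPBACK)
                  : (uint16_t)(bmcr & ~BMCR_LOOPBACK);
}
```

For example, a typical BMCR value of 0x1140 (autoneg, full duplex, 1000 Mb/s) becomes 0x5140 with loopback enabled.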
    • J
      net: cdc_ncm: Reduce memory use when kernel memory low · e1069bbf
      Authored by Jim Baxter
      The CDC-NCM driver can require large amounts of memory to create
      skbs, and this can be a problem when the memory becomes
      fragmented.
      
      This especially affects embedded systems that have constrained
      resources but wish to maximise the throughput of CDC-NCM with
      16KiB NTBs.
      
      The issue is that after running for a while the kernel memory
      can become fragmented and needs compacting. If an NTB allocation
      is needed before the memory has been compacted, the atomic
      allocation can fail, which can cause increased latency, large
      re-transmissions or disconnections, depending upon the data
      being transmitted at the time. This situation lasts for less
      than a second, until the kernel has compacted the memory, but
      the affected devices can take a lot longer to recover from the
      failed TX packets.
      
      To ease this temporary situation, I modified the CDC-NCM TX path
      to temporarily switch into a reduced memory mode, which
      allocates an NTB that fits into a USB_CDC_NCM_NTB_MIN_OUT_SIZE
      (default 2048 bytes) sized memory block and only transmits NTBs
      carrying a single network frame until the memory situation is
      resolved. Each time this issue occurs, we wait for an increasing
      number of reduced-size allocations before requesting a full-size
      one, so as not to put additional pressure on a low-memory
      system.
      
      Once the memory is compacted, CDC-NCM can resume transmitting at
      the normal tx_max rate again.
      Signed-off-by: Jim Baxter <jim_baxter@mentor.com>
      Reviewed-by: Bjørn Mork <bjorn@mork.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e1069bbf
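The wait-an-increasing-number-of-reduced-allocations policy can be modelled as a small state machine. The names and the doubling policy below are illustrative assumptions, not the driver's exact code: after a failed full-size NTB allocation, a growing number of reduced-size allocations must happen before the next full-size attempt.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the backoff: `remaining` counts reduced-size
 * allocations still owed before the next full-size attempt. */
struct ntb_alloc_state {
    unsigned int backoff;   /* current backoff window */
    unsigned int remaining; /* reduced-size allocs left in window */
};

/* Decide per TX NTB: full-size (true) or reduced-size (false). */
static bool use_full_size(struct ntb_alloc_state *s)
{
    if (s->remaining) {
        s->remaining--;
        return false;
    }
    return true;
}

/* On a failed full-size allocation, grow the backoff window so a
 * low-memory system sees less allocation pressure each time. */
static void full_alloc_failed(struct ntb_alloc_state *s)
{
    s->backoff = s->backoff ? s->backoff * 2 : 1;
    s->remaining = s->backoff;
}
```

Under this model the driver tries full-size immediately, then after one failure does one reduced-size NTB before retrying, after two failures does two, and so on.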