1. 25 Jul 2016, 1 commit
  2. 21 Jul 2016, 1 commit
  3. 20 Jul 2016, 4 commits
  4. 16 Jul 2016, 1 commit
    • D
      bpf: avoid stack copy and use skb ctx for event output · 555c8a86
      Committed by Daniel Borkmann
      This work addresses a couple of issues the bpf_skb_event_output()
      helper currently has: i) We need two copies instead of just a
      single one for the skb data when it should be part of a sample.
      The data can be non-linear and thus needs to be extracted via
      the bpf_skb_load_bytes() helper first, and then copied once again
      into the ring buffer slot. ii) Since bpf_skb_load_bytes()
      currently needs to be used first, the helper needs to see a
      constant size on the passed stack buffer so that the BPF
      verifier can sanity-check it at verification time. Thus, just
      passing skb->len (or any other non-constant value) wouldn't
      work, but changing bpf_skb_load_bytes() is also not the proper
      solution, since the two copies are generally still needed.
      iii) bpf_skb_load_bytes() is just for rather small buffers like
      headers, since they need to sit on the limited BPF stack anyway.
      Instead of working around this in bpf_skb_load_bytes(), this
      work improves the bpf_skb_event_output() helper to address all
      three at once.
      
      We can make use of the passed-in skb context that we have in the
      helper anyway, and use some of the reserved flag bits as a length
      argument. The helper will use the new __output_custom() facility
      on the perf side with bpf_skb_copy() as the callback helper to
      walk and extract the data. It will pass the data for setup to
      bpf_event_output(), which generates and pushes the raw record
      with an additional frag part. The linear data used in the first
      frag of the record serves as programmatically defined metadata
      passed along with the appended sample.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      555c8a86
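      A hedged illustration of the resulting usage from a tc/cls_bpf program: the map,
      section name and meta struct below are made-up examples, and the length of skb
      data to append is assumed to be encoded in the upper 32 flag bits as the commit
      describes.

        /* Illustrative cls_bpf program: append the (possibly non-linear)
         * skb payload to a perf sample without staging it on the BPF stack. */
        #include <linux/bpf.h>
        #include <linux/pkt_cls.h>
        #include <bpf/bpf_helpers.h>

        struct {
                __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
                __uint(key_size, sizeof(int));
                __uint(value_size, sizeof(__u32));
        } events SEC(".maps");

        SEC("classifier")
        int sample_skb(struct __sk_buff *skb)
        {
                struct { __u32 ifindex, len; } meta = {  /* first-frag metadata */
                        .ifindex = skb->ifindex,
                        .len     = skb->len,
                };
                /* Upper flag bits: how many bytes of skb data to append. */
                __u64 flags = BPF_F_CURRENT_CPU | ((__u64)skb->len << 32);

                bpf_perf_event_output(skb, &events, flags, &meta, sizeof(meta));
                return TC_ACT_OK;
        }

        char _license[] SEC("license") = "GPL";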
  5. 15 Jul 2016, 1 commit
  6. 12 Jul 2016, 3 commits
    • X
      sctp: add SCTP_PR_ASSOC_STATUS on sctp sockopt · 826d253d
      Committed by Xin Long
      This patch adds SCTP_PR_ASSOC_STATUS to the sctp sockopts; it is used
      to dump the prsctp statistics from the asoc. The prsctp statistics
      include the abandoned_sent/unsent counters from the asoc: abandoned_sent
      is the count of packets we drop from the retransmit/transmitted
      queues, and abandoned_unsent is the count of packets we drop from
      the out_queue, according to the policy.
      
      Note: another option for dumping prsctp statistics described in the
      RFC is SCTP_PR_STREAM_STATUS, which dumps the prsctp statistics for
      each stream. But for now, Linux doesn't have per-stream statistics;
      that requires RFC 6525 to be implemented first. As the per-stream
      prsctp statistics have to be built on top of per-stream statistics,
      we will delay it until RFC 6525 is done in Linux.
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      826d253d
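      A hedged userspace sketch of reading these counters; the struct and field names
      (struct sctp_prstatus, sprstat_*) follow my reading of the uapi additions in this
      series and should be treated as assumptions.

        /* Sketch: query abandoned counters of one association for the ttl policy. */
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <netinet/sctp.h>

        static void dump_pr_status(int sd, sctp_assoc_t assoc_id)
        {
                struct sctp_prstatus ps;
                socklen_t len = sizeof(ps);

                memset(&ps, 0, sizeof(ps));
                ps.sprstat_assoc_id = assoc_id;
                ps.sprstat_policy   = SCTP_PR_SCTP_TTL;  /* counters for this policy */

                if (getsockopt(sd, IPPROTO_SCTP, SCTP_PR_ASSOC_STATUS, &ps, &len) == 0)
                        printf("abandoned sent %llu, unsent %llu\n",
                               (unsigned long long)ps.sprstat_abandoned_sent,
                               (unsigned long long)ps.sprstat_abandoned_unsent);
        }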
    • X
      sctp: add SCTP_DEFAULT_PRINFO into sctp sockopt · f959fb44
      Committed by Xin Long
      This patch adds SCTP_DEFAULT_PRINFO to the sctp sockopts. It is used
      to set/get the default params of sctp's Partially Reliable policies,
      which comprise three policies (ttl, rtx, prio) and their values.

      Still, if policy params are set in sndinfo, we apply the sndinfo
      params to the chunks instead of the default params.

      In this patch, we use bits 5-8 of sp/asoc->default_flags to store
      the prsctp policy, and reuse asoc->default_timetolive to store its
      value. This means that once a prsctp policy is enabled and set, the
      prior ttl timeout in sctp no longer applies.
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f959fb44
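      A hedged sketch of setting a default policy; struct sctp_default_prinfo and the
      SCTP_PR_SCTP_TTL constant are taken from my reading of the uapi additions and
      are assumptions here.

        /* Sketch: make timed reliability (ttl) with a 3000 ms lifetime the
         * default prsctp policy for chunks on this association. */
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <netinet/sctp.h>

        static int set_default_prinfo(int sd, sctp_assoc_t assoc_id)
        {
                struct sctp_default_prinfo info = {
                        .pr_assoc_id = assoc_id,
                        .pr_policy   = SCTP_PR_SCTP_TTL,  /* or the rtx/prio policies */
                        .pr_value    = 3000,              /* policy value, ms for ttl */
                };

                return setsockopt(sd, IPPROTO_SCTP, SCTP_DEFAULT_PRINFO,
                                  &info, sizeof(info));
        }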
    • X
      sctp: add SCTP_PR_SUPPORTED on sctp sockopt · 28aa4c26
      Committed by Xin Long
      According to section 4.5 of RFC 7496, prsctp_enable should be per asoc.
      We add prsctp_enable to both the asoc and the ep, and replace the
      places that used net.sctp->prsctp_enable with asoc->prsctp_enable.

      ep->prsctp_enable is initialized from net.sctp->prsctp_enable, and
      asoc->prsctp_enable is initialized from ep->prsctp_enable. Its value
      can also be modified through the SCTP_PR_SUPPORTED sockopt.
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      28aa4c26
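      A hedged sketch using the pre-existing struct sctp_assoc_value; treating
      assoc_id 0 as "apply to the endpoint / future associations" is an assumption.

        /* Sketch: turn prsctp support on for the endpoint (and future assocs). */
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <netinet/sctp.h>

        static int enable_prsctp(int sd)
        {
                struct sctp_assoc_value v = {
                        .assoc_id    = 0,   /* endpoint-wide default (assumption) */
                        .assoc_value = 1,   /* 1 = enabled, 0 = disabled */
                };

                return setsockopt(sd, IPPROTO_SCTP, SCTP_PR_SUPPORTED, &v, sizeof(v));
        }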
  7. 10 Jul 2016, 1 commit
  8. 09 Jul 2016, 1 commit
  9. 06 Jul 2016, 3 commits
  10. 05 Jul 2016, 2 commits
  11. 04 Jul 2016, 3 commits
  12. 03 Jul 2016, 1 commit
  13. 02 Jul 2016, 2 commits
  14. 30 Jun 2016, 8 commits
    • A
      tcp: add an ability to dump and restore window parameters · b1ed4c4f
      Committed by Andrey Vagin
      We found that sometimes a restored tcp socket doesn't work.

      One reason for this bug is incorrect window parameters: in this case
      tcp_acceptable_seq() returns tcp_wnd_end(tp) instead of tp->snd_nxt.
      The other side drops packets with this seq, because the seq is less
      than tp->rcv_nxt (see tcp_sequence()).

      Data from the send queue is sent only if there is enough space in
      the window, so when we restore unacked data, we need to expand the
      window to fit this data.

      That was the approach in the first version of this patch:
      "tcp: extend window to fit all restored unacked data in a send queue"

      Then Alexey recommended restoring the window parameters instead of
      adjusting them according to the data in the send queue. This sounds
      reasonable.

      rcv_wnd has to be restored, because it was reported to the other
      side and the offered window is never shrunk.
      One of the reasons why we need to restore snd_wnd was described above.
      
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      Cc: James Morris <jmorris@namei.org>
      Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
      Cc: Patrick McHardy <kaber@trash.net>
      Signed-off-by: Andrey Vagin <avagin@openvz.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b1ed4c4f
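      A hedged checkpoint/restore-style sketch; it assumes the TCP_REPAIR_WINDOW
      option and struct tcp_repair_window added by this change, the socket being
      already in TCP_REPAIR mode, and the helper names are illustrative.

        /* Sketch: save window parameters from the source socket and write them
         * back into the restored one while it is in repair mode. */
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <linux/tcp.h>

        static int dump_window(int sd, struct tcp_repair_window *w)
        {
                socklen_t len = sizeof(*w);

                return getsockopt(sd, IPPROTO_TCP, TCP_REPAIR_WINDOW, w, &len);
        }

        static int restore_window(int sd, const struct tcp_repair_window *w)
        {
                /* rcv_wnd must match what was already offered to the peer;
                 * snd_wnd/snd_wl1 restore our view of the peer's window. */
                return setsockopt(sd, IPPROTO_TCP, TCP_REPAIR_WINDOW, w, sizeof(*w));
        }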
    • M
      fuse: serialize dirops by default · 5c672ab3
      Committed by Miklos Szeredi
      Negotiate with userspace filesystems whether they support parallel readdir
      and lookup.  Disable parallelism by default for fear of breaking fuse
      filesystems.
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
      Fixes: 9902af79 ("parallel lookups: actual switch to rwsem")
      Fixes: d9b3dbdc ("fuse: switch to ->iterate_shared()")
      5c672ab3
    • N
      net: bridge: add support for IGMP/MLD stats and export them via netlink · 1080ab95
      Committed by Nikolay Aleksandrov
      This patch adds stats support for the IGMP/MLD types currently used by
      the bridge. The stats are per-port (plus one stat per bridge) and
      per-direction (RX/TX). The stats are exported via netlink through the
      new linkxstats API (RTM_GETSTATS). In order to minimize the performance
      impact, a new option is used to enable/disable the stats -
      multicast_stats_enabled, similar to the recent vlan stats. Also, in
      order to avoid multiple IGMP/MLD type lookups and checks, we make use
      of the current "igmp" member of the bridge's private skb->cb region to
      record the type on RX (both host-generated and external packets pass
      through multicast_rcv()). We can do that since the igmp member was used
      as a boolean and all the valid IGMP/MLD types are positive values. The
      normal bridge fast path is not affected at all; the only affected paths
      are the flooding ones, and since we make use of the IGMP/MLD type, we
      can quickly determine if the packet should be counted using cache-hot
      data (the cb's igmp member). We add counters for:
      * IGMP Queries
      * IGMP Leaves
      * IGMP v1/v2/v3 reports
      
      * MLD Queries
      * MLD Leaves
      * MLD v1/v2 reports
      
      These are invaluable when monitoring or debugging complex multicast setups
      with bridges.
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1080ab95
    • N
      net: rtnetlink: add support for the IFLA_STATS_LINK_XSTATS_SLAVE attribute · 80e73cc5
      Committed by Nikolay Aleksandrov
      This patch adds support for the IFLA_STATS_LINK_XSTATS_SLAVE attribute,
      which allows exporting per-slave statistics if the master device
      supports the linkxstats callback. The attribute is passed down to the
      linkxstats callback and it is up to the callback user to use it (an
      example has been added to the only current user - the bridge). This
      allows us to query only specific slaves of master devices, such as
      bridge ports, and export only what we're interested in, instead of
      having to dump all ports and search for a single one. This will be
      used to export per-port IGMP/MLD stats and also per-port vlan stats in
      the future, and possibly other statistics as well.
      Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      80e73cc5
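      A hedged sketch of the request side, which would also fetch the per-port bridge
      multicast counters from the previous commit; the interface name and the minimal
      error handling are illustrative.

        /* Sketch: ask for slave (bridge port) xstats of one interface via
         * RTM_GETSTATS with the new IFLA_STATS_LINK_XSTATS_SLAVE filter bit. */
        #include <string.h>
        #include <unistd.h>
        #include <net/if.h>
        #include <sys/socket.h>
        #include <linux/netlink.h>
        #include <linux/rtnetlink.h>
        #include <linux/if_link.h>

        static int request_slave_xstats(const char *port)
        {
                struct {
                        struct nlmsghdr     nlh;
                        struct if_stats_msg ifsm;
                } req;
                int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

                if (fd < 0)
                        return -1;

                memset(&req, 0, sizeof(req));
                req.nlh.nlmsg_len    = NLMSG_LENGTH(sizeof(req.ifsm));
                req.nlh.nlmsg_type   = RTM_GETSTATS;
                req.nlh.nlmsg_flags  = NLM_F_REQUEST;
                req.ifsm.family      = AF_UNSPEC;
                req.ifsm.ifindex     = if_nametoindex(port);
                req.ifsm.filter_mask =
                        IFLA_STATS_FILTER_BIT(IFLA_STATS_LINK_XSTATS_SLAVE);

                /* The reply nests IFLA_STATS_LINK_XSTATS_SLAVE attributes filled
                 * in by the port's master, e.g. the bridge multicast counters. */
                if (send(fd, &req, req.nlh.nlmsg_len, 0) < 0) {
                        close(fd);
                        return -1;
                }
                return fd;   /* caller recv()s and parses the reply */
        }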
    • D
      bpf: add bpf_skb_change_type helper · d2485c42
      Committed by Daniel Borkmann
      This work adds a helper for changing skb->pkt_type in a controlled way.
      We only allow a subset of possible values and can extend that in the
      future should other use cases come up. Doing this as a helper has the
      advantage that errors can be handled gracefully, and the helper thus
      kept extensible.

      It's a write counterpart to the pkt_type member we can already read
      from the struct __sk_buff context. The major use case is to change
      incoming skbs to PACKET_HOST in a programmatic way instead of having
      to recirculate via redirect(..., BPF_F_INGRESS), for example.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d2485c42
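      A hedged sketch of the intended use from tc/cls_bpf; the section name and the
      policy of when to flip the type are illustrative.

        /* Sketch: mark traffic we have decided is for the local host as
         * PACKET_HOST instead of bouncing it via redirect(..., BPF_F_INGRESS). */
        #include <linux/bpf.h>
        #include <linux/if_packet.h>
        #include <linux/pkt_cls.h>
        #include <bpf/bpf_helpers.h>

        SEC("classifier")
        int set_host_type(struct __sk_buff *skb)
        {
                /* Only a small, vetted subset of pkt_type values is accepted. */
                if (skb->pkt_type != PACKET_HOST)
                        bpf_skb_change_type(skb, PACKET_HOST);

                return TC_ACT_OK;
        }

        char _license[] SEC("license") = "GPL";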
    • D
      bpf: add bpf_skb_change_proto helper · 6578171a
      Committed by Daniel Borkmann
      This patch adds a minimal helper for doing the groundwork of changing
      skb->protocol in a controlled way. Currently supported are v4-to-v6
      and vice-versa transitions, which allow, for example, a minimal,
      static nat64 implementation where applications in containers that
      still require IPv4 can be transparently operated in an IPv6-only
      environment. For example, the host-facing veth of the container can
      transparently do the transitions in a programmatic way with the help
      of the clsact qdisc and cls_bpf.

      The idea is to separate concerns to keep the complexity of the helper
      lower, which means that programs utilize bpf_skb_change_proto(),
      bpf_skb_store_bytes() and bpf_lX_csum_replace() to get the job done,
      instead of doing everything in a single helper (and thus partially
      duplicating helper functionality). Also, bpf_skb_change_proto()
      shouldn't need to deal with raw packet data, as this is done by other
      helpers.

      bpf_skb_proto_6_to_4() and bpf_skb_proto_4_to_6() unclone the skb to
      operate on a private one, push or pop the additionally required header
      space and migrate the gso/gro metadata from the shared info. We do
      mark the gso type as dodgy so that headers are checked and segs
      recalculated by the gso/gro engine. The gso_size target is adapted
      as well. The flags argument added is currently reserved and can be
      used for future extensions.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6578171a
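      A heavily condensed, hedged sketch of the division of labor described above; the
      offsets and the translated address are placeholders, not a working nat64.

        /* Sketch: convert a v4 skb to v6, then let the other helpers rewrite
         * headers and checksums. Placeholder values only. */
        #include <stddef.h>
        #include <linux/bpf.h>
        #include <linux/if_ether.h>
        #include <linux/ipv6.h>
        #include <linux/pkt_cls.h>
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_endian.h>

        SEC("classifier")
        int v4_to_v6(struct __sk_buff *skb)
        {
                struct in6_addr saddr = {};   /* translated source, placeholder */

                /* 1) Resize the header space and flip skb->protocol to IPv6. */
                if (bpf_skb_change_proto(skb, bpf_htons(ETH_P_IPV6), 0))
                        return TC_ACT_SHOT;

                /* 2) Fill in the new header with bpf_skb_store_bytes() and fix
                 *    L4 checksums with bpf_l4_csum_replace() as needed. */
                bpf_skb_store_bytes(skb, ETH_HLEN + offsetof(struct ipv6hdr, saddr),
                                    &saddr, sizeof(saddr), 0);

                return TC_ACT_OK;
        }

        char _license[] SEC("license") = "GPL";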
    • D
      bpf, trace: add BPF_F_CURRENT_CPU flag for bpf_perf_event_read · 6816a7ff
      Committed by Daniel Borkmann
      Follow-up to commit 1e33759c ("bpf, trace: add BPF_F_CURRENT_CPU
      flag for bpf_perf_event_output") that adds the same functionality to
      the bpf_perf_event_read() helper. The split of the index argument
      into flags and index components is also safe here, since such large
      maps are rejected at map allocation time.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6816a7ff
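      A hedged sketch; the map, the attach point and the way the value is consumed
      are illustrative.

        /* Sketch: read this CPU's slot of a perf event array without computing
         * the CPU index manually. */
        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        struct {
                __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
                __uint(key_size, sizeof(int));
                __uint(value_size, sizeof(__u32));
        } counters SEC(".maps");

        SEC("kprobe/placeholder_probe")
        int on_event(void *ctx)
        {
                __u64 val = bpf_perf_event_read(&counters, BPF_F_CURRENT_CPU);

                if ((__s64)val < 0)
                        return 0;   /* error codes are returned in-band */

                /* ... use the hardware counter value ... */
                return 0;
        }

        char _license[] SEC("license") = "GPL";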
    • D
      Input: add SW_PEN_INSERTED define · 9a9b6aa6
      Committed by Douglas Anderson
      Some devices with a pen may have a switch that can be used to detect
      when the pen is inserted into or removed from a slot on the device.
      Let's add a define to the input event codes so that everyone can be
      on the same page as to what event we should generate when the pen is
      inserted or removed.

      In general, the pen switch could be used by the software on the
      device to kick off any number of actions when the pen is inserted
      or removed.
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      9a9b6aa6
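      A hedged driver-side sketch; the surrounding helper name is made up, only the
      input core calls and the new switch code are from the tree.

        /* Sketch: report the pen-detect switch state from a (hypothetical)
         * GPIO interrupt handler. */
        #include <linux/input.h>

        static void pen_report_state(struct input_dev *input, bool inserted)
        {
                input_report_switch(input, SW_PEN_INSERTED, inserted);
                input_sync(input);
        }

        /* During probe, the capability would be declared with:
         *     input_set_capability(input, EV_SW, SW_PEN_INSERTED);
         */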
  15. 28 Jun 2016, 1 commit
  16. 27 Jun 2016, 1 commit
  17. 24 Jun 2016, 2 commits
  18. 23 Jun 2016, 1 commit
    • W
      openvswitch: Add packet len info to upcall. · b95e5928
      Committed by William Tu
      Commit f2a4d086 ("openvswitch: Add packet truncation support.")
      introduced packet truncation before sending packets to the userspace
      upcall receiver. This patch passes up skb->len before truncation so
      that the upcall receiver knows the original packet size. Potentially
      this will be used by sFlow, where OVS translates an sFlow config of
      header=N into a sample action that truncates the packet to N bytes
      in the kernel datapath. Thus, only N bytes instead of the full packet
      size are copied from kernel to userspace, saving kernel-to-userspace
      bandwidth.
      Signed-off-by: William Tu <u9012063@gmail.com>
      Cc: Pravin Shelar <pshelar@nicira.com>
      Acked-by: Pravin B Shelar <pshelar@ovn.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b95e5928
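      A hedged userspace sketch of consuming the new information; the attribute name
      (shown here as OVS_PACKET_ATTR_LEN) and the use of libmnl are my assumptions
      for illustration only.

        /* Sketch: while walking the attributes of an OVS upcall message, pick
         * up the original (pre-truncation) packet length if present. */
        #include <linux/openvswitch.h>
        #include <libmnl/libmnl.h>

        static int upcall_attr_cb(const struct nlattr *attr, void *data)
        {
                __u32 *orig_len = data;

                if (mnl_attr_get_type(attr) == OVS_PACKET_ATTR_LEN)
                        *orig_len = mnl_attr_get_u32(attr);  /* skb->len before truncation */

                return MNL_CB_OK;
        }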
  19. 19 Jun 2016, 3 commits