1. 14 Feb 2018: 3 commits
  2. 13 Feb 2018: 29 commits
  3. 12 Feb 2018: 1 commit
    • vfs: do bulk POLL* -> EPOLL* replacement · a9a08845
      Authored by Linus Torvalds
      This is the mindless scripted replacement of kernel use of POLL*
      variables as described by Al, done by this script:
      
          for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do
              L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'`
              for f in $L; do sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f; done
          done
      
      with de-mangling cleanups yet to come.
      
      NOTE! On almost all architectures, the EPOLL* constants have the same
      values as the POLL* constants do.  But the keyword here is "almost".
      For various bad reasons they aren't the same, and epoll() doesn't
      actually work quite correctly in some cases due to this on Sparc et al.
      
      The next patch from Al will sort out the final differences, and we
      should be all done.
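      As a purely illustrative example (a made-up driver, not part of this
      patch), the script turns a poll method like the following from returning
      POLL* bits into returning the EPOLL* equivalents, while leaving string
      literals and the poll.h definitions alone:
      
          static __poll_t foo_poll(struct file *file, poll_table *wait)
          {
                  __poll_t mask = 0;
      
                  /* foo_wq and the foo_*() helpers are hypothetical names */
                  poll_wait(file, &foo_wq, wait);
                  if (foo_data_ready())
                          mask |= EPOLLIN | EPOLLRDNORM;  /* was POLLIN | POLLRDNORM */
                  if (foo_error_pending())
                          mask |= EPOLLERR;               /* was POLLERR */
                  return mask;
          }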
      Scripted-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a9a08845
  4. 10 Feb 2018: 1 commit
    • sctp: verify size of a new chunk in _sctp_make_chunk() · 07f2c7ab
      Authored by Alexey Kodanev
      When SCTP makes an INIT or INIT_ACK packet, the total chunk length
      can exceed SCTP_MAX_CHUNK_LEN, which leads to a kernel panic when
      transmitting these packets, e.g. the crash on sending INIT_ACK:
      
      [  597.804948] skbuff: skb_over_panic: text:00000000ffae06e4 len:120168
                     put:120156 head:000000007aa47635 data:00000000d991c2de
                     tail:0x1d640 end:0xfec0 dev:<NULL>
      ...
      [  597.976970] ------------[ cut here ]------------
      [  598.033408] kernel BUG at net/core/skbuff.c:104!
      [  600.314841] Call Trace:
      [  600.345829]  <IRQ>
      [  600.371639]  ? sctp_packet_transmit+0x2095/0x26d0 [sctp]
      [  600.436934]  skb_put+0x16c/0x200
      [  600.477295]  sctp_packet_transmit+0x2095/0x26d0 [sctp]
      [  600.540630]  ? sctp_packet_config+0x890/0x890 [sctp]
      [  600.601781]  ? __sctp_packet_append_chunk+0x3b4/0xd00 [sctp]
      [  600.671356]  ? sctp_cmp_addr_exact+0x3f/0x90 [sctp]
      [  600.731482]  sctp_outq_flush+0x663/0x30d0 [sctp]
      [  600.788565]  ? sctp_make_init+0xbf0/0xbf0 [sctp]
      [  600.845555]  ? sctp_check_transmitted+0x18f0/0x18f0 [sctp]
      [  600.912945]  ? sctp_outq_tail+0x631/0x9d0 [sctp]
      [  600.969936]  sctp_cmd_interpreter.isra.22+0x3be1/0x5cb0 [sctp]
      [  601.041593]  ? sctp_sf_do_5_1B_init+0x85f/0xc30 [sctp]
      [  601.104837]  ? sctp_generate_t1_cookie_event+0x20/0x20 [sctp]
      [  601.175436]  ? sctp_eat_data+0x1710/0x1710 [sctp]
      [  601.233575]  sctp_do_sm+0x182/0x560 [sctp]
      [  601.284328]  ? sctp_has_association+0x70/0x70 [sctp]
      [  601.345586]  ? sctp_rcv+0xef4/0x32f0 [sctp]
      [  601.397478]  ? sctp6_rcv+0xa/0x20 [sctp]
      ...
      
      Here the chunk size for the INIT_ACK packet becomes too big, mostly
      because of the state cookie (the INIT packet is large, with many
      address parameters), plus additional server parameters.
      
      Later this chunk causes the panic in skb_put_data():
      
        sctp_packet_transmit()
            sctp_packet_pack()
                skb_put_data(nskb, chunk->skb->data, chunk->skb->len);
      
      'nskb' (head skb) was previously allocated with packet->size
      from u16 'chunk->chunk_hdr->length'.
      
      As suggested by Marcelo, we should check the chunk's length in
      _sctp_make_chunk() before trying to allocate an skb for it and
      discard the chunk if its size is bigger than SCTP_MAX_CHUNK_LEN.
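      A minimal sketch of that check, placed at the top of _sctp_make_chunk()
      before any skb is allocated (the padding helper and the error label are
      assumptions here, not necessarily the literal patch):
      
          int chunklen = SCTP_PAD4(sizeof(struct sctp_chunkhdr) + paylen);
      
          if (chunklen > SCTP_MAX_CHUNK_LEN)
                  goto nodata;    /* drop the oversized chunk, no skb allocated */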
      Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
      Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      07f2c7ab
  5. 09 Feb 2018: 6 commits
    • SUNRPC: Don't call __UDPX_INC_STATS() from a preemptible context · 0afa6b44
      Authored by Trond Myklebust
      Calling __UDPX_INC_STATS() from a preemptible context leads to a
      warning of the form:
      
       BUG: using __this_cpu_add() in preemptible [00000000] code: kworker/u5:0/31
       caller is xs_udp_data_receive_workfn+0x194/0x270
       CPU: 1 PID: 31 Comm: kworker/u5:0 Not tainted 4.15.0-rc8-00076-g90ea9f1b #2
       Workqueue: xprtiod xs_udp_data_receive_workfn
       Call Trace:
        dump_stack+0x85/0xc1
        check_preemption_disabled+0xce/0xe0
        xs_udp_data_receive_workfn+0x194/0x270
        process_one_work+0x318/0x620
        worker_thread+0x20a/0x390
        ? process_one_work+0x620/0x620
        kthread+0x120/0x130
        ? __kthread_bind_mask+0x60/0x60
        ret_from_fork+0x24/0x30
      
      Since we're taking a spinlock in those functions anyway, let's fix the
      issue by moving the call so that it occurs under the spinlock.
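      Schematically (a sketch only; the lock and counter names are illustrative
      rather than quoted from the patch):
      
          /* Before: per-CPU add in preemptible context, triggers the warning */
          __UDPX_INC_STATS(sk, UDP_MIB_INDATAGRAMS);
          spin_lock(&xprt->recv_lock);
          xprt_complete_rqst(task, copied);
          spin_unlock(&xprt->recv_lock);
      
          /* After: bump the counter while the spinlock is held, so
           * preemption is disabled around the __this_cpu_add() */
          spin_lock(&xprt->recv_lock);
          xprt_complete_rqst(task, copied);
          __UDPX_INC_STATS(sk, UDP_MIB_INDATAGRAMS);
          spin_unlock(&xprt->recv_lock);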
      Reported-by: kernel test robot <fengguang.wu@intel.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      0afa6b44
    • fix parallelism for rpc tasks · f515f86b
      Authored by Olga Kornievskaia
      Hi folks,
      
      On a multi-core machine, is it expected that we can have parallel RPCs
      handled by each of the per-core workqueues?
      
      In testing a read workload, we observe via the "top" command that a
      single "kworker" thread is servicing the requests (no parallelism).
      It's more prominent while doing these operations over a krb5p mount.
      
      What Bruce suggested is to try this patch, and in my testing I then
      see the read workload spread among all the kworker threads.
      Signed-off-by: NOlga Kornievskaia <kolga@netapp.com>
      Signed-off-by: NTrond Myklebust <trond.myklebust@primarydata.com>
      f515f86b
    • tipc: fix skb truesize/datasize ratio control · 55b3280d
      Authored by Hoang Le
      In commit d618d09a ("tipc: enforce valid ratio between skb truesize
      and contents") we introduced a test for ensuring that the condition
      truesize/datasize <= 4 is true for a received buffer. Unfortunately this
      test has two problems.
      
      - Because of the integer arithmetic the test
        if (skb->truesize / buf_roundup_len(skb) > 4) will miss all
        ratios [4 < ratio < 5], which was not the intention.
      - The buffer returned by skb_copy() inherits skb->truesize of the
        original buffer, which doesn't help the situation at all.
      
      In this commit, we change the ratio condition and replace skb_copy()
      with a call to skb_copy_expand() to finally get this right.
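      As a concrete example of the arithmetic problem: truesize 9000 with
      datasize 2000 is a real ratio of 4.5, but 9000 / 2000 evaluates to 4,
      so the old (> 4) test never fires. A sketch of one way to test the
      ratio without integer truncation and rebuild the buffer with an honest
      truesize (not necessarily the literal patch; headroom and error
      handling are context-dependent):
      
          if (unlikely(skb->truesize > buf_roundup_len(skb) * 4)) {
                  struct sk_buff *nskb;
      
                  /* skb_copy_expand() allocates a fresh buffer, so its
                   * truesize reflects the actual allocation instead of
                   * being inherited from the original skb. */
                  nskb = skb_copy_expand(skb, 0, 0, GFP_ATOMIC);
                  if (!nskb)
                          return false;
                  kfree_skb(skb);
                  skb = nskb;
          }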
      Acked-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      55b3280d
    • net/sched: cls_u32: fix cls_u32 on filter replace · eb53f7af
      Authored by Ivan Vecera
      The following sequence is currently broken:
      
       # tc qdisc add dev foo ingress
       # tc filter replace dev foo protocol all ingress \
         u32 match u8 0 0 action mirred egress mirror dev bar1
       # tc filter replace dev foo protocol all ingress \
         handle 800::800 pref 49152 \
         u32 match u8 0 0 action mirred egress mirror dev bar2
       Error: cls_u32: Key node flags do not match passed flags.
       We have an error talking to the kernel, -1
      
      The error comes from u32_change() when comparing the new and
      existing flags. The existing flags always contain one of the
      TCA_CLS_FLAGS_{,NOT}_IN_HW flags, depending on offloading state.
      These flags cannot be passed from userspace, so the condition
      (n->flags != flags) in u32_change() always fails.
      
      Fix the condition so the flags TCA_CLS_FLAGS_NOT_IN_HW and
      TCA_CLS_FLAGS_IN_HW are not taken into account.
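      A sketch of such a comparison in u32_change(), masking out the two
      kernel-maintained offload-status bits before checking the flags passed
      from userspace (the extack message is illustrative):
      
          if ((n->flags ^ flags) &
              ~(TCA_CLS_FLAGS_IN_HW | TCA_CLS_FLAGS_NOT_IN_HW)) {
                  NL_SET_ERR_MSG_MOD(extack,
                                     "Key node flags do not match passed flags");
                  return -EINVAL;
          }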
      
      Fixes: 24d3dc6d ("net/sched: cls_u32: Reflect HW offload status")
      Signed-off-by: Ivan Vecera <ivecera@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eb53f7af
    • mpls, nospec: Sanitize array index in mpls_label_ok() · 3968523f
      Authored by Dan Williams
      mpls_label_ok() validates that the 'platform_label' array index from a
      userspace netlink message payload is valid. Under speculation the
      mpls_label_ok() result may not resolve in the CPU pipeline until after
      the index is used to access an array element. Sanitize the index to zero
      to prevent userspace-controlled arbitrary out-of-bounds speculation, a
      precursor for a speculative execution side channel vulnerability.
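      A sketch of the sanitization using the kernel's array_index_nospec()
      helper from <linux/nospec.h> (the function signature shown here is
      simplified and illustrative):
      
          static bool mpls_label_ok(struct net *net, unsigned int *index)
          {
                  bool is_ok = true;
      
                  /* Reserved labels may not be used. */
                  if (*index < MPLS_LABEL_FIRST_UNRESERVED)
                          is_ok = false;
      
                  /* The full 20 bit label range may not be supported. */
                  if (is_ok && *index >= net->mpls.platform_labels)
                          is_ok = false;
      
                  /* Clamp the index under speculation so a mispredicted
                   * bounds check cannot leak out-of-bounds array contents. */
                  *index = array_index_nospec(*index, net->mpls.platform_labels);
                  return is_ok;
          }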
      
      Cc: <stable@vger.kernel.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3968523f
    • rds: tcp: use rds_destroy_pending() to synchronize netns/module teardown and rds connection/workq management · ebeeb1ad
      Authored by Sowmini Varadhan
      
      An rds_connection can get added during netns deletion between lines 528
      and 529 of
      
        506 static void rds_tcp_kill_sock(struct net *net)
        :
        /* code to pull out all the rds_connections that should be destroyed */
        :
        528         spin_unlock_irq(&rds_tcp_conn_lock);
        529         list_for_each_entry_safe(tc, _tc, &tmp_list, t_tcp_node)
        530                 rds_conn_destroy(tc->t_cpath->cp_conn);
      
      Such an rds_connection would miss the rds_conn_destroy()
      loop (which cancels all pending work) and, if it was scheduled
      after netns deletion, could trigger a use-after-free.
      
      A similar race window exists for the module unload path
      in rds_tcp_exit -> rds_tcp_destroy_conns.
      
      Concurrency with netns deletion (rds_tcp_kill_sock()) must be handled
      by checking check_net() before enqueuing new work or adding new
      connections.
      
      Concurrency with module-unload is handled by maintaining a module
      specific flag that is set at the start of the module exit function,
      and must be checked before enqueuing new work or adding new connections.
      
      This commit refactors existing RDS_DESTROY_PENDING checks added by
      commit 3db6e0d1 ("rds: use RCU to synchronize work-enqueue with
      connection teardown") and consolidates all the concurrency checks
      listed above into the function rds_destroy_pending().
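      A sketch of the consolidated helper (the per-transport unload check is
      modeled here as a t_unloading operation, which is an assumption about
      the naming):
      
          /* True when the netns is being dismantled or the transport module
           * is unloading, i.e. no new work or connections may be queued. */
          static inline bool rds_destroy_pending(struct rds_connection *conn)
          {
                  return !check_net(rds_conn_net(conn)) ||
                         (conn->c_trans->t_unloading &&
                          conn->c_trans->t_unloading(conn));
          }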
      Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ebeeb1ad