1. 09 Nov, 2018 1 commit
  2. 08 Nov, 2018 25 commits
  3. 06 Nov, 2018 2 commits
  4. 05 Nov, 2018 1 commit
  5. 04 Nov, 2018 5 commits
    • sctp: define SCTP_SS_DEFAULT for Stream schedulers · 12480e3b
      Xin Long authored
      According to rfc8260#section-4.3.2, SCTP_SS_DEFAULT is required to
      be defined as either SCTP_SS_FCFS or SCTP_SS_RR.
      
      This patch uses SCTP_SS_FCFS as the value of SCTP_SS_DEFAULT; a
      sketch of the resulting enum follows this entry.
      
      Fixes: 5bbbbe32 ("sctp: introduce stream scheduler foundations")
      Reported-by: Jianwen Ji <jiji@redhat.com>
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      12480e3b
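      A hedged sketch of the resulting uapi enum for the commit above
      (the values shown are illustrative of include/uapi/linux/sctp.h,
      not copied from the patch):

          /* Stream scheduler types per rfc8260; SCTP_SS_DEFAULT aliases
           * SCTP_SS_FCFS as described above. */
          enum sctp_sched_type {
              SCTP_SS_FCFS,
              SCTP_SS_DEFAULT = SCTP_SS_FCFS,
              SCTP_SS_PRIO,
              SCTP_SS_RR,
              SCTP_SS_MAX = SCTP_SS_RR,
          };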
    • sctp: fix strchange_flags name for Stream Change Event · fd82d61b
      Xin Long authored
      As defined in rfc6525#section-6.1.3, SCTP_STREAM_CHANGE_DENIED
      and SCTP_STREAM_CHANGE_FAILED should be used instead of
      SCTP_ASSOC_CHANGE_DENIED and SCTP_ASSOC_CHANGE_FAILED.
      
      To preserve compatibility, fix this by adding two macros (sketched
      after this entry).
      
      Fixes: b444153f ("sctp: add support for generating add stream change event notification")
      Reported-by: Jianwen Ji <jiji@redhat.com>
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fd82d61b
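      A minimal sketch of the compatibility macros described in the
      commit above (the numeric values are illustrative; the real ones
      come from the original uapi header):

          /* The rfc6525 names carry the event flags. */
          #define SCTP_STREAM_CHANGE_DENIED   0x0004  /* illustrative value */
          #define SCTP_STREAM_CHANGE_FAILED   0x0008  /* illustrative value */
          /* Old names kept as aliases so existing userspace keeps compiling. */
          #define SCTP_ASSOC_CHANGE_DENIED    SCTP_STREAM_CHANGE_DENIED
          #define SCTP_ASSOC_CHANGE_FAILED    SCTP_STREAM_CHANGE_FAILED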
    • net: bql: add __netdev_tx_sent_queue() · 3e59020a
      Eric Dumazet authored
      When qdisc_run() uses its BQL budget to bulk-dequeue a batch of
      packets, GSO can later transform this list into another list of
      skbs, and each skb is sent to the device's ndo_start_xmit(), one
      at a time, with skb->xmit_more set to one for all but the last
      skb.
      
      The problem is that, very often, the BQL limit is hit in the
      middle of the packet train, forcing dev_hard_start_xmit() to stop
      the bulk send and requeue the end of the list.
      
      BQL's role is to avoid head-of-line blocking, making sure a qdisc
      can deliver high-priority packets before low-priority ones.
      
      But there is no way for requeued packets to be bypassed by fresh
      packets in the qdisc.
      
      Aborting the bulk send increases TX softirqs, and hot cache lines
      (populated by skb_segment()) are wasted.
      
      Note that for TSO packets, we never split a packet in the middle
      because of the BQL limit being hit.
      
      Drivers should be able to update BQL counters without flipping or
      caring about BQL status, as long as the current skb has xmit_more
      set.
      
      Upper layers are ultimately responsible for stopping the next
      packet train when the BQL limit is hit.
      
      A code template in a driver might look like the following (a
      fuller sketch follows this entry):
      
      	send_doorbell = __netdev_tx_sent_queue(tx_queue, nr_bytes, skb->xmit_more);
      
      Note that using __netdev_tx_sent_queue() is not mandatory, since a
      following patch will change dev_hard_start_xmit() to not care
      about BQL status.
      
      But it is highly recommended, so that the full benefits of
      xmit_more can be realized (fewer doorbells sent, and fewer atomic
      operations as well).
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3e59020a
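      Expanding the template above, a hedged sketch of a driver TX path
      that uses the new helper; my_hw_queue_skb() and my_ring_doorbell()
      are hypothetical driver helpers, not part of the patch:

          static netdev_tx_t my_start_xmit(struct sk_buff *skb,
                                           struct net_device *dev)
          {
              struct netdev_queue *txq;
              bool send_doorbell;

              txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

              my_hw_queue_skb(dev, skb); /* post descriptors to the TX ring */

              /* Update the BQL byte count; returns true when the doorbell
               * must be rung, i.e. xmit_more is clear or BQL just stopped
               * the queue. */
              send_doorbell = __netdev_tx_sent_queue(txq, skb->len,
                                                     skb->xmit_more);
              if (send_doorbell)
                  my_ring_doorbell(dev);

              return NETDEV_TX_OK;
          }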
    • mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask · 89c83fb5
      Michal Hocko authored
      THP allocation mode is quite complex, and it depends on the defrag
      mode. Much of this complexity is currently hidden in
      alloc_hugepage_direct_gfpmask. The NUMA special casing (namely
      __GFP_THISNODE), however, is independent and currently placed in
      alloc_pages_vma. This both adds an unnecessary branch to all
      vma-based page allocation requests and makes the code more complex
      than necessary. Not to mention that, e.g., shmem THP used to do
      node reclaiming unconditionally regardless of the defrag mode
      until recently. This was not only unexpected behavior but also
      hardly a good default, and I strongly suspect it was just a side
      effect of code sharing rather than a deliberate decision, which
      suggests that such a layering is wrong.
      
      Get rid of the THP special casing in alloc_pages_vma and move the
      logic to alloc_hugepage_direct_gfpmask. To preserve the current
      logic, __GFP_THISNODE is applied to the resulting gfp mask only
      when direct reclaim is not requested and when there is no explicit
      NUMA binding. A simplified sketch of this policy follows this
      entry.
      
      Please note that there is also a slight difference wrt MPOL_BIND
      now. The previous code would avoid using __GFP_THISNODE if the
      local node was outside of policy_nodemask(). After this patch,
      __GFP_THISNODE is avoided for all MPOL_BIND policies. So if the
      local node is actually allowed by the bind policy's nodemask,
      __GFP_THISNODE would previously have been added, but now it won't
      be. From a behavior POV this is still correct, because the policy
      nodemask is used.
      
      Link: http://lkml.kernel.org/r/20180925120326.24392-3-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
      Cc: Zi Yan <zi.yan@cs.rutgers.edu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      89c83fb5
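      A simplified sketch of the consolidated policy described in the
      commit above; this is illustrative only, not the exact
      alloc_hugepage_direct_gfpmask() code, and the mempolicy check is a
      stand-in for the real NUMA-binding test:

          /* Apply __GFP_THISNODE only when direct reclaim was not
           * requested and there is no explicit NUMA binding. */
          static gfp_t thp_gfpmask_sketch(gfp_t gfp, struct mempolicy *pol)
          {
              bool numa_bound = pol && pol->mode != MPOL_DEFAULT;

              if (!(gfp & __GFP_DIRECT_RECLAIM) && !numa_bound)
                  gfp |= __GFP_THISNODE;
              return gfp;
          }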
    • include/linux/notifier.h: SRCU: fix ctags · 94e297c5
      Sam Protsenko authored
      ctags indexing ("make tags" command) throws this warning:
      
          ctags: Warning: include/linux/notifier.h:125:
          null expansion of name pattern "\1"
      
      This is the result of DEFINE_PER_CPU() macro expansion. Fix that
      by getting rid of the line break (illustrated after this entry).
      
      A similar fix was already done in commit 25528213 ("tags: Fix
      DEFINE_PER_CPU expansions"), but this one probably wasn't noticed.
      
      Link: http://lkml.kernel.org/r/20181030202808.28027-1-semen.protsenko@linaro.org
      Fixes: 9c80172b ("kernel/SRCU: provide a static initializer")
      Signed-off-by: Sam Protsenko <semen.protsenko@linaro.org>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      94e297c5
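      An illustrative before/after of the pattern being fixed (the
      symbol name example_srcu_data is hypothetical; the actual change
      is inside the SRCU notifier macros in notifier.h):

          /* Before: a line break inside the invocation makes ctags emit
           * "null expansion of name pattern" and miss the symbol. */
          static DEFINE_PER_CPU(struct srcu_data,
                                example_srcu_data);

          /* After: a single-line invocation indexes cleanly. */
          static DEFINE_PER_CPU(struct srcu_data, example_srcu_data);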
  6. 03 Nov, 2018 4 commits
  7. 02 Nov, 2018 2 commits
    • blkcg: revert blkcg cleanups series · b5f2954d
      Dennis Zhou authored
      This reverts a series committed earlier, due to the null pointer
      exception bug report in [1]. It seems there are edge-case
      interactions that I did not consider, and it will take some time
      to understand what causes the adverse interactions.
      
      The original series can be found in [2] with a follow up series in [3].
      
      [1] https://www.spinics.net/lists/cgroups/msg20719.html
      [2] https://lore.kernel.org/lkml/20180911184137.35897-1-dennisszhou@gmail.com/
      [3] https://lore.kernel.org/lkml/20181020185612.51587-1-dennis@kernel.org/
      
      This reverts the following commits:
      d459d853, b2c3fa54, 101246ec, b3b9f24f, e2b09899,
      f0fcb3ec, c839e7a0, bdc24917, 74b7c02a, 5bf9a1f3,
      a7b39b4e, 07b05bcc, 49f4c2dc, 27e6fa99
      Signed-off-by: Dennis Zhou <dennis@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      b5f2954d
    • SUNRPC: Use atomic(64)_t for seq_send(64) · c3be6577
      Paul Burton authored
      The seq_send & seq_send64 fields in struct krb5_ctx are used as
      atomically incrementing counters. This is implemented using
      cmpxchg() & cmpxchg64(), amounting to custom versions of
      atomic_fetch_inc() & atomic64_fetch_inc().
      
      Besides the duplication, using cmpxchg64() has another major
      drawback: some 32-bit architectures don't provide it. As such,
      commit 571ed1fd ("SUNRPC: Replace krb5_seq_lock with a lockless
      scheme") resulted in build failures for some architectures.
      
      Change seq_send to be an atomic_t and seq_send64 to be an
      atomic64_t, then use atomic(64)_* functions to manipulate the
      values (a before/after sketch follows this entry). The atomic64_t
      type & associated functions are provided even on architectures
      which lack real 64-bit atomic memory access, via
      CONFIG_GENERIC_ATOMIC64, which uses spinlocks to serialize access.
      This fixes the build failures for architectures lacking
      cmpxchg64().
      
      A potential alternative that was raised would be to provide cmpxchg64()
      on the 32 bit architectures that currently lack it, using spinlocks.
      However this would provide a version of cmpxchg64() with semantics a
      little different to the implementations on architectures with real 64
      bit atomics - the spinlock-based implementation would only work if all
      access to the memory used with cmpxchg64() is *always* performed using
      cmpxchg64(). That is not currently a requirement for users of
      cmpxchg64(), and making it one seems questionable. As such avoiding
      cmpxchg64() outside of architecture-specific code seems best,
      particularly in cases where atomic64_t seems like a better fit anyway.
      
      The CONFIG_GENERIC_ATOMIC64 implementation of the atomic64_*
      functions will use spinlocks and so faces the same issue, but with
      the key difference that the memory backing an atomic64_t ought to
      always be accessed via the atomic64_* functions anyway, making the
      issue moot.
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Fixes: 571ed1fd ("SUNRPC: Replace krb5_seq_lock with a lockless scheme")
      Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
      Cc: Anna Schumaker <anna.schumaker@netapp.com>
      Cc: J. Bruce Fields <bfields@fieldses.org>
      Cc: Jeff Layton <jlayton@kernel.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: linux-nfs@vger.kernel.org
      Cc: netdev@vger.kernel.org
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
      c3be6577
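      A minimal sketch contrasting the two approaches from the commit
      above; the helper names are illustrative, not the actual gss_krb5
      code:

          /* Open-coded fetch-and-increment via cmpxchg(), the pattern
           * being removed; its cmpxchg64() twin is unavailable on some
           * 32-bit architectures. */
          static u32 old_fetch_inc(u32 *seq)
          {
              u32 old;

              do {
                  old = *seq;
              } while (cmpxchg(seq, old, old + 1) != old);
              return old;
          }

          /* With atomic_t the same operation is a single call, and
           * CONFIG_GENERIC_ATOMIC64 covers the atomic64_t variant on
           * architectures without native 64-bit atomics. */
          static int new_fetch_inc(atomic_t *seq)
          {
              return atomic_fetch_inc(seq);
          }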