1. 06 Jan 2022, 1 commit
  2. 11 Dec 2021, 1 commit
  3. 21 Nov 2021, 1 commit
  4. 07 Nov 2021, 5 commits
    • mm: remove HARDENED_USERCOPY_FALLBACK · 53944f17
      Committed by Stephen Kitt
      This has served its purpose and is no longer used.  All usercopy
      violations appear to have been handled by now, any remaining instances
      (or new bugs) will cause copies to be rejected.
      
      This isn't a direct revert of commit 2d891fbc ("usercopy: Allow
      strict enforcement of whitelists"); since usercopy_fallback is
      effectively 0, the fallback handling is removed too.
      
      This also removes the usercopy_fallback module parameter on slab_common.
      
      Link: https://github.com/KSPP/linux/issues/153
      Link: https://lkml.kernel.org/r/20210921061149.1091163-1-steve@sk2.org
      Signed-off-by: Stephen Kitt <steve@sk2.org>
      Suggested-by: Kees Cook <keescook@chromium.org>
      Acked-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Joel Stanley <joel@jms.id.au>	[defconfig change]
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: James Morris <jmorris@namei.org>
      Cc: "Serge E. Hallyn" <serge@hallyn.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      53944f17
    • mm, slub: use prefetchw instead of prefetch · 04b4b006
      Committed by Hyeonggon Yoo
      Commit 0ad9500e ("slub: prefetch next freelist pointer in
      slab_alloc()") introduced prefetch_freepointer() because when other
      cpu(s) free objects into a page that the current cpu owns, the freelist
      link is hot on the cpu(s) which freed the objects and possibly very cold
      on the current cpu.

      But if the freelist link chain is hot on the cpu(s) which freed the
      objects, it's better to invalidate that chain, because those cpus are not
      going to access it again within a short time.
      
      So use prefetchw instead of prefetch.  On supported architectures like
      x86 and arm, it invalidates other copied instances of a cache line when
      prefetching it.
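
      The change itself is essentially a one-liner in prefetch_freepointer();
      roughly (a sketch based on the description above, not the exact diff):

          static void prefetch_freepointer(const struct kmem_cache *s, void *object)
          {
                  /*
                   * Prefetch for write: on architectures that support it, this
                   * acquires the cache line exclusively, invalidating copies
                   * held by the cpus that freed the objects.
                   */
                  prefetchw(object + s->offset);
          }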
      
      Before:
      
      Time: 91.677
      
       Performance counter stats for 'hackbench -g 100 -l 10000':
              1462938.07 msec cpu-clock                 #   15.908 CPUs utilized
                18072550      context-switches          #   12.354 K/sec
                 1018814      cpu-migrations            #  696.416 /sec
                  104558      page-faults               #   71.471 /sec
           1580035699271      cycles                    #    1.080 GHz                      (54.51%)
           2003670016013      instructions              #    1.27  insn per cycle           (54.31%)
              5702204863      branch-misses                                                 (54.28%)
            643368500985      cache-references          #  439.778 M/sec                    (54.26%)
             18475582235      cache-misses              #    2.872 % of all cache refs      (54.28%)
            642206796636      L1-dcache-loads           #  438.984 M/sec                    (46.87%)
             18215813147      L1-dcache-load-misses     #    2.84% of all L1-dcache accesses  (46.83%)
            653842996501      dTLB-loads                #  446.938 M/sec                    (46.63%)
              3227179675      dTLB-load-misses          #    0.49% of all dTLB cache accesses  (46.85%)
            537531951350      iTLB-loads                #  367.433 M/sec                    (54.33%)
               114750630      iTLB-load-misses          #    0.02% of all iTLB cache accesses  (54.37%)
            630135543177      L1-icache-loads           #  430.733 M/sec                    (46.80%)
             22923237620      L1-icache-load-misses     #    3.64% of all L1-icache accesses  (46.76%)
      
            91.964452802 seconds time elapsed
      
            43.416742000 seconds user
          1422.441123000 seconds sys
      
      After:
      
      Time: 90.220
      
       Performance counter stats for 'hackbench -g 100 -l 10000':
              1437418.48 msec cpu-clock                 #   15.880 CPUs utilized
                17694068      context-switches          #   12.310 K/sec
                  958257      cpu-migrations            #  666.651 /sec
                  100604      page-faults               #   69.989 /sec
           1583259429428      cycles                    #    1.101 GHz                      (54.57%)
           2004002484935      instructions              #    1.27  insn per cycle           (54.37%)
              5594202389      branch-misses                                                 (54.36%)
            643113574524      cache-references          #  447.409 M/sec                    (54.39%)
             18233791870      cache-misses              #    2.835 % of all cache refs      (54.37%)
            640205852062      L1-dcache-loads           #  445.386 M/sec                    (46.75%)
             17968160377      L1-dcache-load-misses     #    2.81% of all L1-dcache accesses  (46.79%)
            651747432274      dTLB-loads                #  453.415 M/sec                    (46.59%)
              3127124271      dTLB-load-misses          #    0.48% of all dTLB cache accesses  (46.75%)
            535395273064      iTLB-loads                #  372.470 M/sec                    (54.38%)
               113500056      iTLB-load-misses          #    0.02% of all iTLB cache accesses  (54.35%)
            628871845924      L1-icache-loads           #  437.501 M/sec                    (46.80%)
             22585641203      L1-icache-load-misses     #    3.59% of all L1-icache accesses  (46.79%)
      
            90.514819303 seconds time elapsed
      
            43.877656000 seconds user
          1397.176001000 seconds sys
      
      Link: https://lkml.org/lkml/2021/10/8/598
      Link: https://lkml.kernel.org/r/20211011144331.70084-1-42.hyeyoo@gmail.com
      Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      04b4b006
    • mm/slub: increase default cpu partial list sizes · 23e98ad1
      Committed by Vlastimil Babka
      The defaults are determined based on object size and can go up to 30 for
      objects smaller than 256 bytes.  Before the previous patch changed the
      accounting, this could have made the cpu partial list contain up to 30
      pages.  After that patch, it's only up to 2 pages with the default
      allocation order.
      
      Very short lists limit the usefulness of the whole concept of cpu
      partial lists, so this patch aims at a more reasonable default under the
      new accounting.  The defaults are quadrupled, except for object size >=
      PAGE_SIZE where it's doubled.  This makes the lists grow up to 10 pages
      in practice.
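
      For reference, the size-based heuristic in set_cpu_partial() ends up
      looking roughly like the sketch below (exact constants recalled from the
      patch; treat them as indicative):

          /* sketch: target number of free objects on the percpu partial list */
          if (!kmem_cache_has_cpu_partial(s))
                  nr_objects = 0;
          else if (s->size >= PAGE_SIZE)
                  nr_objects = 6;         /* was 2 */
          else if (s->size >= 1024)
                  nr_objects = 24;        /* was 6 */
          else if (s->size >= 256)
                  nr_objects = 52;        /* was 13 */
          else
                  nr_objects = 120;       /* was 30 */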
      
      A quick test of booting a kernel under virtme with 4GB RAM and 8 vcpus
      shows the following slab memory usage after boot:
      
      Before previous patch (using page->pobjects):
        Slab:              36732 kB
        SReclaimable:      14836 kB
        SUnreclaim:        21896 kB
      
      After previous patch (using page->pages):
        Slab:              34720 kB
        SReclaimable:      13716 kB
        SUnreclaim:        21004 kB
      
      After this patch (using page->pages, higher defaults):
        Slab:              35252 kB
        SReclaimable:      13944 kB
        SUnreclaim:        21308 kB
      
      In the same setup, I also ran 5 times:
      
          hackbench -l 16000 -g 16
      
      Differences in time were in the noise, we can compare slub stats as
      given by slabinfo -r skbuff_head_cache (the other cache heavily used by
      hackbench, kmalloc-cg-512 looks similar).  Negligible stats left out for
      brevity.
      
      Before previous patch (using page->pobjects):
      
        Objects: 1408, Memory Total:  401408 Used :  304128
      
        Slab Perf Counter       Alloc     Free %Al %Fr
        --------------------------------------------------
        Fastpath             469952498  5946606  91   1
        Slowpath             42053573 506059465   8  98
        Page Alloc              41093    41044   0   0
        Add partial                18 21229327   0   4
        Remove partial       20039522    36051   3   0
        Cpu partial list      4686640 24767229   0   4
        RemoteObj/SlabFrozen       16 124027841   0  24
        Total                512006071 512006071
        Flushes       18
      
        Slab Deactivation             Occurrences %
        -------------------------------------------------
        Slab empty                       4993    0%
        Deactivation bypass           24767229   99%
        Refilled from foreign frees   21972674   88%
      
      After previous patch (using page->pages):
      
        Objects: 480, Memory Total:  131072 Used :  103680
      
        Slab Perf Counter       Alloc     Free %Al %Fr
        --------------------------------------------------
        Fastpath             473016294  5405653  92   1
        Slowpath             38989777 506600418   7  98
        Page Alloc              32717    32701   0   0
        Add partial                 3 22749164   0   4
        Remove partial       11371127    32474   2   0
        Cpu partial list     11686226 23090059   2   4
        RemoteObj/SlabFrozen        2 67541803   0  13
        Total                512006071 512006071
        Flushes        3
      
        Slab Deactivation             Occurrences %
        -------------------------------------------------
        Slab empty                        227    0%
        Deactivation bypass           23090059   99%
        Refilled from foreign frees   27585695  119%
      
      After this patch (using page->pages, higher defaults):
      
        Objects: 896, Memory Total:  229376 Used :  193536
      
        Slab Perf Counter       Alloc     Free %Al %Fr
        --------------------------------------------------
        Fastpath             473799295  4980278  92   0
        Slowpath             38206776 507025793   7  99
        Page Alloc              32295    32267   0   0
        Add partial                11 23291143   0   4
        Remove partial        5815764    31278   1   0
        Cpu partial list     18119280 23967320   3   4
        RemoteObj/SlabFrozen       10 76974794   0  15
        Total                512006071 512006071
        Flushes       11
      
        Slab Deactivation             Occurrences %
        -------------------------------------------------
        Slab empty                        989    0%
        Deactivation bypass           23967320   99%
        Refilled from foreign frees   32358473  135%
      
      As expected, memory usage dropped significantly with change of
      accounting, increasing the defaults increased it, but not as much.  The
      number of page allocation/frees dropped significantly with the new
      accounting, but didn't increase with the higher defaults.
      Interestingly, the number of fastpath allocations increased, as well as
      allocations from the cpu partial list, even though it's shorter.
      
      Link: https://lkml.kernel.org/r/20211012134651.11258-2-vbabka@suse.cz
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Roman Gushchin <guro@fb.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      23e98ad1
    • mm, slub: change percpu partial accounting from objects to pages · b47291ef
      Committed by Vlastimil Babka
      With CONFIG_SLUB_CPU_PARTIAL enabled, SLUB keeps a percpu list of
      partial slabs that can be promoted to cpu slab when the previous one is
      depleted, without accessing the shared partial list.  A slab can be
      added to this list by 1) refill of an empty list from get_partial_node()
      - once we really have to access the shared partial list, we acquire
      multiple slabs to amortize the cost of locking, and 2) first free to a
      previously full slab - instead of putting the slab on a shared partial
      list, we can more cheaply freeze it and put it on the per-cpu list.
      
      To control how large a percpu partial list can grow for a kmem cache,
      set_cpu_partial() calculates a target number of free objects on each
      cpu's percpu partial list, and this can be also set by the sysfs file
      cpu_partial.
      
      However, the tracking of the actual number of objects is imprecise, in
      order to limit the overhead of cpu X freeing an object to a slab on the
      percpu partial list of cpu Y.  Basically, the percpu partial slabs form a
      single linked list, and when we add a new slab to the list with current
      head "oldpage", we set in the struct page of the slab we're adding:
      
          page->pages = oldpage->pages + 1; // this is precise
          page->pobjects = oldpage->pobjects + (page->objects - page->inuse);
          page->next = oldpage;
      
      Thus the real number of free objects in the slab (objects - inuse) is
      only determined at the moment of adding the slab to the percpu partial
      list, and further freeing doesn't update the pobjects counter nor
      propagate it to the current list head.  As Jann reports [1], this can
      easily lead to large inaccuracies, where the target number of objects
      (up to 30 by default) can translate to the same number of (empty) slab
      pages on the list.  In case 2) above, we put a slab with 1 free object
      on the list, thus only increase page->pobjects by 1, even if there are
      subsequent frees on the same slab.  Jann has noticed this in practice
      and so did we [2] when investigating significant increase of kmemcg
      usage after switching from SLAB to SLUB.
      
      While this is no longer a problem in kmemcg context thanks to the
      accounting rewrite in 5.9, the memory waste is still not ideal and it's
      questionable whether it makes sense to perform free object count based
      control when the object counts can so easily become inaccurate.  So
      this patch converts the accounting to be based on number of pages only
      (which is precise) and removes the page->pobjects field completely.
      This is also ultimately simpler.
      
      To retain the existing set_cpu_partial() heuristic, first calculate the
      target number of objects as previously, but then convert it to a target
      number of pages by assuming the pages will be half-filled on average.
      This assumption might obviously also be inaccurate in practice, but it
      cannot degenerate into the actual number of pages being equal to the
      target number of objects.
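
      A minimal sketch of that conversion (helper and field names as recalled
      from the patch; the half-filled assumption shows up as the factor of 2):

          static inline void
          slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
          {
                  unsigned int nr_pages;

                  /* keep the object count for the sysfs cpu_partial file */
                  s->cpu_partial = nr_objects;

                  /*
                   * Limit the list by pages, assuming the pages are on average
                   * half-filled with free objects.
                   */
                  nr_pages = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo));
                  s->cpu_partial_pages = nr_pages;
          }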
      
      We could also skip the intermediate step with target number of objects
      and rewrite the heuristic in terms of pages.  However we still have the
      sysfs file cpu_partial which uses number of objects and could break
      existing users if it suddenly becomes number of pages, so this patch
      doesn't do that.
      
      In practice, after this patch the heuristics limit the size of the percpu
      partial list to up to 2 pages.  In case of a reported regression (which
      would mean some workload has benefited from the previous imprecise
      object based counting), we can tune the heuristics to get a better
      compromise within the new scheme, while still avoiding the unexpectedly
      long percpu partial lists.
      
      [1] https://lore.kernel.org/linux-mm/CAG48ez2Qx5K1Cab-m8BdSibp6wLTip6ro4=-umR7BLsEgjEYzA@mail.gmail.com/
      [2] https://lore.kernel.org/all/2f0f46e8-2535-410a-1859-e9cfa4e57c18@suse.cz/
      
      ==========
      Evaluation
      ==========
      
      Mel was kind enough to run v1 through the mmtests machinery for netperf
      (localhost) and hackbench; the most significant results are below.
      There are some apparent regressions, especially with hackbench, which
      I think ultimately boil down to having shorter percpu partial lists on
      average, with some benchmarks benefiting from longer ones.  Monitoring
      slab usage also indicated less memory usage by slab.  Based on that, the
      following patch will bump the defaults to allow longer percpu partial
      lists than after this patch.
      
      However, the goal is certainly not to limit the percpu partial lists to
      30 pages just because previously a specific alloc/free pattern could make
      the limit of 30 objects translate into a limit of 30 pages - that would
      make little sense.  This is a correctness patch, and if a workload
      benefits from larger lists, the sysfs tuning knobs are still there to
      allow that.
      
      Netperf
      
        2-socket Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz (20 cores, 40 threads per socket), 384GB RAM
        TCP-RR:
          hmean before 127045.79 after 121092.94 (-4.69%, worse)
          stddev before  2634.37 after   1254.08
        UDP-RR:
          hmean before 166985.45 after 160668.94 ( -3.78%, worse)
          stddev before 4059.69 after 1943.63
      
        2-socket Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz (20 cores, 40 threads per socket), 512GB RAM
        TCP-RR:
          hmean before 84173.25 after 76914.72 ( -8.62%, worse)
        UDP-RR:
          hmean before 93571.12 after 96428.69 ( 3.05%, better)
          stddev before 23118.54 after 16828.14
      
        2-socket Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz (12 cores, 24 threads per socket), 64GB RAM
        TCP-RR:
          hmean before 49984.92 after 48922.27 ( -2.13%, worse)
          stddev before 6248.15 after 4740.51
        UDP-RR:
          hmean before 61854.31 after 68761.81 ( 11.17%, better)
          stddev before 4093.54 after 5898.91
      
        other machines - within 2%
      
      Hackbench
      
        (results before and after the patch, negative % means worse)
      
        2-socket AMD EPYC 7713 (64 cores, 128 threads per socket), 256GB RAM
        hackbench-process-sockets
        Amean 	1 	0.5380	0.5583	( -3.78%)
        Amean 	4 	0.7510	0.8150	( -8.52%)
        Amean 	7 	0.7930	0.9533	( -20.22%)
        Amean 	12 	0.7853	1.1313	( -44.06%)
        Amean 	21 	1.1520	1.4993	( -30.15%)
        Amean 	30 	1.6223	1.9237	( -18.57%)
        Amean 	48 	2.6767	2.9903	( -11.72%)
        Amean 	79 	4.0257	5.1150	( -27.06%)
        Amean 	110	5.5193	7.4720	( -35.38%)
        Amean 	141	7.2207	9.9840	( -38.27%)
        Amean 	172	8.4770	12.1963	( -43.88%)
        Amean 	203	9.6473	14.3137	( -48.37%)
        Amean 	234	11.3960	18.7917	( -64.90%)
        Amean 	265	13.9627	22.4607	( -60.86%)
        Amean 	296	14.9163	26.0483	( -74.63%)
      
        hackbench-thread-sockets
        Amean 	1 	0.5597	0.5877	( -5.00%)
        Amean 	4 	0.7913	0.8960	( -13.23%)
        Amean 	7 	0.8190	1.0017	( -22.30%)
        Amean 	12 	0.9560	1.1727	( -22.66%)
        Amean 	21 	1.7587	1.5660	( 10.96%)
        Amean 	30 	2.4477	1.9807	( 19.08%)
        Amean 	48 	3.4573	3.0630	( 11.41%)
        Amean 	79 	4.7903	5.1733	( -8.00%)
        Amean 	110	6.1370	7.4220	( -20.94%)
        Amean 	141	7.5777	9.2617	( -22.22%)
        Amean 	172	9.2280	11.0907	( -20.18%)
        Amean 	203	10.2793	13.3470	( -29.84%)
        Amean 	234	11.2410	17.1070	( -52.18%)
        Amean 	265	12.5970	23.3323	( -85.22%)
        Amean 	296	17.1540	24.2857	( -41.57%)
      
        2-socket Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz (20 cores, 40 threads
        per socket), 384GB RAM
        hackbench-process-sockets
        Amean 	1 	0.5760	0.4793	( 16.78%)
        Amean 	4 	0.9430	0.9707	( -2.93%)
        Amean 	7 	1.5517	1.8843	( -21.44%)
        Amean 	12 	2.4903	2.7267	( -9.49%)
        Amean 	21 	3.9560	4.2877	( -8.38%)
        Amean 	30 	5.4613	5.8343	( -6.83%)
        Amean 	48 	8.5337	9.2937	( -8.91%)
        Amean 	79 	14.0670	15.2630	( -8.50%)
        Amean 	110	19.2253	21.2467	( -10.51%)
        Amean 	141	23.7557	25.8550	( -8.84%)
        Amean 	172	28.4407	29.7603	( -4.64%)
        Amean 	203	33.3407	33.9927	( -1.96%)
        Amean 	234	38.3633	39.1150	( -1.96%)
        Amean 	265	43.4420	43.8470	( -0.93%)
        Amean 	296	48.3680	48.9300	( -1.16%)
      
        hackbench-thread-sockets
        Amean 	1 	0.6080	0.6493	( -6.80%)
        Amean 	4 	1.0000	1.0513	( -5.13%)
        Amean 	7 	1.6607	2.0260	( -22.00%)
        Amean 	12 	2.7637	2.9273	( -5.92%)
        Amean 	21 	5.0613	4.5153	( 10.79%)
        Amean 	30 	6.3340	6.1140	( 3.47%)
        Amean 	48 	9.0567	9.5577	( -5.53%)
        Amean 	79 	14.5657	15.7983	( -8.46%)
        Amean 	110	19.6213	21.6333	( -10.25%)
        Amean 	141	24.1563	26.2697	( -8.75%)
        Amean 	172	28.9687	30.2187	( -4.32%)
        Amean 	203	33.9763	34.6970	( -2.12%)
        Amean 	234	38.8647	39.3207	( -1.17%)
        Amean 	265	44.0813	44.1507	( -0.16%)
        Amean 	296	49.2040	49.4330	( -0.47%)
      
        2-socket Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz (20 cores, 40 threads
        per socket), 512GB RAM
        hackbench-process-sockets
        Amean 	1 	0.5027	0.5017	( 0.20%)
        Amean 	4 	1.1053	1.2033	( -8.87%)
        Amean 	7 	1.8760	2.1820	( -16.31%)
        Amean 	12 	2.9053	3.1810	( -9.49%)
        Amean 	21 	4.6777	4.9920	( -6.72%)
        Amean 	30 	6.5180	6.7827	( -4.06%)
        Amean 	48 	10.0710	10.5227	( -4.48%)
        Amean 	79 	16.4250	17.5053	( -6.58%)
        Amean 	110	22.6203	24.4617	( -8.14%)
        Amean 	141	28.0967	31.0363	( -10.46%)
        Amean 	172	34.4030	36.9233	( -7.33%)
        Amean 	203	40.5933	43.0850	( -6.14%)
        Amean 	234	46.6477	48.7220	( -4.45%)
        Amean 	265	53.0530	53.9597	( -1.71%)
        Amean 	296	59.2760	59.9213	( -1.09%)
      
        hackbench-thread-sockets
        Amean 	1 	0.5363	0.5330	( 0.62%)
        Amean 	4 	1.1647	1.2157	( -4.38%)
        Amean 	7 	1.9237	2.2833	( -18.70%)
        Amean 	12 	2.9943	3.3110	( -10.58%)
        Amean 	21 	4.9987	5.1880	( -3.79%)
        Amean 	30 	6.7583	7.0043	( -3.64%)
        Amean 	48 	10.4547	10.8353	( -3.64%)
        Amean 	79 	16.6707	17.6790	( -6.05%)
        Amean 	110	22.8207	24.4403	( -7.10%)
        Amean 	141	28.7090	31.0533	( -8.17%)
        Amean 	172	34.9387	36.8260	( -5.40%)
        Amean 	203	41.1567	43.0450	( -4.59%)
        Amean 	234	47.3790	48.5307	( -2.43%)
        Amean 	265	53.9543	54.6987	( -1.38%)
        Amean 	296	60.0820	60.2163	( -0.22%)
      
        1-socket Intel(R) Xeon(R) CPU E3-1240 v5 @ 3.50GHz (4 cores, 8 threads),
        32 GB RAM
        hackbench-process-sockets
        Amean 	1 	1.4760	1.5773	( -6.87%)
        Amean 	3 	3.9370	4.0910	( -3.91%)
        Amean 	5 	6.6797	6.9357	( -3.83%)
        Amean 	7 	9.3367	9.7150	( -4.05%)
        Amean 	12	15.7627	16.1400	( -2.39%)
        Amean 	18	23.5360	23.6890	( -0.65%)
        Amean 	24	31.0663	31.3137	( -0.80%)
        Amean 	30	38.7283	39.0037	( -0.71%)
        Amean 	32	41.3417	41.6097	( -0.65%)
      
        hackbench-thread-sockets
        Amean 	1 	1.5250	1.6043	( -5.20%)
        Amean 	3 	4.0897	4.2603	( -4.17%)
        Amean 	5 	6.7760	7.0933	( -4.68%)
        Amean 	7 	9.4817	9.9157	( -4.58%)
        Amean 	12	15.9610	16.3937	( -2.71%)
        Amean 	18	23.9543	24.3417	( -1.62%)
        Amean 	24	31.4400	31.7217	( -0.90%)
        Amean 	30	39.2457	39.5467	( -0.77%)
        Amean 	32	41.8267	42.1230	( -0.71%)
      
        2-socket Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz (12 cores, 24 threads
        per socket), 64GB RAM
        hackbench-process-sockets
        Amean 	1 	1.0347	1.0880	( -5.15%)
        Amean 	4 	1.7267	1.8527	( -7.30%)
        Amean 	7 	2.6707	2.8110	( -5.25%)
        Amean 	12 	4.1617	4.3383	( -4.25%)
        Amean 	21 	7.0070	7.2600	( -3.61%)
        Amean 	30 	9.9187	10.2397	( -3.24%)
        Amean 	48 	15.6710	16.3923	( -4.60%)
        Amean 	79 	24.7743	26.1247	( -5.45%)
        Amean 	110	34.3000	35.9307	( -4.75%)
        Amean 	141	44.2043	44.8010	( -1.35%)
        Amean 	172	54.2430	54.7260	( -0.89%)
        Amean 	192	60.6557	60.9777	( -0.53%)
      
        hackbench-thread-sockets
        Amean 	1 	1.0610	1.1353	( -7.01%)
        Amean 	4 	1.7543	1.9140	( -9.10%)
        Amean 	7 	2.7840	2.9573	( -6.23%)
        Amean 	12 	4.3813	4.4937	( -2.56%)
        Amean 	21 	7.3460	7.5350	( -2.57%)
        Amean 	30 	10.2313	10.5190	( -2.81%)
        Amean 	48 	15.9700	16.5940	( -3.91%)
        Amean 	79 	25.3973	26.6637	( -4.99%)
        Amean 	110	35.1087	36.4797	( -3.91%)
        Amean 	141	45.8220	46.3053	( -1.05%)
        Amean 	172	55.4917	55.7320	( -0.43%)
        Amean 	192	62.7490	62.5410	( 0.33%)
      
      Link: https://lkml.kernel.org/r/20211012134651.11258-1-vbabka@suse.cz
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reported-by: Jann Horn <jannh@google.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b47291ef
    • slub: add back check for free nonslab objects · d0fe47c6
      Committed by Kefeng Wang
      After commit f227f0fa ("slub: fix unreclaimable slab stat for bulk
      free"), the check for freeing a nonslab page was replaced by
      VM_BUG_ON_PAGE, which only triggers with CONFIG_DEBUG_VM enabled; but
      that config may impact performance, so it is meant for debugging only.

      Commit 0937502a ("slub: Add check for kfree() of non slab objects.")
      added this check, which should be available in any config to catch
      invalid frees, since they could indicate real problems, e.g. memory
      corruption, use-after-free and double free.  So replace the
      VM_BUG_ON_PAGE with WARN_ON_ONCE, and print the object address to help
      debug the issue.
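
      In code terms, the check in the helper that frees non-slab compound
      pages becomes something like this (a sketch based on the description):

          /* was: VM_BUG_ON_PAGE(!PageCompound(page), page); */
          if (WARN_ON_ONCE(!PageCompound(page)))
                  pr_warn_once("object pointer: 0x%p\n", object);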
      
      Link: https://lkml.kernel.org/r/20210930070214.61499-1-wangkefeng.wang@huawei.com
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0fe47c6
  5. 27 Oct 2021, 1 commit
  6. 19 Oct 2021, 5 commits
  7. 04 Sep 2021, 26 commits
    • mm, slub: convert kmem_cpu_slab protection to local_lock · bd0e7491
      Committed by Vlastimil Babka
      Embed local_lock into struct kmem_cpu_slab and use the irq-safe versions of
      local_lock instead of plain local_irq_save/restore. On !PREEMPT_RT that's
      equivalent, with better lockdep visibility. On PREEMPT_RT that means better
      preemption.
      
      However, the cost on PREEMPT_RT is the loss of lockless fast paths which only
      work with cpu freelist. Those are designed to detect and recover from being
      preempted by other conflicting operations (both fast or slow path), but the
      slow path operations assume they cannot be preempted by a fast path operation,
      which is guaranteed naturally with disabled irqs. With local locks on
      PREEMPT_RT, the fast paths now also need to take the local lock to avoid races.
      
      In the allocation fastpath slab_alloc_node() we can just defer to the slowpath
      __slab_alloc() which also works with cpu freelist, but under the local lock.
      In the free fastpath do_slab_free() we have to add a new local lock protected
      version of freeing to the cpu freelist, as the existing slowpath only works
      with the page freelist.
      
      Also update the comment about locking scheme in SLUB to reflect changes done
      by this series.
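
      The core of the change is the new lock member in struct kmem_cache_cpu,
      roughly (a sketch; layout and comments as recalled from the patch):

          struct kmem_cache_cpu {
                  void **freelist;        /* Pointer to next available object */
                  unsigned long tid;      /* Globally unique transaction id */
                  struct page *page;      /* The slab from which we are allocating */
          #ifdef CONFIG_SLUB_CPU_PARTIAL
                  struct page *partial;   /* Partially allocated frozen slabs */
          #endif
                  local_lock_t lock;      /* Protects the fields above */
          #ifdef CONFIG_SLUB_STATS
                  unsigned stat[NR_SLUB_STAT_ITEMS];
          #endif
          };

      The slow paths then take local_lock_irqsave(&s->cpu_slab->lock, flags)
      instead of plain local_irq_save(flags).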
      
      [ Mike Galbraith <efault@gmx.de>: use local_lock() without irq in PREEMPT_RT
        scope; debugging of RT crashes resulting in put_cpu_partial() locking changes ]
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      bd0e7491
    • mm, slub: use migrate_disable() on PREEMPT_RT · 25c00c50
      Committed by Vlastimil Babka
      We currently use preempt_disable() (directly or via get_cpu_ptr()) to stabilize
      the pointer to kmem_cache_cpu. On PREEMPT_RT this would be incompatible with
      the list_lock spinlock. We can use migrate_disable() instead, but that
      increases overhead on !PREEMPT_RT as it's an unconditional function call.
      
      In order to get the best available mechanism on both PREEMPT_RT and
      !PREEMPT_RT, introduce private slub_get_cpu_ptr() and slub_put_cpu_ptr()
      wrappers and use them.
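
      A sketch of the wrappers (as recalled from the patch):

          #ifdef CONFIG_PREEMPT_RT
          #define slub_get_cpu_ptr(var)           \
          ({                                      \
                  migrate_disable();              \
                  this_cpu_ptr(var);              \
          })
          #define slub_put_cpu_ptr(var)           \
          do {                                    \
                  (void)(var);                    \
                  migrate_enable();               \
          } while (0)
          #else
          #define slub_get_cpu_ptr(var)   get_cpu_ptr(var)
          #define slub_put_cpu_ptr(var)   put_cpu_ptr(var)
          #endif
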
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      25c00c50
    • mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg · e0a043aa
      Committed by Vlastimil Babka
      Jann Horn reported [1] the following theoretically possible race:
      
        task A: put_cpu_partial() calls preempt_disable()
        task A: oldpage = this_cpu_read(s->cpu_slab->partial)
        interrupt: kfree() reaches unfreeze_partials() and discards the page
        task B (on another CPU): reallocates page as page cache
        task A: reads page->pages and page->pobjects, which are actually
        halves of the pointer page->lru.prev
        task B (on another CPU): frees page
        interrupt: allocates page as SLUB page and places it on the percpu partial list
        task A: this_cpu_cmpxchg() succeeds
      
        which would cause page->pages and page->pobjects to end up containing
        halves of pointers that would then influence when put_cpu_partial()
        happens and show up in root-only sysfs files. Maybe that's acceptable,
        I don't know. But there should probably at least be a comment for now
        to point out that we're reading union fields of a page that might be
        in a completely different state.
      
      Additionally, the this_cpu_cmpxchg() approach in put_cpu_partial() is only safe
      against s->cpu_slab->partial manipulation in ___slab_alloc() if the latter
      disables irqs, otherwise a __slab_free() in an irq handler could call
      put_cpu_partial() in the middle of ___slab_alloc() manipulating ->partial
      and corrupt it. This becomes an issue on RT after a local_lock is introduced
      in later patch. The fix means taking the local_lock also in put_cpu_partial()
      on RT.
      
      After debugging this issue, Mike Galbraith suggested [2] that to avoid
      different locking schemes on RT and !RT, we can just protect put_cpu_partial()
      with disabled irqs (to be converted to local_lock_irqsave() later) everywhere.
      This should be acceptable as it's not a fast path, and moving the actual
      partial unfreezing outside of the irq disabled section makes it short, and with
      the retry loop gone the code can be also simplified. In addition, the race
      reported by Jann should no longer be possible.
      
      [1] https://lore.kernel.org/lkml/CAG48ez1mvUuXwg0YPH5ANzhQLpbphqk-ZS+jbRz+H66fvm4FcA@mail.gmail.com/
      [2] https://lore.kernel.org/linux-rt-users/e3470ab357b48bccfbd1f5133b982178a7d2befb.camel@gmx.de/
      Reported-by: Jann Horn <jannh@google.com>
      Suggested-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      e0a043aa
    • mm, slub: make slab_lock() disable irqs with PREEMPT_RT · a2b4ae8b
      Committed by Vlastimil Babka
      We need to disable irqs around slab_lock() (a bit spinlock) to make it
      irq-safe. Most calls to slab_lock() are nested under spin_lock_irqsave() which
      doesn't disable irqs on PREEMPT_RT, so add explicit disabling with PREEMPT_RT.
      The exception is cmpxchg_double_slab() which already disables irqs, so use a
      __slab_[un]lock() variant without irq disable there.
      
      slab_[un]lock() thus needs a flags pointer parameter, which is unused on !RT.
      free_debug_processing() now has two flags variables, which looks odd, but only
      one is actually used - the one used in spin_lock_irqsave() on !RT and the one
      used in slab_lock() on RT.
      
      As a result, __cmpxchg_double_slab() and cmpxchg_double_slab() become
      effectively identical on RT, as both will disable irqs, which is necessary on
      RT as most callers of this function also rely on irqsaving lock operations.
      Thus, assert that irqs are already disabled in __cmpxchg_double_slab() only on
      !RT and also change the VM_BUG_ON assertion to the more standard lockdep_assert
      one.
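
      A sketch of the resulting locking helpers (flags are unused on !RT):

          static __always_inline void slab_lock(struct page *page, unsigned long *flags)
          {
                  if (IS_ENABLED(CONFIG_PREEMPT_RT))
                          local_irq_save(*flags);
                  __slab_lock(page);      /* bit_spin_lock(PG_locked, &page->flags) */
          }

          static __always_inline void slab_unlock(struct page *page, unsigned long *flags)
          {
                  __slab_unlock(page);    /* __bit_spin_unlock(PG_locked, &page->flags) */
                  if (IS_ENABLED(CONFIG_PREEMPT_RT))
                          local_irq_restore(*flags);
          }
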
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      a2b4ae8b
    • mm: slub: make object_map_lock a raw_spinlock_t · 94ef0304
      Committed by Sebastian Andrzej Siewior
      The variable object_map is protected by object_map_lock.  The lock is
      always acquired in debug code and within an already atomic context.
      
      Make object_map_lock a raw_spinlock_t.
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      94ef0304
    • mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context · 5a836bf6
      Committed by Sebastian Andrzej Siewior
      flush_all() flushes a specific SLAB cache on each CPU (where the cache
      is present).  The deactivate_slab()/__free_slab() invocations happen
      within an IPI handler and are problematic for PREEMPT_RT.

      The flush operation is not a frequent operation or a hot path.  The
      per-CPU flush operation can be moved into a workqueue.

      Because a workqueue handler, unlike an IPI handler, does not disable irqs,
      flush_slab() now has to disable them for working with the kmem_cache_cpu
      fields.  deactivate_slab() is safe to call with irqs enabled.
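
      A rough sketch of the mechanism (per-cpu work items instead of IPIs;
      names as recalled from the patch, treat them as indicative):

          struct slub_flush_work {
                  struct work_struct work;
                  struct kmem_cache *s;
                  bool skip;
          };

          static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);

          /* queue flush_cpu_slab() on every cpu that actually has work to do */
          for_each_online_cpu(cpu) {
                  sfw = &per_cpu(slub_flush, cpu);
                  if (!has_cpu_slab(cpu, s)) {
                          sfw->skip = true;
                          continue;
                  }
                  INIT_WORK(&sfw->work, flush_cpu_slab);
                  sfw->skip = false;
                  sfw->s = s;
                  schedule_work_on(cpu, &sfw->work);
          }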
      
      [vbabka@suse.cz: adapt to new SLUB changes]
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      5a836bf6
    • mm, slab: split out the cpu offline variant of flush_slab() · 08beb547
      Committed by Vlastimil Babka
      flush_slab() is called either as part of an IPI handler on a given live cpu,
      or as a cleanup on behalf of another cpu that went offline.  The first case needs to
      protect updating the kmem_cache_cpu fields with disabled irqs. Currently the
      whole call happens with irqs disabled by the IPI handler, but the following
      patch will change from IPI to workqueue, and flush_slab() will have to disable
      irqs (to be replaced with a local lock later) in the critical part.
      
      To prepare for this change, replace the call to flush_slab() for the dead cpu
      handling with an opencoded variant that will not disable irqs nor take a local
      lock.
      Suggested-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      08beb547
    • mm, slub: don't disable irqs in slub_cpu_dead() · 0e7ac738
      Committed by Vlastimil Babka
      slub_cpu_dead() cleans up for an offlined cpu from another cpu and calls only
      functions that are now irq safe, so we don't need to disable irqs anymore.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      0e7ac738
    • mm, slub: only disable irq with spin_lock in __unfreeze_partials() · 7cf9f3ba
      Committed by Vlastimil Babka
      __unfreeze_partials() no longer needs to have irqs disabled, except for making
      the spin_lock operations irq-safe, so convert the spin_locks operations and
      remove the separate irq handling.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      7cf9f3ba
    • mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing · fc1455f4
      Committed by Vlastimil Babka
      Unfreezing partial list can be split to two phases - detaching the list from
      struct kmem_cache_cpu, and processing the list. The whole operation does not
      need to be protected by disabled irqs. Restructure the code to separate the
      detaching (with disabled irqs) and unfreezing (with irq disabling to be reduced
      in the next patch).
      
      Also, unfreeze_partials() can be called from another cpu on behalf of a cpu
      that is being offlined, where disabling irqs on the local cpu has no sense, so
      restructure the code as follows:
      
      - __unfreeze_partials() is the bulk of unfreeze_partials() that processes the
        detached percpu partial list
      - unfreeze_partials() detaches list from current cpu with irqs disabled and
        calls __unfreeze_partials()
      - unfreeze_partials_cpu() is to be called for the offlined cpu so it needs no
        irq disabling, and is called from __flush_cpu_slab()
      - flush_cpu_slab() is for the local cpu thus it needs to call
        unfreeze_partials(). So it can't simply call
        __flush_cpu_slab(smp_processor_id()) anymore and we have to open-code the
        proper calls.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      fc1455f4
    • mm, slub: detach whole partial list at once in unfreeze_partials() · c2f973ba
      Committed by Vlastimil Babka
      Instead of iterating through the live percpu partial list, detach it from the
      kmem_cache_cpu at once. This is simpler and will allow further optimization.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      c2f973ba
    • mm, slub: discard slabs in unfreeze_partials() without irqs disabled · 8de06a6f
      Committed by Vlastimil Babka
      No need for disabled irqs when discarding slabs, so restore them before
      discarding.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      8de06a6f
    • mm, slub: move irq control into unfreeze_partials() · f3ab8b6b
      Committed by Vlastimil Babka
      unfreeze_partials() can be optimized so that it doesn't need irqs disabled for
      the whole time. As the first step, move irq control into the function and
      remove it from the put_cpu_partial() caller.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      f3ab8b6b
    • mm, slub: call deactivate_slab() without disabling irqs · cfdf836e
      Committed by Vlastimil Babka
      The function is now safe to be called with irqs enabled, so move the calls
      outside of irq disabled sections.
      
      When called from ___slab_alloc() -> flush_slab() we have irqs disabled, so to
      reenable them before deactivate_slab() we need to open-code flush_slab() in
      ___slab_alloc() and reenable irqs after modifying the kmem_cache_cpu fields.
      But that means an IRQ handler might meanwhile have assigned a new page to
      kmem_cache_cpu.page, so we have to retry the whole check.
      
      The remaining callers of flush_slab() are the IPI handler which has disabled
      irqs anyway, and slub_cpu_dead() which will be dealt with in the following
      patch.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      cfdf836e
    • mm, slub: make locking in deactivate_slab() irq-safe · 3406e91b
      Committed by Vlastimil Babka
      deactivate_slab() now no longer touches the kmem_cache_cpu structure, so it will
      be possible to call it with irqs enabled. Just convert the spin_lock calls to
      their irq saving/restoring variants to make it irq-safe.
      
      Note we now have to use cmpxchg_double_slab() for irq-safe slab_lock(), because
      in some situations we don't take the list_lock, which would disable irqs.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      3406e91b
    • mm, slub: move reset of c->page and freelist out of deactivate_slab() · a019d201
      Committed by Vlastimil Babka
      deactivate_slab() removes the cpu slab by merging the cpu freelist with slab's
      freelist and putting the slab on the proper node's list. It also sets the
      respective kmem_cache_cpu pointers to NULL.
      
      By extracting the kmem_cache_cpu operations from the function, we can make it
      not dependent on disabled irqs.
      
      Also if we return a single free pointer from ___slab_alloc, we no longer have
      to assign kmem_cache_cpu.page before deactivation or care if somebody preempted
      us and assigned a different page to our kmem_cache_cpu in the process.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      a019d201
    • mm, slub: stop disabling irqs around get_partial() · 4b1f449d
      Committed by Vlastimil Babka
      The function get_partial() does not need to have irqs disabled as a whole. It's
      sufficient to convert spin_lock operations to their irq saving/restoring
      versions.
      
      As a result, it's now possible to reach the page allocator from the slab
      allocator without disabling and re-enabling interrupts on the way.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      4b1f449d
    • mm, slub: check new pages with restored irqs · 9f101ee8
      Committed by Vlastimil Babka
      Building on top of the previous patch, re-enable irqs before checking new
      pages. alloc_debug_processing() is now called with enabled irqs so we need to
      remove VM_BUG_ON(!irqs_disabled()); in check_slab() - there doesn't seem to be
      a need for it anyway.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      9f101ee8
    • mm, slub: validate slab from partial list or page allocator before making it cpu slab · 3f2b77e3
      Committed by Vlastimil Babka
      When we obtain a new slab page from node partial list or page allocator, we
      assign it to kmem_cache_cpu, perform some checks, and if they fail, we undo
      the assignment.
      
      In order to allow doing the checks without irq disabled, restructure the code
      so that the checks are done first, and kmem_cache_cpu.page assignment only
      after they pass.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      3f2b77e3
    • mm, slub: restore irqs around calling new_slab() · 6c1dbb67
      Committed by Vlastimil Babka
      allocate_slab() currently re-enables irqs before calling into the page
      allocator.  It depends on gfpflags_allow_blocking() to determine if it's
      safe to do so.  Now we can instead simply restore irqs before calling it
      through new_slab().  The other caller, early_kmem_cache_node_alloc(), is
      unaffected by this.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      6c1dbb67
    • mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() · fa417ab7
      Committed by Vlastimil Babka
      Continue reducing the irq disabled scope.  Check for per-cpu partial slabs
      first with irqs enabled and then recheck with irqs disabled before grabbing
      the slab page. Mostly preparatory for the following patches.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      fa417ab7
    • mm, slub: do initial checks in ___slab_alloc() with irqs enabled · 0b303fb4
      Committed by Vlastimil Babka
      As another step of shortening irq disabled sections in ___slab_alloc(), delay
      disabling irqs until we pass the initial checks if there is a cached percpu
      slab and it's suitable for our allocation.
      
      Now we have to recheck c->page after actually disabling irqs as an allocation
      in irq handler might have replaced it.
      
      Because we call pfmemalloc_match() as one of the checks, we might hit
      VM_BUG_ON_PAGE(!PageSlab(page)) in PageSlabPfmemalloc in case we get
      interrupted and the page is freed. Thus introduce a pfmemalloc_match_unsafe()
      variant that lacks the PageSlab check.
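
      The unsafe variant is the same check minus the PageSlab assertion; a
      sketch:

          /*
           * Can be called with irqs enabled: the page may have been freed and
           * reused under us, so avoid the PageSlab VM_BUG_ON.
           */
          static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags)
          {
                  if (unlikely(__PageSlabPfmemalloc(page)))
                          return gfp_pfmemalloc_allowed(gfpflags);

                  return true;
          }
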
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      0b303fb4
    • mm, slub: move disabling/enabling irqs to ___slab_alloc() · e500059b
      Committed by Vlastimil Babka
      Currently __slab_alloc() disables irqs around the whole ___slab_alloc().  This
      includes cases where this is not needed, such as when the allocation ends up in
      the page allocator and has to awkwardly enable irqs back based on gfp flags.
      Also the whole kmem_cache_alloc_bulk() is executed with irqs disabled even when
      it hits the __slab_alloc() slow path, and long periods with disabled interrupts
      are undesirable.
      
      As a first step towards reducing irq disabled periods, move irq handling into
      ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu pointer
      from becoming invalid via get_cpu_ptr(), thus preempt_disable(). This does not
      protect against modification by an irq handler, which is still done by disabled
      irq for most of ___slab_alloc(). As a small immediate benefit,
      slab_out_of_memory() from ___slab_alloc() is now called with irqs enabled.
      
      kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables them
      before calling ___slab_alloc(), which then disables them at its discretion. The
      whole kmem_cache_alloc_bulk() operation also disables preemption.
      
      When  ___slab_alloc() calls new_slab() to allocate a new page, re-enable
      preemption, because new_slab() will re-enable interrupts in contexts that allow
      blocking (this will be improved by later patches).
      
      The patch itself will thus increase overhead a bit due to disabled preemption
      (on configs where it matters) and increased disabling/enabling irqs in
      kmem_cache_alloc_bulk(), but that will be gradually improved in the following
      patches.
      
      Note in __slab_alloc() we need to change the #ifdef CONFIG_PREEMPT guard to
      CONFIG_PREEMPT_COUNT to make sure preempt disable/enable is properly paired in
      all configurations. On configs without involuntary preemption and debugging
      the re-read of kmem_cache_cpu pointer is still compiled out as it was before.
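
      A sketch of __slab_alloc() after the move (based on the description; the
      irq handling itself now lives inside ___slab_alloc()):

          static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
                                    unsigned long addr, struct kmem_cache_cpu *c)
          {
                  void *p;

          #ifdef CONFIG_PREEMPT_COUNT
                  /*
                   * We may have been preempted and rescheduled on a different
                   * cpu before disabling preemption.  Reload the cpu area
                   * pointer.
                   */
                  c = get_cpu_ptr(s->cpu_slab);
          #endif

                  p = ___slab_alloc(s, gfpflags, node, addr, c);

          #ifdef CONFIG_PREEMPT_COUNT
                  put_cpu_ptr(s->cpu_slab);
          #endif
                  return p;
          }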
      
      [ Mike Galbraith <efault@gmx.de>: Fix kmem_cache_alloc_bulk() error path ]
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      e500059b
    • mm, slub: simplify kmem_cache_cpu and tid setup · 9b4bc85a
      Committed by Vlastimil Babka
      In slab_alloc_node() and do_slab_free() fastpaths we need to guarantee that
      our kmem_cache_cpu pointer is from the same cpu as the tid value. Currently
      that's done by reading the tid first using this_cpu_read(), then the
      kmem_cache_cpu pointer and verifying we read the same tid using the pointer and
      plain READ_ONCE().
      
      This can be simplified to just fetching kmem_cache_cpu pointer and then reading
      tid using the pointer. That guarantees they are from the same cpu. We don't
      need to read the tid using this_cpu_read() because the value will be validated
      by this_cpu_cmpxchg_double(), making sure we are on the correct cpu and the
      freelist didn't change by anyone preempting us since reading the tid.
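
      In code, the fastpath setup reduces to roughly:

          /* before: a loop re-reading tid via this_cpu_read() until it matched c->tid */
          c = raw_cpu_ptr(s->cpu_slab);
          tid = READ_ONCE(c->tid);
          /* tid is validated later by this_cpu_cmpxchg_double() anyway */
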
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      9b4bc85a
    • mm, slub: restructure new page checks in ___slab_alloc() · 1572df7c
      Committed by Vlastimil Babka
      When we allocate slab object from a newly acquired page (from node's partial
      list or page allocator), we usually also retain the page as a new percpu slab.
      There are two exceptions - when pfmemalloc status of the page doesn't match our
      gfp flags, or when the cache has debugging enabled.
      
      The current code for these decisions is not easy to follow, so restructure it
      and add comments. The new structure will also help with the following changes.
      No functional change.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      1572df7c
    • mm, slub: return slab page from get_partial() and set c->page afterwards · 75c8ff28
      Committed by Vlastimil Babka
      The function get_partial() finds a suitable page on a partial list, acquires
      and returns its freelist and assigns the page pointer to kmem_cache_cpu.
      In later patch we will need more control over the kmem_cache_cpu.page
      assignment, so instead of passing a kmem_cache_cpu pointer, pass a pointer to a
      pointer to a page that get_partial() can fill and the caller can assign the
      kmem_cache_cpu.page pointer. No functional change as all of this still happens
      with disabled IRQs.
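
      A sketch of the interface change (signature only, as described above):

          /* before: get_partial(s, flags, node, c) assigned c->page itself */
          static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
                                   struct page **ret_page);

          /* caller: freelist = get_partial(s, gfpflags, node, &page); c->page = page; */
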
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      75c8ff28