1. 20 Jan 2022, 1 commit
    • mm: percpu: generalize percpu related config · 7ecd19cf
      Committed by Kefeng Wang
      Patch series "mm: percpu: Cleanup percpu first chunk function".
      
      When adding support for the page mapping percpu first chunk allocator on
      arm64, we found a lot of duplicated code in the percpu embed/page first
      chunk allocators.  This patchset cleans that up and should introduce no
      functional change.
      
      The current support status of 'embed' and 'page' across architectures is
      shown below,
      
      	embed: NEED_PER_CPU_EMBED_FIRST_CHUNK
      	page:  NEED_PER_CPU_PAGE_FIRST_CHUNK
      
      		embed	page
      	------------------------
      	arm64	  Y	 Y
      	mips	  Y	 N
      	powerpc	  Y	 Y
      	riscv	  Y	 N
      	sparc	  Y	 Y
      	x86	  Y	 Y
      	------------------------
      
      There are two interfaces to the percpu first chunk allocator,
      
       extern int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
                                      size_t atom_size,
                                      pcpu_fc_cpu_distance_fn_t cpu_distance_fn,
      -                               pcpu_fc_alloc_fn_t alloc_fn,
      -                               pcpu_fc_free_fn_t free_fn);
      +                               pcpu_fc_cpu_to_node_fn_t cpu_to_nd_fn);
      
       extern int __init pcpu_page_first_chunk(size_t reserved_size,
      -                               pcpu_fc_alloc_fn_t alloc_fn,
      -                               pcpu_fc_free_fn_t free_fn,
      -                               pcpu_fc_populate_pte_fn_t populate_pte_fn);
      +                               pcpu_fc_cpu_to_node_fn_t cpu_to_nd_fn);
      
      The pcpu_fc_alloc_fn_t/pcpu_fc_free_fn_t hooks are removed; generic
      pcpu_fc_alloc() and pcpu_fc_free() functions are provided instead and
      called directly from pcpu_embed/page_first_chunk(), as in the sketch
      below.
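      
      As a hedged sketch (not taken from this patch), a call site with the
      slimmed-down embed interface might look as below; pcpu_cpu_distance()
      and early_cpu_to_node() are assumed helpers modeled on the generic NUMA
      code:
      
      	/* Sketch: an arch's setup_per_cpu_areas() after the cleanup. The
      	 * alloc/free callbacks are gone; only sizing, a CPU-distance hook
      	 * and a cpu->node hook remain. */
      	void __init setup_per_cpu_areas(void)
      	{
      		int rc;
      
      		rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
      					    PERCPU_DYNAMIC_RESERVE,
      					    PAGE_SIZE,          /* atom_size */
      					    pcpu_cpu_distance,  /* NUMA distance */
      					    early_cpu_to_node); /* cpu -> node */
      		if (rc < 0)
      			panic("Failed to initialize percpu areas (err=%d)", rc);
      	}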
      
      1) For pcpu_embed_first_chunk(), a pcpu_fc_cpu_to_node_fn_t must be
         provided when the arch supports NUMA.
      
      2) For pcpu_page_first_chunk(), pcpu_fc_populate_pte_fn_t is removed
         too; a generic pcpu_populate_pte(), marked '__weak', is provided. An
         arch that needs a different PTE population function (like x86) can
         provide its own implementation, as sketched after this list.
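      
      A minimal sketch of that '__weak' override pattern (bodies elided; the
      link-time behaviour is the point):
      
      	/* mm/percpu.c (sketch): default definition, used unless an arch
      	 * provides a non-weak one. */
      	void __init __weak pcpu_populate_pte(unsigned long addr)
      	{
      		/* generic walk: allocate missing p4d/pud/pmd/pte levels so
      		 * the first-chunk pages can be mapped at 'addr' */
      	}
      
      	/* arch/x86/kernel/setup_percpu.c (sketch): a non-weak definition
      	 * with the same signature takes precedence at link time. */
      	void __init pcpu_populate_pte(unsigned long addr)
      	{
      		/* x86-specific PTE population */
      	}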
      
      [1] https://github.com/kevin78/linux.git percpu-cleanup
      
      This patch (of 4):
      
      The HAVE_SETUP_PER_CPU_AREA/NEED_PER_CPU_EMBED_FIRST_CHUNK/
      NEED_PER_CPU_PAGE_FIRST_CHUNK/USE_PERCPU_NUMA_NODE_ID configs have
      duplicate definitions on every platform that subscribes to them.
      
      Move them into mm, drop the redundant per-arch definitions, and instead
      just select them on the applicable platforms, for example:
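      
      A sketch of the shape of the change in Kconfig terms (fragments are
      illustrative of the pattern, not the literal hunks):
      
      	# mm/Kconfig (sketch): one authoritative definition each
      	config HAVE_SETUP_PER_CPU_AREA
      		bool
      
      	config NEED_PER_CPU_EMBED_FIRST_CHUNK
      		bool
      
      	config NEED_PER_CPU_PAGE_FIRST_CHUNK
      		bool
      
      	config USE_PERCPU_NUMA_NODE_ID
      		bool
      
      	# arch/arm64/Kconfig (sketch): the platform drops its local copies
      	# and merely selects what applies
      	config ARM64
      		select HAVE_SETUP_PER_CPU_AREA
      		select NEED_PER_CPU_EMBED_FIRST_CHUNK
      		select NEED_PER_CPU_PAGE_FIRST_CHUNK
      		select USE_PERCPU_NUMA_NODE_ID if NUMA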
      
      Link: https://lkml.kernel.org/r/20211216112359.103822-1-wangkefeng.wang@huawei.com
      Link: https://lkml.kernel.org/r/20211216112359.103822-2-wangkefeng.wang@huawei.com
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
      Cc: Will Deacon <will@kernel.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Rafael J. Wysocki" <rafael@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 16 Jan 2022, 1 commit
  3. 15 Dec 2021, 1 commit
  4. 14 Dec 2021, 1 commit
  5. 25 Nov 2021, 1 commit
  6. 07 Nov 2021, 1 commit
  7. 28 Oct 2021, 3 commits
  8. 26 Oct 2021, 2 commits
    • irq: remove handle_domain_{irq,nmi}() · 0953fb26
      Committed by Mark Rutland
      Now that entry code handles IRQ entry (including setting the IRQ regs)
      before calling irqchip code, irqchip code can safely call
      generic_handle_domain_irq(), and there's no functional reason for it to
      call handle_domain_irq().
      
      Let's cement this split of responsibility and remove handle_domain_irq()
      entirely, updating irqchip drivers to call generic_handle_domain_irq().
      
      For consistency, handle_domain_nmi() is similarly removed and replaced
      with a generic_handle_domain_nmi() function which also does not perform
      any entry logic.
      
      Previously handle_domain_{irq,nmi}() had a WARN_ON() which would fire
      when they were called in an inappropriate context. So that we can
      identify similar issues going forward, similar WARN_ON_ONCE() logic is
      added to the generic_handle_*() functions, and comments are updated for
      clarity and consistency.
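      
      As a hedged illustration of the resulting division of labour, a
      converted root irqchip handler might look as below; the example_* names
      are hypothetical, while generic_handle_domain_irq() and its int return
      are the real API:
      
      	/* Sketch: by the time this runs, the arch entry code has already
      	 * performed the IRQ entry accounting and set the irq regs, so the
      	 * driver only maps the hwirq to its handler(s). */
      	static struct irq_domain *example_domain;	/* hypothetical */
      	static u32 example_read_pending(void);		/* hypothetical MMIO read */
      
      	static void example_handle_irq(struct pt_regs *regs)
      	{
      		u32 hwirq = example_read_pending();
      
      		/* No entry logic here, just domain dispatch; -EINVAL means
      		 * no mapping exists, which we treat as spurious. */
      		if (generic_handle_domain_irq(example_domain, hwirq))
      			pr_warn_ratelimited("spurious hwirq %u\n", hwirq);
      	}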
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
    • irq: arm64: perform irqentry in entry code · 26dc1293
      Committed by Mark Rutland
      In preparation for removing HANDLE_DOMAIN_IRQ_IRQENTRY, have arch/arm64
      perform all the irqentry accounting in its entry code.
      
      As arch/arm64 already performs portions of the irqentry logic in
      enter_from_kernel_mode() and exit_to_kernel_mode(), including
      rcu_irq_{enter,exit}(), the only additional calls that need to be made
      are to irq_{enter,exit}_rcu(). Removing the calls to
      rcu_irq_{enter,exit}() from handle_domain_irq() ensures that we inform
      RCU once per IRQ entry and will correctly identify quiescent periods.
      
      Since we should not call irq_{enter,exit}_rcu() when entering a
      pseudo-NMI, el1_interrupt() is reworked to have separate __el1_irq() and
      __el1_pnmi() paths for regular IRQ and pseudo-NMI entry, with
      irq_{enter,exit}_rcu() only called for the former.
      
      In preparation for removing HANDLE_DOMAIN_IRQ, the irq regs are managed
      in do_interrupt_handler() for both regular IRQ and pseudo-NMI. This is
      currently redundant, but not harmful.
      
      For clarity the preemption logic is moved into __el1_irq(). We should
      never preempt within a pseudo-NMI, and arm64_enter_nmi() already
      enforces this by incrementing the preempt_count, but it's clearer if we
      never invoke the preemption logic when entering a pseudo-NMI.
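      
      A condensed sketch of the reworked shape (faithful to the structure
      described above, but abbreviated; treat the details as an
      approximation):
      
      	static void do_interrupt_handler(struct pt_regs *regs,
      					 void (*handler)(struct pt_regs *))
      	{
      		/* irq regs are managed here for both IRQ and pseudo-NMI */
      		struct pt_regs *old_regs = set_irq_regs(regs);
      
      		handler(regs);
      		set_irq_regs(old_regs);
      	}
      
      	static void noinstr __el1_pnmi(struct pt_regs *regs,
      				       void (*handler)(struct pt_regs *))
      	{
      		arm64_enter_nmi(regs);	/* no irq_enter_rcu() on this path */
      		do_interrupt_handler(regs, handler);
      		arm64_exit_nmi(regs);
      	}
      
      	static void noinstr __el1_irq(struct pt_regs *regs,
      				      void (*handler)(struct pt_regs *))
      	{
      		enter_from_kernel_mode(regs);
      
      		irq_enter_rcu();
      		do_interrupt_handler(regs, handler);
      		irq_exit_rcu();
      
      		arm64_preempt_schedule_irq();	/* IRQ path only */
      
      		exit_to_kernel_mode(regs);
      	}
      
      	static void noinstr el1_interrupt(struct pt_regs *regs,
      					  void (*handler)(struct pt_regs *))
      	{
      		if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
      		    !interrupts_enabled(regs))
      			__el1_pnmi(regs, handler);
      		else
      			__el1_irq(regs, handler);
      	}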
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Pingfan Liu <kernelfans@gmail.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
  9. 25 Oct 2021, 1 commit
    • irq: add a (temporary) CONFIG_HANDLE_DOMAIN_IRQ_IRQENTRY · 2fe35f8e
      Committed by Mark Rutland
      Going forward we want architecture/entry code to perform all the
      necessary work to enter/exit IRQ context, with irqchip code merely
      handling the mapping of the interrupt to any handler(s). Among other
      reasons, this is necessary to consistently fix some longstanding issues
      with the ordering of lockdep/RCU/tracing instrumentation which many
      architectures get wrong today in their entry code.
      
      Importantly, rcu_irq_{enter,exit}() must be called precisely once per
      IRQ exception, so that rcu_is_cpu_rrupt_from_idle() can correctly
      identify when an interrupt was taken from an idle context which must be
      explicitly preempted. Currently handle_domain_irq() calls
      rcu_irq_{enter,exit}() via irq_{enter,exit}(), but entry code needs to
      be able to call rcu_irq_{enter,exit}() earlier for correct ordering
      across lockdep/RCU/tracing updates for sequences such as:
      
        lockdep_hardirqs_off(CALLER_ADDR0);
        rcu_irq_enter();
        trace_hardirqs_off_finish();
      
      To permit each architecture to be converted to the new style in turn,
      this patch adds a new CONFIG_HANDLE_DOMAIN_IRQ_IRQENTRY selected by all
      current users of HANDLE_DOMAIN_IRQ, which gates the existing behaviour.
      When CONFIG_HANDLE_DOMAIN_IRQ_IRQENTRY is not selected,
      handle_domain_irq() requires entry code to perform the
      irq_{enter,exit}() work, with an explicit check for this matching the
      style of handle_domain_nmi().
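      
      A hedged sketch of the gated behaviour (condensed from the description
      above, not the literal patch):
      
      	int handle_domain_irq(struct irq_domain *domain,
      			      unsigned int hwirq, struct pt_regs *regs)
      	{
      		struct pt_regs *old_regs = set_irq_regs(regs);
      		int ret;
      
      	#ifdef CONFIG_HANDLE_DOMAIN_IRQ_IRQENTRY
      		/* legacy behaviour: do the entry accounting here */
      		irq_enter();
      		ret = generic_handle_domain_irq(domain, hwirq);
      		irq_exit();
      	#else
      		/* new contract: arch entry code performed irq_enter() */
      		WARN_ON(!in_irq());
      		ret = generic_handle_domain_irq(domain, hwirq);
      	#endif
      
      		set_irq_regs(old_regs);
      		return ret;
      	}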
      
      Subsequent patches will:
      
      1) Add the necessary IRQ entry accounting to each architecture in turn,
         dropping CONFIG_HANDLE_DOMAIN_IRQ_IRQENTRY from that architecture's
         Kconfig.
      
      2) Remove CONFIG_HANDLE_DOMAIN_IRQ_IRQENTRY once it is no longer
         selected.
      
      3) Convert irqchip drivers to consistently use
         generic_handle_domain_irq() rather than handle_domain_irq().
      
      4) Remove handle_domain_irq() and CONFIG_HANDLE_DOMAIN_IRQ.
      
      ... which should leave us with a clear split of responsibility across the
      entry and irqchip code, making it possible to perform additional
      cleanups and fixes for the aforementioned longstanding issues with entry
      code.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
  10. 23 Oct 2021, 1 commit
  11. 22 Oct 2021, 3 commits
  12. 21 Oct 2021, 1 commit
  13. 15 Oct 2021, 1 commit
    • sched: Add cluster scheduler level in core and related Kconfig for ARM64 · 778c558f
      Committed by Barry Song
      This patch adds a scheduler level for clusters and automatically enables
      load balancing among clusters. It directly benefits workloads that are
      hungry for shared resources such as memory bandwidth and caches.
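      
      The mechanism, sketched: a CLS domain level slots in between SMT and MC
      in the scheduler's default topology table (an approximation of
      kernel/sched/topology.c after the patch, not the literal hunk):
      
      	static struct sched_domain_topology_level default_topology[] = {
      	#ifdef CONFIG_SCHED_SMT
      		{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
      	#endif
      	#ifdef CONFIG_SCHED_CLUSTER
      		/* new level: balance across CPUs sharing cluster resources
      		 * (e.g. an internal cache) before spilling to the die */
      		{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
      	#endif
      	#ifdef CONFIG_SCHED_MC
      		{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
      	#endif
      		{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
      		{ NULL, },
      	};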
      
      Testing has been done widely on two different hardware configurations of
      Kunpeng920:
      
       24 cores in one NUMA node (6 clusters per node);
       32 cores in one NUMA node (8 clusters per node)
      
      Workloads run on either one NUMA node or four NUMA nodes, so the effect
      of cluster spreading can be estimated with and without NUMA load
      balancing.
      
      * Stream benchmark:
      
      4threads stream (on 1NUMA * 24cores = 24cores)
                      stream                 stream
                      w/o patch              w/ patch
      MB/sec copy     29929.64 (   0.00%)    32932.68 (  10.03%)
      MB/sec scale    29861.10 (   0.00%)    32710.58 (   9.54%)
      MB/sec add      27034.42 (   0.00%)    32400.68 (  19.85%)
      MB/sec triad    27225.26 (   0.00%)    31965.36 (  17.41%)
      
      6threads stream (on 1NUMA * 24cores = 24cores)
                      stream                 stream
                      w/o patch              w/ patch
      MB/sec copy     40330.24 (   0.00%)    42377.68 (   5.08%)
      MB/sec scale    40196.42 (   0.00%)    42197.90 (   4.98%)
      MB/sec add      37427.00 (   0.00%)    41960.78 (  12.11%)
      MB/sec triad    37841.36 (   0.00%)    42513.64 (  12.35%)
      
      12threads stream (on 1NUMA * 24cores = 24cores)
                      stream                 stream
                      w/o patch              w/ patch
      MB/sec copy     52639.82 (   0.00%)    53818.04 (   2.24%)
      MB/sec scale    52350.30 (   0.00%)    53253.38 (   1.73%)
      MB/sec add      53607.68 (   0.00%)    55198.82 (   2.97%)
      MB/sec triad    54776.66 (   0.00%)    56360.40 (   2.89%)
      
      Thus, it can help memory-bound workloads, especially under medium load.
      Similar improvement is also seen in lkp-pbzip2:
      
      * lkp-pbzip2 benchmark
      
      2-96 threads (on 4NUMA * 24cores = 96cores)
                        lkp-pbzip2              lkp-pbzip2
                        w/o patch               w/ patch
      Hmean     tput-2   11062841.57 (   0.00%)  11341817.51 *   2.52%*
      Hmean     tput-5   26815503.70 (   0.00%)  27412872.65 *   2.23%*
      Hmean     tput-8   41873782.21 (   0.00%)  43326212.92 *   3.47%*
      Hmean     tput-12  61875980.48 (   0.00%)  64578337.51 *   4.37%*
      Hmean     tput-21 105814963.07 (   0.00%) 111381851.01 *   5.26%*
      Hmean     tput-30 150349470.98 (   0.00%) 156507070.73 *   4.10%*
      Hmean     tput-48 237195937.69 (   0.00%) 242353597.17 *   2.17%*
      Hmean     tput-79 360252509.37 (   0.00%) 362635169.23 *   0.66%*
      Hmean     tput-96 394571737.90 (   0.00%) 400952978.48 *   1.62%*
      
      2-24 threads (on 1NUMA * 24cores = 24cores)
                       lkp-pbzip2               lkp-pbzip2
                       w/o patch                w/ patch
      Hmean     tput-2   11071705.49 (   0.00%)  11296869.10 *   2.03%*
      Hmean     tput-4   20782165.19 (   0.00%)  21949232.15 *   5.62%*
      Hmean     tput-6   30489565.14 (   0.00%)  33023026.96 *   8.31%*
      Hmean     tput-8   40376495.80 (   0.00%)  42779286.27 *   5.95%*
      Hmean     tput-12  61264033.85 (   0.00%)  62995632.78 *   2.83%*
      Hmean     tput-18  86697139.39 (   0.00%)  86461545.74 (  -0.27%)
      Hmean     tput-24 104854637.04 (   0.00%) 104522649.46 *  -0.32%*
      
      In the case of 6 threads and 8 threads, we see the greatest performance
      improvement.
      
      Similar improvement can be seen on lkp-pixz though the improvement is
      smaller:
      
      * lkp-pixz benchmark
      
      2-24 threads lkp-pixz (on 1NUMA * 24cores = 24cores)
                        lkp-pixz               lkp-pixz
                        w/o patch              w/ patch
      Hmean     tput-2   6486981.16 (   0.00%)  6561515.98 *   1.15%*
      Hmean     tput-4  11645766.38 (   0.00%) 11614628.43 (  -0.27%)
      Hmean     tput-6  15429943.96 (   0.00%) 15957350.76 *   3.42%*
      Hmean     tput-8  19974087.63 (   0.00%) 20413746.98 *   2.20%*
      Hmean     tput-12 28172068.18 (   0.00%) 28751997.06 *   2.06%*
      Hmean     tput-18 39413409.54 (   0.00%) 39896830.55 *   1.23%*
      Hmean     tput-24 49101815.85 (   0.00%) 49418141.47 *   0.64%*
      
      * SPECrate benchmark
      
      4,8,16 copies mcf_r(on 1NUMA * 32cores = 32cores)
      		Base     	 	Base
      		Run Time   	 	Rate
      		-------  	 	---------
      4 Copies	w/o 580 (w/ 570)       	w/o 11.1 (w/ 11.3)
      8 Copies	w/o 647 (w/ 605)       	w/o 20.0 (w/ 21.4, +7%)
      16 Copies	w/o 844 (w/ 844)       	w/o 30.6 (w/ 30.6)
      
      32 Copies(on 4NUMA * 32 cores = 128cores)
      [w/o patch]
                       Base     Base        Base
      Benchmarks       Copies  Run Time     Rate
      --------------- -------  ---------  ---------
      500.perlbench_r      32        584       87.2  *
      502.gcc_r            32        503       90.2  *
      505.mcf_r            32        745       69.4  *
      520.omnetpp_r        32       1031       40.7  *
      523.xalancbmk_r      32        597       56.6  *
      525.x264_r            1         --            CE
      531.deepsjeng_r      32        336      109    *
      541.leela_r          32        556       95.4  *
      548.exchange2_r      32        513      163    *
      557.xz_r             32        530       65.2  *
       Est. SPECrate2017_int_base              80.3
      
      [w/ patch]
                        Base     Base        Base
      Benchmarks       Copies  Run Time     Rate
      --------------- -------  ---------  ---------
      500.perlbench_r      32        580      87.8 (+0.688%)  *
      502.gcc_r            32        477      95.1 (+5.432%)  *
      505.mcf_r            32        644      80.3 (+13.574%) *
      520.omnetpp_r        32        942      44.6 (+9.58%)   *
      523.xalancbmk_r      32        560      60.4 (+6.714%)  *
      525.x264_r            1         --           CE
      531.deepsjeng_r      32        337      109  (+0.000%) *
      541.leela_r          32        554      95.6 (+0.210%) *
      548.exchange2_r      32        515      163  (+0.000%) *
      557.xz_r             32        524      66.0 (+1.227%) *
       Est. SPECrate2017_int_base              83.7 (+4.062%)
      
      On the other hand, it is slightly helpful to CPU-bound tasks like
      kernbench:
      
      * 24-96 threads kernbench (on 4NUMA * 24cores = 96cores)
                           kernbench              kernbench
                           w/o cluster            w/ cluster
      Min       user-24    12054.67 (   0.00%)    12024.19 (   0.25%)
      Min       syst-24     1751.51 (   0.00%)     1731.68 (   1.13%)
      Min       elsp-24      600.46 (   0.00%)      598.64 (   0.30%)
      Min       user-48    12361.93 (   0.00%)    12315.32 (   0.38%)
      Min       syst-48     1917.66 (   0.00%)     1892.73 (   1.30%)
      Min       elsp-48      333.96 (   0.00%)      332.57 (   0.42%)
      Min       user-96    12922.40 (   0.00%)    12921.17 (   0.01%)
      Min       syst-96     2143.94 (   0.00%)     2110.39 (   1.56%)
      Min       elsp-96      211.22 (   0.00%)      210.47 (   0.36%)
      Amean     user-24    12063.99 (   0.00%)    12030.78 *   0.28%*
      Amean     syst-24     1755.20 (   0.00%)     1735.53 *   1.12%*
      Amean     elsp-24      601.60 (   0.00%)      600.19 (   0.23%)
      Amean     user-48    12362.62 (   0.00%)    12315.56 *   0.38%*
      Amean     syst-48     1921.59 (   0.00%)     1894.95 *   1.39%*
      Amean     elsp-48      334.10 (   0.00%)      332.82 *   0.38%*
      Amean     user-96    12925.27 (   0.00%)    12922.63 (   0.02%)
      Amean     syst-96     2146.66 (   0.00%)     2122.20 *   1.14%*
      Amean     elsp-96      211.96 (   0.00%)      211.79 (   0.08%)
      
      Note this patch isn't a universal win; it might hurt workloads that
      benefit from packing. Tasks that want to exploit the lower communication
      latency within one cluster won't necessarily be packed into one cluster
      while the kernel is unaware of clusters; they merely have some chance of
      being packed there at random, and this patch makes them more likely to
      be spread.
      Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
      Tested-by: Yicong Yang <yangyicong@hisilicon.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  14. 11 Oct 2021, 1 commit
  15. 07 Oct 2021, 1 commit
  16. 01 Oct 2021, 2 commits
  17. 14 Sep 2021, 1 commit
  18. 25 Aug 2021, 1 commit
    • Partially revert "arm64/mm: drop HAVE_ARCH_PFN_VALID" · 3eb9cdff
      Committed by Will Deacon
      This partially reverts commit 16c9afc7.
      
      Alex Bee reports a regression in 5.14 on their RK3328 SoC when
      configuring the PL330 DMA controller:
      
       | ------------[ cut here ]------------
       | WARNING: CPU: 2 PID: 373 at kernel/dma/mapping.c:235 dma_map_resource+0x68/0xc0
       | Modules linked in: spi_rockchip(+) fuse
       | CPU: 2 PID: 373 Comm: systemd-udevd Not tainted 5.14.0-rc7 #1
       | Hardware name: Pine64 Rock64 (DT)
       | pstate: 80000005 (Nzcv daif -PAN -UAO -TCO BTYPE=--)
       | pc : dma_map_resource+0x68/0xc0
       | lr : pl330_prep_slave_fifo+0x78/0xd0
      
      This appears to be because dma_map_resource() is being called for a
      physical address which does not correspond to a memory address yet does
      have a valid 'struct page' due to the way in which the vmemmap is
      constructed.
      
      Prior to 16c9afc7 ("arm64/mm: drop HAVE_ARCH_PFN_VALID"), the arm64
      implementation of pfn_valid() called memblock_is_memory() to return
      'false' for such regions and the DMA mapping request would proceed.
      However, now that we are using the generic implementation where only the
      presence of the memory map entry is considered, we return 'true' and
      erroneously fail with DMA_MAPPING_ERROR because we identify the region
      as DRAM.
      
      Although fixing this in the DMA mapping code is arguably the right fix,
      it is a risky, cross-architecture change at this stage in the cycle. So
      just revert arm64 back to its old pfn_valid() implementation for v5.14.
      The change to the generic pfn_valid() code is preserved from the original
      patch, so as to avoid impacting other architectures.
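      
      A condensed sketch of the restored arm64 behaviour (abbreviated from the
      pre-16c9afc7 implementation; the final memblock check is the part that
      matters here):
      
      	int pfn_valid(unsigned long pfn)
      	{
      		phys_addr_t addr = PFN_PHYS(pfn);
      		struct mem_section *ms;
      
      		/* reject pfns whose upper bits don't survive the round-trip */
      		if (PHYS_PFN(addr) != pfn)
      			return 0;
      
      		if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
      			return 0;
      
      		ms = __pfn_to_section(pfn);
      		if (!valid_section(ms))
      			return 0;
      
      		/* unlike the generic version, a no-map region that has a
      		 * memmap entry is still reported as invalid here, so
      		 * dma_map_resource() on it proceeds instead of failing */
      		return memblock_is_memory(addr);
      	}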
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Reported-by: Alex Bee <knaerzche@gmail.com>
      Link: https://lore.kernel.org/r/d3a3c828-b777-faf8-e901-904995688437@gmail.com
      Signed-off-by: Will Deacon <will@kernel.org>
  19. 16 Aug 2021, 1 commit
  20. 03 Aug 2021, 1 commit
  21. 30 Jul 2021, 1 commit
    • asm-generic: reverse GENERIC_{STRNCPY_FROM,STRNLEN}_USER symbols · e6226997
      Committed by Arnd Bergmann
      Most architectures do not need a custom implementation, and in most
      cases the generic implementation is preferred, so change the polarity
      of these Kconfig symbols to require architectures to select them when
      they provide their own version.
      
      The new name is CONFIG_ARCH_HAS_{STRNCPY_FROM,STRNLEN}_USER.
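      
      Sketched in Kconfig terms (fragments are illustrative; parisc stands in
      for any architecture that keeps its own implementation):
      
      	# old polarity (sketch): an arch opted IN to the generic code
      	config ARM64
      		select GENERIC_STRNCPY_FROM_USER
      		select GENERIC_STRNLEN_USER
      
      	# new polarity (sketch): only an arch providing its OWN version
      	# selects the symbol; everyone else gets the generic code
      	config PARISC
      		select ARCH_HAS_STRNCPY_FROM_USER
      		select ARCH_HAS_STRNLEN_USER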
      
      The remaining architectures at the moment are: ia64, mips, parisc,
      um and xtensa. We should probably convert these as well, but
      I was not sure how far to take this series. Thomas Bogendoerfer
      had some concerns about converting mips but may still do some
      more detailed measurements to see which version is better.
      
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: linux-ia64@vger.kernel.org
      Cc: linux-mips@vger.kernel.org
      Cc: linux-parisc@vger.kernel.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-um@lists.infradead.org
      Cc: linux-xtensa@linux-xtensa.org
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Helge Deller <deller@gmx.de> # parisc
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  22. 13 Jul 2021, 1 commit
  23. 01 Jul 2021, 4 commits
  24. 30 Jun 2021, 1 commit
  25. 23 Jun 2021, 1 commit
  26. 15 Jun 2021, 2 commits
  27. 26 May 2021, 2 commits
  28. 06 May 2021, 2 commits