1. 09 Feb, 2021 (1 commit)
  2. 27 Jan, 2021 (3 commits)
  3. 19 Jan, 2021 (7 commits)
  4. 31 Dec, 2020 (2 commits)
  5. 30 Dec, 2020 (1 commit)
  6. 23 Dec, 2020 (1 commit)
  7. 21 Dec, 2020 (1 commit)
  8. 20 Dec, 2020 (1 commit)
  9. 16 Dec, 2020 (14 commits)
    • s390/idle: allow arch_cpu_idle() to be kprobed · 8d93b701
      Committed by Heiko Carstens
      Remove NOKPROBE_SYMBOL() for arch_cpu_idle(). This might have made
      sense when enabled_wait() (aka arch_cpu_idle()) was called from
      udelay(). But now there is no reason why s390 should be the only
      architecture that doesn't allow arch_cpu_idle() to be probed.
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
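      As an illustration of what this unblocks, here is a minimal kprobes
      module that attaches to arch_cpu_idle(). This is a generic kprobes
      sketch, not part of the commit itself; while NOKPROBE_SYMBOL() was in
      place, register_kprobe() would fail on s390 for this symbol.

        #include <linux/module.h>
        #include <linux/kprobes.h>

        /* Pre-handler: runs just before the probed instruction. */
        static int handler_pre(struct kprobe *p, struct pt_regs *regs)
        {
                pr_info("arch_cpu_idle() entered\n");
                return 0;
        }

        static struct kprobe kp = {
                .symbol_name = "arch_cpu_idle",
                .pre_handler = handler_pre,
        };

        static int __init probe_init(void)
        {
                /* Fails for symbols on the kprobes blacklist. */
                return register_kprobe(&kp);
        }

        static void __exit probe_exit(void)
        {
                unregister_kprobe(&kp);
        }

        module_init(probe_init);
        module_exit(probe_exit);
        MODULE_LICENSE("GPL");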
    • s390/idle: remove raw_local_irq_save()/restore() from arch_cpu_idle() · 7494755a
      Committed by Heiko Carstens
      arch_cpu_idle() gets called with interrupts disabled,
      and psw_idle() returns with interrupts disabled.
      No reason to use raw_local_irq_save() / restore().
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
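      Schematically, the resulting idle path looks like this. This is a
      sketch only: psw_idle()'s real signature takes the idle data and PSW
      mask, simplified here to show the interrupt state contract.

        extern void psw_idle(void);     /* simplified signature */

        void arch_cpu_idle(void)
        {
                /*
                 * Called with irqs disabled; the idle PSW loaded by
                 * psw_idle() enables them, and psw_idle() returns with
                 * them disabled again, so no save/restore pair is needed.
                 */
                psw_idle();
                raw_local_irq_enable(); /* idle-loop contract of that era */
        }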
    • s390/idle: merge enabled_wait() and arch_cpu_idle() · 44292c86
      Committed by Heiko Carstens
      The only caller of enabled_wait() besides arch_cpu_idle() was
      udelay(). Since that call doesn't exist anymore, merge enabled_wait()
      and arch_cpu_idle().
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
    • s390/delay: remove udelay_simple() · e0d62dcb
      Committed by Heiko Carstens
      udelay_simple() callers can make use of the now simplified udelay()
      implementation. No need to keep it.
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
    • s390/irq: select HAVE_IRQ_EXIT_ON_IRQ_STACK · 9ceed998
      Committed by Heiko Carstens
      irq_exit() is always called on the async stack. Therefore select
      HAVE_IRQ_EXIT_ON_IRQ_STACK and get a tiny optimization in
      invoke_softirq().
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
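      The optimization in question sits in the generic softirq code: with
      HAVE_IRQ_EXIT_ON_IRQ_STACK selected, softirqs can run directly on the
      current stack instead of switching to a dedicated one. Roughly, from
      kernel/softirq.c of that era (abridged):

        static inline void invoke_softirq(void)
        {
                if (ksoftirqd_running(local_softirq_pending()))
                        return;

                if (!force_irqthreads) {
        #ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
                        /*
                         * Safe to run softirqs on the current stack:
                         * it is the irq stack and is near empty here.
                         */
                        __do_softirq();
        #else
                        do_softirq_own_stack();
        #endif
                } else {
                        wakeup_softirqd();
                }
        }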
    • s390/delay: simplify udelay · dd6cfe55
      Committed by Heiko Carstens
      udelay is implemented using quite subtle tricks that make it possible
      to load an idle psw and wait for an interrupt even in irq context or
      when interrupts are disabled. The handling (or better: non-handling)
      of softirqs is also taken into account.
      
      All this is done to optimize for something which under normal
      circumstances should never happen: calling udelay to busy wait.
      Therefore get rid of the whole complexity and just busy loop like
      other architectures do.
      
      It would have been possible to use diag 0x44 instead of cpu_relax()
      in the busy loop, but we have seen too many bad things happen with
      diag 0x44, so it seems better to simply busy loop.
      
      Also note that with this new implementation kernel preemption works
      within the udelay loop. This did not work before.
      
      To get a feeling for what the former code optimized for: IPL'ing a
      kernel with 'defconfig' and afterwards compiling a kernel ends with a
      total of zero udelay calls.
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
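      The resulting implementation is essentially a plain TOD-clock busy
      loop. A sketch that approximates arch/s390/lib/delay.c rather than
      quoting it (the shift by 12 converts microseconds into TOD clock
      units):

        #include <asm/timex.h>

        void __udelay(unsigned long usecs)
        {
                /* 1 microsecond == 1 << 12 TOD clock units */
                u64 end = get_tod_clock_monotonic() + ((u64)usecs << 12);

                while (get_tod_clock_monotonic() < end)
                        cpu_relax();    /* no diag 0x44, just spin */
        }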
    • s390/test_unwind: use timer instead of udelay · 91c2bad6
      Committed by Heiko Carstens
      Instead of registering an external interrupt handler and relying on
      the udelay implementation, simply use a timer to get into irq context.
      Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
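      The pattern, as a sketch with illustrative names (unwind_timer and
      unwind_done are not from the actual test_unwind.c): arm a timer, run
      the test from its callback, which executes in timer (softirq)
      context, and wait for it to finish.

        #include <linux/timer.h>
        #include <linux/completion.h>

        static struct timer_list unwind_timer;
        static DECLARE_COMPLETION(unwind_done);

        static void unwind_timer_fn(struct timer_list *t)
        {
                /* Runs in irq (timer softirq) context: unwind here. */
                complete(&unwind_done);
        }

        static int run_irq_context_test(void)
        {
                timer_setup(&unwind_timer, unwind_timer_fn, 0);
                mod_timer(&unwind_timer, jiffies + 1);
                wait_for_completion(&unwind_done);
                del_timer_sync(&unwind_timer);
                return 0;
        }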
    • s390/test_unwind: fix CALL_ON_STACK tests · f22b9c21
      Committed by Heiko Carstens
      The CALL_ON_STACK tests use the no_dat stack to switch to a different
      stack for unwinding tests. If an interrupt or machine check happens
      while using that stack, and the previous context was on the async
      stack, the interrupt / machine check entry code (SWITCH_ASYNC) will
      assume that the previous context did not use the async stack and
      happily use the async stack again.
      
      This will lead to stack corruption of the previous context.
      
      To solve this, disable both interrupts and machine checks before
      switching to the no_dat stack.
      
      Fixes: 7868249f ("s390/test_unwind: add CALL_ON_STACK tests")
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
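      The shape of the fix, as a sketch (assuming s390's
      local_mcck_disable()/local_mcck_enable() helpers; function and stack
      names are illustrative, not the literal test diff):

        unsigned long flags;

        /*
         * Block both interrupts and machine checks: if entry code ran
         * now, it would assume the async stack is free and corrupt the
         * previous context still saved there.
         */
        local_irq_save(flags);
        local_mcck_disable();
        CALL_ON_STACK(test_unwind_on_stack, no_dat_stack, 0);
        local_mcck_enable();
        local_irq_restore(flags);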
    • s390: make calls to TRACE_IRQS_OFF/TRACE_IRQS_ON balanced · f0c7cf13
      Committed by Heiko Carstens
      In the udelay case, CIF_IGNORE_IRQ is set. This leads to unbalanced
      calls of TRACE_IRQS_OFF and TRACE_IRQS_ON; that is, from lockdep's
      point of view TRACE_IRQS_ON is called once too often.
      
      This doesn't fix any real bug, just makes the calls balanced.
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
    • s390: always clear kernel stack backchain before calling functions · 9365965d
      Committed by Heiko Carstens
      Clear the kernel stack backchain before potentially calling the
      lockdep trace_hardirqs_off/on functions. Without this, walking the
      kernel backchain, e.g. during a panic, might stop too early.
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
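      Why the backchain matters, schematically: on s390 the first slot of
      each stack frame points to the caller's frame, and a walker follows
      that chain until it reads zero. A sketch of such a walk (not the
      actual unwinder):

        struct stack_frame {
                unsigned long back_chain;       /* 0 terminates the walk */
                /* ... remainder of the register save area ... */
        };

        static void walk_backchain(unsigned long sp)
        {
                struct stack_frame *sf = (struct stack_frame *)sp;

                while (sf && sf->back_chain) {
                        /* report the frame at sf here */
                        sf = (struct stack_frame *)sf->back_chain;
                }
        }

      Entry code must set this slot correctly before calling any function,
      since a stale value left over from an earlier use of the stack can
      terminate the walk too early.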
    • mm: simplify follow_pte{,pmd} · ff5c19ed
      Committed by Christoph Hellwig
      Merge __follow_pte_pmd, follow_pte_pmd and follow_pte into a single
      follow_pte function and just pass two additional NULL arguments for the
      two previous follow_pte callers.
      
      [sfr@canb.auug.org.au: merge fix for "s390/pci: remove races against pte updates"]
        Link: https://lkml.kernel.org/r/20201111221254.7f6a3658@canb.auug.org.au
      
      Link: https://lkml.kernel.org/r/20201029101432.47011-3-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
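      For a previous follow_pte() caller the conversion is mechanical. A
      sketch, assuming the merged signature
      follow_pte(mm, address, range, ptepp, pmdpp, ptlp):

        pte_t *ptep;
        spinlock_t *ptl;
        unsigned long pfn;

        /* NULL range and NULL pmdpp: only the pte level is of interest. */
        if (!follow_pte(vma->vm_mm, address, NULL, &ptep, NULL, &ptl)) {
                pfn = pte_pfn(*ptep);
                pte_unmap_unlock(ptep, ptl);
                /* ... use pfn ... */
        }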
    • arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC · 5d6ad668
      Committed by Mike Rapoport
      The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must
      never fail.  With this assumption it wouldn't be safe to allow general
      usage of this function.
      
      Moreover, some architectures that implement __kernel_map_pages() have this
      function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap
      pages when page allocation debugging is disabled at runtime.
      
      As all the users of __kernel_map_pages() were converted to use
      debug_pagealloc_map_pages() it is safe to make it available only when
      DEBUG_PAGEALLOC is set.
      
      Link: https://lkml.kernel.org/r/20201109192128.960-4-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
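      The resulting guard looks roughly like this (an abridged sketch of
      the include/linux/mm.h helpers from this series, not a verbatim
      quote):

        #ifdef CONFIG_DEBUG_PAGEALLOC
        extern void __kernel_map_pages(struct page *page,
                                       int numpages, int enable);

        static inline void
        debug_pagealloc_map_pages(struct page *page, int numpages)
        {
                if (debug_pagealloc_enabled_static())
                        __kernel_map_pages(page, numpages, 1);
        }

        static inline void
        debug_pagealloc_unmap_pages(struct page *page, int numpages)
        {
                if (debug_pagealloc_enabled_static())
                        __kernel_map_pages(page, numpages, 0);
        }
        #else
        static inline void
        debug_pagealloc_map_pages(struct page *page, int numpages) {}
        static inline void
        debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
        #endif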
    • mm: forbid splitting special mappings · 871402e0
      Committed by Dmitry Safonov
      Don't allow splitting of vm_special_mapping's.  It affects vdso/vvar
      areas.  Uprobes have only one page in xol_area so they aren't affected.
      
      Those restrictions were enforced by checks in .mremap() callbacks.
      Restrict resizing with the generic .split() callback instead.
      
      Link: https://lkml.kernel.org/r/20201013013416.390574-7-dima@arista.com
      Signed-off-by: Dmitry Safonov <dima@arista.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
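      The heart of the change is a .split() callback that simply refuses,
      roughly (abridged from the mm/mmap.c change):

        static int special_mapping_split(struct vm_area_struct *vma,
                                         unsigned long addr)
        {
                /*
                 * Forbid splitting special mappings: the kernel has
                 * expectations about the number of pages in the mapping.
                 * Together with VM_DONTEXPAND the size of the vma stays
                 * constant over the special mapping's lifetime.
                 */
                return -EINVAL;
        }

        static const struct vm_operations_struct special_mapping_vmops = {
                /* ... existing callbacks ... */
                .split = special_mapping_split,
        };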
    • mm/gup_benchmark: rename to mm/gup_test · 9c84f229
      Committed by John Hubbard
      Patch series "selftests/vm: gup_test, hmm-tests, assorted improvements", v3.
      
      Summary: This series provides two main things, and a number of smaller
      supporting goodies.  The two main points are:
      
      1) Add a new sub-test to gup_test, which in turn is a renamed version
         of gup_benchmark.  This sub-test allows nicer testing of dump_pages(),
         at least on user-space pages.
      
         For quite a while, I was doing a quick hack to gup_test.c whenever I
         wanted to try out changes to dump_page().  Then Matthew Wilcox asked me
         what I meant when I said "I used my dump_page() unit test", and I
         realized that it might be nice to check in a polished up version of
         that.
      
         Details about how it works and how to use it are in the commit
         description for patch #6 ("selftests/vm: gup_test: introduce the
         dump_pages() sub-test").
      
      2) Fixes a limitation of hmm-tests: these tests are incredibly useful,
         but only if people actually build and run them.  And it turns out that
         libhugetlbfs is a little too effective at throwing a wrench in the
         works, there.  So I've added a little configuration check that removes
         just two of the 21 hmm-tests, if libhugetlbfs is not available.
      
         Further details in the commit description of patch #8
         ("selftests/vm: hmm-tests: remove the libhugetlbfs dependency").
      
      Other smaller things that this series does:
      
      a) Remove code duplication by creating gup_test.h.
      
      b) Clear up the sub-test organization, and their invocation within
         run_vmtests.sh.
      
      c) Other minor assorted improvements.
      
      [1] v2 is here:
      https://lore.kernel.org/linux-doc/20200929212747.251804-1-jhubbard@nvidia.com/
      
      [2] https://lore.kernel.org/r/CAHk-=wgh-TMPHLY3jueHX7Y2fWh3D+nMBqVS__AZm6-oorquWA@mail.gmail.com
      
      This patch (of 9):
      
      Rename nearly every "gup_benchmark" reference and file name to "gup_test".
      The one exception is for the actual gup benchmark test itself.
      
      The current code already does a *little* bit more than benchmarking, and
      definitely covers more than get_user_pages_fast().  More importantly,
      however, subsequent patches are about to add some functionality that is
      non-benchmark related.
      
      Closely related changes:
      
      * Kconfig: in addition to renaming the options from GUP_BENCHMARK to
        GUP_TEST, update the help text to reflect that it's no longer a
        benchmark-only test.
      
      Link: https://lkml.kernel.org/r/20201026064021.3545418-1-jhubbard@nvidia.com
      Link: https://lkml.kernel.org/r/20201026064021.3545418-2-jhubbard@nvidia.com
      Signed-off-by: John Hubbard <jhubbard@nvidia.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Jérôme Glisse <jglisse@redhat.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 15 Dec, 2020 (1 commit)
  11. 11 Dec, 2020 (2 commits)
    • s390/mm: add support to allocate gigantic hugepages using CMA · 343dbdb7
      Committed by Gerald Schaefer
      Commit cf11e85f ("mm: hugetlb: optionally allocate gigantic hugepages
      using cma") added support for allocating gigantic hugepages using CMA,
      by specifying the hugetlb_cma= kernel parameter, which will disable any
      boot-time allocation of gigantic hugepages.
      
      This patch enables that option for s390 as well.
      Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
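      The enablement itself is small. A sketch of the arch hook, assuming
      s390 follows the pattern other architectures use (the hook name is
      illustrative; gigantic pages are PUD-sized, i.e. 2 GB, on s390):

        #include <linux/hugetlb.h>

        void __init s390_reserve_hugetlb_cma(void)  /* illustrative name */
        {
                /* Honors hugetlb_cma= from the command line;
                 * a no-op if the parameter was not given. */
                hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
        }

      Booting with e.g. hugetlb_cma=4G then allows gigantic hugepages to be
      allocated from that CMA area at runtime instead of only at boot.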
    • s390/crypto: add arch_get_random_long() support · ff98cc98
      Committed by Harald Freudenberger
      The random longs to be pulled by arch_get_random_long() are prepared
      in a 4K buffer which is filled from the NIST 800-90 compliant s390
      drbg. By default the random long buffer is refilled 256 times before
      the drbg itself needs a reseed. The reseed of the drbg is done with
      32 bytes fetched from the high quality (but slow) trng, which is
      assumed to deliver 100% entropy. So the 32 * 8 = 256 bits of entropy
      are spread over 256 * 4KB = 1MB, serving 131072
      arch_get_random_long() invocations before a reseed.
      
      How often the 4K random long buffer is refilled with the drbg before
      the drbg is reseeded can be adjusted. There is a module parameter
      's390_arch_rnd_long_drbg_reseed' accessible via
        /sys/module/arch_random/parameters/rndlong_drbg_reseed
      or as kernel command line parameter
        arch_random.rndlong_drbg_reseed=<value>
      This parameter tells how often the drbg fills the 4K buffer before it
      is reseeded with fresh entropy from the trng.
      A value of 16 results in reseeding the drbg every 16 * 4 KB = 64 KB
      with 32 bytes of fresh entropy pulled from the trng, i.e. 256 bits of
      entropy per 64 KB.
      A value of 256 results in 1MB of drbg output before a reseed is done,
      spreading the 256 bits of entropy over 1MB.
      Setting this parameter to 0 forces a reseed every time the 4K buffer
      is depleted, raising the entropy to 256 bits per 4K, or 0.5 bits of
      entropy per arch_get_random_long().
      Setting this parameter to a negative value disables the whole
      mechanism: arch_get_random_long() returns false, indicating that the
      feature is disabled entirely.
      
      arch_get_random_long() is used by random.c, among others, to provide
      an initial hash value to be mixed with the entropy pool on every
      random data pull. There is about one call to arch_get_random_long()
      per 64 bytes read from /dev/urandom, so these additional random long
      values affect /dev/urandom performance with a measurable but low
      penalty.
      Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
      Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
      Reviewed-by: Juergen Christ <jchrist@linux.ibm.com>
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
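      The buffering described above, condensed into a sketch (names such as
      drbg_generate() and drbg_reseed_from_trng() are illustrative
      stand-ins for the s390 crypto calls; locking is omitted):

        #define RND_BUF_SIZE 4096

        static u8 rnd_buf[RND_BUF_SIZE];
        static size_t rnd_pos = RND_BUF_SIZE;   /* force initial fill */
        static int refills;
        static int reseed_limit = 256;          /* rndlong_drbg_reseed */

        bool arch_get_random_long(unsigned long *v)
        {
                if (reseed_limit < 0)
                        return false;   /* feature disabled */

                if (rnd_pos + sizeof(*v) > RND_BUF_SIZE) {
                        if (refills++ >= reseed_limit) {
                                /* 32 bytes = 256 bits of trng entropy */
                                drbg_reseed_from_trng(32);
                                refills = 0;
                        }
                        drbg_generate(rnd_buf, RND_BUF_SIZE);
                        rnd_pos = 0;
                }
                memcpy(v, rnd_buf + rnd_pos, sizeof(*v));
                rnd_pos += sizeof(*v);
                return true;
        }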
  12. 10 Dec, 2020 (6 commits)