1. 13 Jul 2019 (1 commit)
  2. 25 Jun 2019 (3 commits)
  3. 23 Jun 2019 (2 commits)
  4. 21 Jun 2019 (2 commits)
  5. 15 Jun 2019 (1 commit)
    • docs: kdump: convert docs to ReST and rename to *.rst · d67297ad
      Authored by Mauro Carvalho Chehab
      Convert the kdump documentation to ReST and add it to the
      user-facing manual, as the documents are mainly aimed at
      sysadmins who will be enabling kdump.
      
      Note: the vmcoreinfo.rst has one very long title on one of its
      sub-sections:
      
      	PG_lru|PG_private|PG_swapcache|PG_swapbacked|PG_slab|PG_hwpoision|PG_head_mask|PAGE_BUDDY_MAPCOUNT_VALUE(~PG_buddy)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_offline)
      
      I opted to break this one into two entries with the same content,
      in order to make it easier to display after being rendered to HTML and PDF.
      
      The conversion is actually:
        - add blank lines and indentation in order to identify paragraphs;
        - fix table markups;
        - add some list markups;
        - mark literal blocks;
        - adjust title markups.
      
      At its new index.rst, let's add an :orphan: while it is not linked from
      the main index.rst file, in order to avoid build warnings.
      Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
      Signed-off-by: Jonathan Corbet <corbet@lwn.net>
      d67297ad
  6. 14 Jun 2019 (1 commit)
    • arm64: remove redundant 'default n' from Kconfig · 1a2a66db
      Authored by Bartlomiej Zolnierkiewicz
      'default n' is the default value for any bool or tristate Kconfig
      setting, so there is no need to write it explicitly.
      
      Also, since commit f467c564 ("kconfig: only write '# CONFIG_FOO
      is not set' for visible symbols"), the Kconfig behavior is the same
      whether or not 'default n' is present:
      
          ...
          One side effect of (and the main motivation for) this change is making
          the following two definitions behave exactly the same:
      
              config FOO
                      bool
      
              config FOO
                      bool
                      default n
      
          With this change, neither of these will generate a
          '# CONFIG_FOO is not set' line (assuming FOO isn't selected/implied).
          That might make it clearer to people that a bare 'default n' is
          redundant.
          ...
      Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1a2a66db
  7. 04 Jun 2019 (1 commit)
    • arm64: mm: make CONFIG_ZONE_DMA32 configurable · 0c1f14ed
      Authored by Miles Chen
      This change makes CONFIG_ZONE_DMA32 default to y and allows users
      to override it only when CONFIG_EXPERT=y.
      
      For the SoCs that do not need CONFIG_ZONE_DMA32, this is the
      first step towards managing all available memory in a single
      zone (the normal zone), reducing the overhead of multiple zones.
      
      The change also fixes a build error when CONFIG_NUMA=y and
      CONFIG_ZONE_DMA32=n.
      
      arch/arm64/mm/init.c:195:17: error: use of undeclared identifier 'ZONE_DMA32'
                      max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());
      
      Changes since v1:
      1. only expose CONFIG_ZONE_DMA32 when CONFIG_EXPERT=y
      2. remove redundant IS_ENABLED(CONFIG_ZONE_DMA32)
      
      Cc: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Miles Chen <miles.chen@mediatek.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      0c1f14ed
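      A hedged sketch of the kind of guard this implies in arch/arm64/mm/init.c,
      reconstructed from the error context above rather than the exact upstream
      diff (max_zone_dma_phys() and free_area_init_nodes() are assumed from that
      era's code):
      
          /* Sketch only: reference ZONE_DMA32 solely under its config guard so a
           * CONFIG_ZONE_DMA32=n build no longer sees an undeclared identifier. */
          static void __init zone_sizes_init(unsigned long min, unsigned long max)
          {
                  unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };
          
          #ifdef CONFIG_ZONE_DMA32
                  max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());
          #endif
                  max_zone_pfns[ZONE_NORMAL] = max;
          
                  free_area_init_nodes(max_zone_pfns);
          }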
  8. 24 May 2019 (1 commit)
  9. 23 May 2019 (2 commits)
    • arm64: Handle erratum 1418040 as a superset of erratum 1188873 · a5325089
      Authored by Marc Zyngier
      We already mitigate erratum 1188873 affecting Cortex-A76 and
      Neoverse-N1 r0p0 to r2p0. It turns out that revisions r0p0 to
      r3p1 of the same cores are affected by erratum 1418040, which
      has the same workaround as 1188873.
      
      Let's expand the range of affected revisions to match 1418040,
      and repaint all occurrences of 1188873 as 1418040. Whilst we're
      there, do a bit of reformatting in silicon-errata.txt and drop
      a now-unnecessary dependency on ARM_ARCH_TIMER_OOL_WORKAROUND.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      a5325089
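      A minimal sketch of what the widened range looks like in cpu_errata.c terms,
      assuming the kernel's existing struct midr_range / MIDR_RANGE() helpers; the
      revision numbers come from the text above, not the verbatim upstream hunk:
      
          /* Both cores are affected from r0p0 up to r3p1 by erratum 1418040. */
          static const struct midr_range erratum_1418040_list[] = {
                  MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 1),
                  MIDR_RANGE(MIDR_NEOVERSE_N1, 0, 0, 3, 1),
                  {},
          };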
    • arm64: errata: Add workaround for Cortex-A76 erratum #1463225 · 969f5ea6
      Authored by Will Deacon
      Revisions of the Cortex-A76 CPU prior to r4p0 are affected by an erratum
      that can prevent interrupts from being taken when single-stepping.
      
      This patch implements a software workaround to prevent userspace from
      effectively being able to disable interrupts.
      
      Cc: <stable@vger.kernel.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      969f5ea6
  10. 21 May 2019 (1 commit)
  11. 15 May 2019 (4 commits)
  12. 14 May 2019 (1 commit)
  13. 06 May 2019 (1 commit)
  14. 01 May 2019 (1 commit)
  15. 30 Apr 2019 (2 commits)
  16. 29 Apr 2019 (1 commit)
  17. 24 Apr 2019 (4 commits)
    • KVM: arm/arm64: Context-switch ptrauth registers · 384b40ca
      Authored by Mark Rutland
      When pointer authentication is supported, a guest may wish to use it.
      This patch adds the necessary KVM infrastructure for this to work, with
      a semi-lazy context switch of the pointer auth state.
      
      The pointer authentication feature is only enabled when VHE is built
      into the kernel and present in the CPU implementation, so only VHE code
      paths are modified.
      
      When we schedule a vcpu, we disable guest usage of pointer
      authentication instructions and accesses to the keys. While these are
      disabled, we avoid context-switching the keys. When we trap the guest
      trying to use pointer authentication functionality, we change to eagerly
      context-switching the keys, and enable the feature. The next time the
      vcpu is scheduled out/in, we start again. However, the host key save is
      optimized and implemented inside the ptrauth instruction/register access
      trap.
      
      Pointer authentication consists of address authentication and generic
      authentication, and CPUs in a system might have varied support for
      either. Where support for either feature is not uniform, it is hidden
      from guests via ID register emulation, as a result of the cpufeature
      framework in the host.
      
      Unfortunately, address authentication and generic authentication cannot
      be trapped separately, as the architecture provides a single EL2 trap
      covering both. If we wish to expose one without the other, we cannot
      prevent a (badly-written) guest from intermittently using a feature
      which is not uniformly supported (when scheduled on a physical CPU which
      supports the relevant feature). Hence, this patch expects both types of
      authentication to be present in a CPU.
      
      This switch of key is done from guest enter/exit assembly as preparation
      for the upcoming in-kernel pointer authentication support. Hence, these
      key-switching routines are not implemented in C code, as they may cause
      pointer authentication key signing errors in some situations.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks,
      save host key in ptrauth exception trap]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: kvmarm@lists.cs.columbia.edu
      [maz: various fixups]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      384b40ca
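      A hedged C sketch of the semi-lazy scheme described above; the function name
      is illustrative, with only vcpu_has_ptrauth() and the HCR_EL2.API/APK
      trap-disable bits taken from the patch description and the architecture:
      
          /* Illustrative only, not the upstream handler: on the first trapped
           * ptrauth access, stop trapping so the keys are eagerly context-switched
           * from then on; guests without the feature get an UNDEF instead. */
          static void handle_ptrauth_trap(struct kvm_vcpu *vcpu)
          {
                  if (!vcpu_has_ptrauth(vcpu)) {
                          kvm_inject_undefined(vcpu);
                          return;
                  }
          
                  /* HCR_EL2.API/APK set means ptrauth is no longer trapped; the
                   * actual key save/restore lives in the enter/exit assembly. */
                  vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
          }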
    • arm64: Expose SVE2 features for userspace · 06a916fe
      Authored by Dave Martin
      This patch provides support for reporting the presence of SVE2 and
      its optional features to userspace.
      
      This will also enable visibility of SVE2 for guests, when KVM
      support for SVE-enabled guests is available.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      06a916fe
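      The reporting is done via hwcaps, so userspace can probe it with getauxval().
      A small sketch, with a fallback define in case the toolchain's uapi headers
      predate this patch (the bit value is an assumption from the arm64 headers):
      
          #include <stdio.h>
          #include <sys/auxv.h>
          
          #ifndef HWCAP2_SVE2
          #define HWCAP2_SVE2     (1 << 1)        /* assumed; see asm/hwcap.h */
          #endif
          
          int main(void)
          {
                  unsigned long hwcap2 = getauxval(AT_HWCAP2);
          
                  printf("SVE2 %ssupported\n", (hwcap2 & HWCAP2_SVE2) ? "" : "not ");
                  return 0;
          }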
    • arm64: Kconfig: Make CONFIG_COMPAT a menuconfig entry · dd523791
      Authored by Will Deacon
      Make CONFIG_COMPAT a menuconfig entry so that we can place
      CONFIG_KUSER_HELPERS and CONFIG_ARMV8_DEPRECATED underneath it.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      dd523791
    • arm64: compat: Add KUSER_HELPERS config option · af1b3cf2
      Authored by Vincenzo Frascino
      When kuser helpers are enabled, the kernel maps the related code at
      a fixed address (0xffff0000). Making the option to disable them
      configurable means that the kernel can remove this mapping, so that any
      access to this memory area results in a segmentation fault.
      
      Add a KUSER_HELPERS config option that, when turned off, disables
      the mapping.
      
      This option can be turned off if and only if the applications are
      designed specifically for the platform and they do not make use of the
      kuser helpers code.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      [will: Use IS_ENABLED() instead of #ifdef]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      af1b3cf2
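      The user-visible effect can be illustrated from 32-bit (compat) userspace by
      reading the kuser helper version word at its documented fixed address
      (0xffff0ffc, per Documentation/arm/kernel_user_helpers.txt); with
      CONFIG_KUSER_HELPERS=n the same access raises SIGSEGV instead:
      
          #include <stdio.h>
          
          /* Documented kuser ABI: the helper version word lives at 0xffff0ffc. */
          #define KUSER_HELPER_VERSION    (*(const int *)0xffff0ffc)
          
          int main(void)
          {
                  /* Works only while the vector page is mapped, i.e. KUSER_HELPERS=y. */
                  printf("kuser helper version: %d\n", KUSER_HELPER_VERSION);
                  return 0;
          }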
  18. 13 Apr 2019 (1 commit)
  19. 09 Apr 2019 (1 commit)
    • arm64: mm: enable per pmd page table lock · 54c8d911
      Authored by Yu Zhao
      Switch from a per-mm_struct to a per-pmd page table lock by enabling
      ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity for
      large systems.
      
      I'm not sure if there is contention on mm->page_table_lock. Given that
      the option comes at no cost (apart from initializing more spinlocks),
      why not enable it now?
      
      We only do so when the pmd is not folded, so we don't mistakenly call
      pgtable_pmd_page_ctor() on a pud or p4d in pgd_pgtable_alloc().
      Signed-off-by: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      54c8d911
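      What the finer granularity means for users of the generic API, as a sketch
      (assuming the standard pmd_lock() helper: with ARCH_ENABLE_SPLIT_PMD_PTLOCK
      it takes a per-pmd-page spinlock rather than mm->page_table_lock):
      
          static void walk_one_pmd(struct mm_struct *mm, pmd_t *pmd)
          {
                  /* Per-pmd lock when split PMD ptlocks are enabled; otherwise
                   * this falls back to the single mm->page_table_lock. */
                  spinlock_t *ptl = pmd_lock(mm, pmd);
          
                  /* ... inspect or modify the PMD entry ... */
          
                  spin_unlock(ptl);
          }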
  20. 03 Apr 2019 (2 commits)
    • locking/rwsem: Remove rwsem-spinlock.c & use rwsem-xadd.c for all archs · 390a0c62
      Authored by Waiman Long
      Currently, we have two different implementations of rwsem:
      
       1) CONFIG_RWSEM_GENERIC_SPINLOCK (rwsem-spinlock.c)
       2) CONFIG_RWSEM_XCHGADD_ALGORITHM (rwsem-xadd.c)
      
      As we are going to use a single generic implementation for rwsem-xadd.c
      and no architecture-specific code will be needed, there is no point
      in keeping two different implementations of rwsem. In most cases, the
      performance of rwsem-spinlock.c will be worse. It also doesn't get all
      the performance tuning and optimizations that had been implemented in
      rwsem-xadd.c over the years.
      
      For simplification, we are going to remove rwsem-spinlock.c and make all
      architectures use a single implementation of rwsem: rwsem-xadd.c.
      
      All references to RWSEM_GENERIC_SPINLOCK and RWSEM_XCHGADD_ALGORITHM
      in the code are removed.
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-c6x-dev@linux-c6x.org
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: linux-riscv@lists.infradead.org
      Cc: linux-um@lists.infradead.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: nios2-dev@lists.rocketboards.org
      Cc: openrisc@lists.librecores.org
      Cc: uclinux-h8-devel@lists.sourceforge.jp
      Link: https://lkml.kernel.org/r/20190322143008.21313-3-longman@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      390a0c62
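      The rwsem API itself is unchanged by this consolidation; only the backing
      implementation becomes the xadd-based one everywhere. For reference, a minimal
      usage sketch in kernel context (the names here are illustrative):
      
          #include <linux/rwsem.h>
          
          static DECLARE_RWSEM(example_sem);
          
          static void reader_path(void)
          {
                  down_read(&example_sem);        /* shared: many readers may hold it */
                  /* ... read shared state ... */
                  up_read(&example_sem);
          }
          
          static void writer_path(void)
          {
                  down_write(&example_sem);       /* exclusive */
                  /* ... modify shared state ... */
                  up_write(&example_sem);
          }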
    • asm-generic/tlb, arch: Invert CONFIG_HAVE_RCU_TABLE_INVALIDATE · 96bc9567
      Authored by Peter Zijlstra
      Make issuing a TLB invalidate for page-table pages the normal case.
      
      The reason is twofold:
      
       - too many invalidates is safer than too few,
       - most architectures use the linux page-tables natively
         and would thus require this.
      
      Make it an opt-out, instead of an opt-in.
      
      No change in behavior intended.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: NIngo Molnar <mingo@kernel.org>
      96bc9567
  21. 21 Mar 2019 (1 commit)
  22. 06 Mar 2019 (1 commit)
  23. 01 Mar 2019 (1 commit)
    • arm64: Add workaround for Fujitsu A64FX erratum 010001 · 3e32131a
      Authored by Zhang Lei
      On the Fujitsu A64FX cores (versions 1.0 and 1.1), a memory access may
      cause an undefined fault (Data abort, DFSC=0b111111). This fault occurs
      under a specific hardware condition when a load/store instruction performs
      an address translation. Any load/store instruction, except non-fault
      accesses (in both Armv8 and SVE), might cause this undefined fault.
      
      The TCR_ELx.NFD1 bit is used by the kernel when CONFIG_RANDOMIZE_BASE
      is enabled to mitigate timing attacks against KASLR where the kernel
      address space could be probed using the FFR and suppressed fault on
      SVE loads.
      
      Since this erratum causes spurious exceptions, which may corrupt
      the exception registers, we clear the TCR_ELx.NFDx bits when
      booting on an affected CPU.
      Signed-off-by: Zhang Lei <zhang.lei@jp.fujitsu.com>
      [Generated MIDR value/mask for __cpu_setup(), removed spurious-fault handler
       and always disabled the NFDx bits on affected CPUs]
      Signed-off-by: James Morse <james.morse@arm.com>
      Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3e32131a
  24. 20 Feb 2019 (1 commit)
  25. 14 Feb 2019 (3 commits)