1. 29 Jun 2018 (2 commits)
  2. 28 Jun 2018 (8 commits)
  3. 27 Jun 2018 (2 commits)
  4. 24 Jun 2018 (1 commit)
  5. 23 Jun 2018 (4 commits)
  6. 21 Jun 2018 (4 commits)
    • KVM: arm64: Avoid mistaken attempts to save SVE state for vcpus · 2955bcc8
      Authored by Dave Martin
      Commit e6b673b7 ("KVM: arm64: Optimise FPSIMD handling to reduce
      guest/host thrashing") uses fpsimd_save() to save the FPSIMD state
      for a vcpu when scheduling the vcpu out.  However, current's
      TIF_SVE value is currently restored before fpsimd_save() is
      called, which means that fpsimd_save() may erroneously attempt to
      save SVE state from the vcpu.  This allows current's vector state
      to be polluted with guest data.  current->thread.sve_state may be
      unallocated or not large enough, so this can also trigger a NULL
      dereference or buffer overrun.
      
      Instead of this, TIF_SVE should be configured properly for the
      guest when calling fpsimd_save() with the vcpu context loaded.
      
      This patch ensures this by delaying restoration of current's
      TIF_SVE until after the call to fpsimd_save().
      
      Fixes: e6b673b7 ("KVM: arm64: Optimise FPSIMD handling to reduce guest/host thrashing")
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
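      To make the fixed ordering concrete, here is a minimal sketch of the
      save path after this patch (simplified pseudo-kernel C, not the
      verbatim patch; host_had_sve is a hypothetical local standing in for
      the saved host flag):

              /* Save the loaded (guest) context first... */
              fpsimd_save();

              /* ...and only then restore current's TIF_SVE, so that
               * fpsimd_save() cannot be tricked into saving SVE state
               * from a vcpu context that has none. */
              update_thread_flag(TIF_SVE, host_had_sve);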
    • KVM: arm64/sve: Fix SVE trap restoration for non-current tasks · b3eb56b6
      Authored by Dave Martin
      Commit e6b673b7 ("KVM: arm64: Optimise FPSIMD handling to reduce
      guest/host thrashing") attempts to restore the configuration of
      userspace SVE trapping via a call to fpsimd_bind_task_to_cpu(), but
      the logic for determining when to do this is not correct.
      
      The patch makes the erroneous assumption that the only task that
      may try to enter userspace with the currently loaded FPSIMD/SVE
      register content is current.  This may not be the case, however: if
      some other user task T is scheduled on the CPU during the execution
      of the KVM run loop, and the vcpu does not try to use the registers
      in the meantime, then T's state may be left there intact.  If T
      happens to be the next task to enter userspace on this CPU then the
      hooks for reloading the register state and configuring traps will
      be skipped.
      
      (Also, current never has SVE state at this point anyway and should
      always have the trap enabled, as a side-effect of the ioctl()
      syscall needed to reach the KVM run loop in the first place.)
      
      This patch instead restores the state of the EL0 trap from the
      state observed at the most recent vcpu_load(), ensuring that the
      trap is set correctly for the loaded context (if any).
      
      Fixes: e6b673b7 ("KVM: arm64: Optimise FPSIMD handling to reduce guest/host thrashing")
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
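      In outline, the repaired logic records the EL0 SVE trap state at
      vcpu_load() time and restores it at vcpu_put() (a simplified sketch;
      host_sve_enabled is an illustrative flag, not the exact field name
      used by the patch):

              /* At vcpu_load(): remember whether EL0 SVE access was enabled. */
              host_sve_enabled = read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN;

              /* At vcpu_put(): restore the trap for the loaded context,
               * rather than guessing from current's state. */
              if (host_sve_enabled)
                      sve_user_enable();
              else
                      sve_user_disable();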
    • KVM: arm64: Don't mask softirq with IRQs disabled in vcpu_put() · b045e4d0
      Authored by Dave Martin
      Commit e6b673b7 ("KVM: arm64: Optimise FPSIMD handling to reduce
      guest/host thrashing") introduces a specific helper
      kvm_arch_vcpu_put_fp() for saving the vcpu FPSIMD state during
      vcpu_put().
      
      This function uses local_bh_disable()/_enable() to protect the
      FPSIMD context manipulation from interruption by softirqs.
      
      This approach is not correct, because vcpu_put() can be invoked
      either from the KVM host vcpu thread (when exiting the vcpu run
      loop), or via a preempt notifier.  In the former case, only
      preemption is disabled.  In the latter case, the function is called
      from inside __schedule(), which means that IRQs are disabled.
      
      Use of local_bh_disable()/_enable() with IRQs disabled is considered
      an error, resulting in lockdep splats while running VMs if lockdep
      is enabled.
      
      This patch disables IRQs instead of attempting to disable softirqs,
      avoiding the problem of calling local_bh_enable() with IRQs
      disabled in the __schedule() path.  This creates an additional
      interrupt blackout during vcpu run loop exit, but this is the rare
      case and the blackout latency is still less than that of
      __schedule().
      
      Fixes: e6b673b7 ("KVM: arm64: Optimise FPSIMD handling to reduce guest/host thrashing")
      Reported-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
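      In outline, the fix replaces the softirq mask with an IRQ mask,
      which is legal in both calling contexts (a sketch of the pattern,
      not the verbatim patch):

              unsigned long flags;

              local_irq_save(flags);  /* safe from the ioctl path and from __schedule() */
              /* ... save the vcpu's FPSIMD/SVE state ... */
              local_irq_restore(flags);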
    • arm64: Introduce sysreg_clear_set() · 6ebdf4db
      Authored by Mark Rutland
      Currently we have a couple of helpers to manipulate bits in particular
      sysregs:
      
       * config_sctlr_el1(u32 clear, u32 set)
      
       * change_cpacr(u64 val, u64 mask)
      
      The parameters of these differ in naming convention, order, and size,
      which is unfortunate. They also differ slightly in behaviour, as
      change_cpacr() skips the sysreg write if the bits are unchanged, which
      is a useful optimization when sysreg writes are expensive.
      
      Before we gain yet another sysreg manipulation function, let's
      unify these with a common helper, providing a consistent order for
      clear/set operands, and the write skipping behaviour from
      change_cpacr(). Code will be migrated to the new helper in subsequent
      patches.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
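      The helper's semantics can be sketched as a plain C model (an
      illustration of the behaviour described above, not the kernel's
      actual macro; 'reg' stands in for the target system register):

              static inline void sysreg_clear_set_model(u64 *reg, u64 clear, u64 set)
              {
                      u64 old = *reg;                  /* models read_sysreg() */
                      u64 new = (old & ~clear) | set;  /* consistent clear/set order */

                      if (new != old)      /* write skipping, as in change_cpacr() */
                              *reg = new;  /* models write_sysreg() */
              }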
  7. 19 Jun 2018 (7 commits)
  8. 16 Jun 2018 (1 commit)
  9. 15 Jun 2018 (3 commits)
  10. 14 Jun 2018 (1 commit)
    • Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables · 050e9baa
      Authored by Linus Torvalds
      The changes to automatically test for working stack protector compiler
      support in the Kconfig files removed the special STACKPROTECTOR_AUTO
      option that picked the strongest stack protector that the compiler
      supported.
      
      That was all a nice cleanup - it makes no sense to have the AUTO case
      now that the Kconfig phase can just determine the compiler support
      directly.
      
      HOWEVER.
      
      It also meant that doing "make oldconfig" would now _disable_ the strong
      stackprotector if you had AUTO enabled, because in a legacy config file,
      the sane stack protector configuration would look like
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_NONE is not set
        # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_STACKPROTECTOR_AUTO=y
      
      and when you ran this through "make oldconfig" with the Kbuild changes,
      it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
      been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
      CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
      used to be disabled (because it was really enabled by AUTO), and would
      disable it in the new config, resulting in:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      That's dangerously subtle - people could suddenly find themselves with
      the weaker stack protector setup without even realizing.
      
      The solution here is to rename not just the old REGULAR stack
      protector option, but also the strong one.  This does that by just
      removing the CC_ prefix entirely for the user choices, because it
      really is not about the compiler support (the compiler support now
      instead automatically impacts _visibility_ of the options to users).
      
      This results in "make oldconfig" actually asking the user for their
      choice, so that we don't have any silent subtle security model changes.
      The end result would generally look like this:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_STACKPROTECTOR=y
        CONFIG_STACKPROTECTOR_STRONG=y
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      where the "CC_" versions really are about internal compiler
      infrastructure, not the user selections.
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 13 Jun 2018 (1 commit)
    • treewide: kzalloc() -> kcalloc() · 6396bb22
      Authored by Kees Cook
      The kzalloc() function has a 2-factor argument form, kcalloc(). This
      patch replaces cases of:

              kzalloc(a * b, gfp)

      with:

              kcalloc(a, b, gfp)
      
      as well as handling cases of:
      
              kzalloc(a * b * c, gfp)
      
      with:
      
              kzalloc(array3_size(a, b, c), gfp)
      
      as it's slightly less ugly than:
      
              kzalloc_array(array_size(a, b), c, gfp)
      
      This does, however, attempt to ignore constant size factors like:
      
              kzalloc(4 * 1024, gfp)
      
      though any constants defined via macros get caught up in the conversion.
      
      Any factors with a sizeof() of "unsigned char", "char", and "u8" were
      dropped, since they're redundant.
      
      The Coccinelle script used for this was:
      
      // Fix redundant parens around sizeof().
      @@
      type TYPE;
      expression THING, E;
      @@
      
      (
        kzalloc(
      -	(sizeof(TYPE)) * E
      +	sizeof(TYPE) * E
        , ...)
      |
        kzalloc(
      -	(sizeof(THING)) * E
      +	sizeof(THING) * E
        , ...)
      )
      
      // Drop single-byte sizes and redundant parens.
      @@
      expression COUNT;
      typedef u8;
      typedef __u8;
      @@
      
      (
        kzalloc(
      -	sizeof(u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * COUNT
      +	COUNT
        , ...)
      )
      
      // 2-factor product with sizeof(type/expression) and identifier or constant.
      @@
      type TYPE;
      expression THING;
      identifier COUNT_ID;
      constant COUNT_CONST;
      @@
      
      (
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_ID)
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_ID
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_CONST
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_ID)
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_ID
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_CONST
      +	COUNT_CONST, sizeof(THING)
        , ...)
      )
      
      // 2-factor product, only identifiers.
      @@
      identifier SIZE, COUNT;
      @@
      
      - kzalloc
      + kcalloc
        (
      -	SIZE * COUNT
      +	COUNT, SIZE
        , ...)
      
      // 3-factor product with 1 sizeof(type) or sizeof(expression), with
      // redundant parens removed.
      @@
      expression THING;
      identifier STRIDE, COUNT;
      type TYPE;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      )
      
      // 3-factor product with 2 sizeof(variable), with redundant parens removed.
      @@
      expression THING1, THING2;
      identifier COUNT;
      type TYPE1, TYPE2;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(TYPE2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      )
      
      // 3-factor product, only identifiers, with redundant parens removed.
      @@
      identifier STRIDE, SIZE, COUNT;
      @@
      
      (
        kzalloc(
      -	(COUNT) * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      )
      
      // Any remaining multi-factor products, first at least 3-factor products,
      // when they're not all constants...
      @@
      expression E1, E2, E3;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(
      -	(E1) * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * (E3)
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	E1 * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      )
      
      // And then all remaining 2 factors products when they're not all constants,
      // keeping sizeof() as the second factor argument.
      @@
      expression THING, E1, E2;
      type TYPE;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(sizeof(THING) * C2, ...)
      |
        kzalloc(sizeof(TYPE) * C2, ...)
      |
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(C1 * C2, ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (E2)
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * E2
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (E2)
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * E2
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * E2
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * (E2)
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	E1 * E2
      +	E1, E2
        , ...)
      )
      Signed-off-by: Kees Cook <keescook@chromium.org>
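      As a concrete illustration of why the 2-factor form matters, a
      hypothetical before/after snippet (nents and entries are invented
      names for the example):

              /* Before: if nents is attacker-influenced, the multiplication
               * can overflow and silently allocate a too-small buffer. */
              entries = kzalloc(nents * sizeof(*entries), GFP_KERNEL);

              /* After: kcalloc() checks the multiplication and returns
               * NULL on overflow instead of a short allocation. */
              entries = kcalloc(nents, sizeof(*entries), GFP_KERNEL);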
  12. 08 Jun 2018 (4 commits)
    • arm64: Fix syscall restarting around signal suppressed by tracer · 0fe42512
      Authored by Dave Martin
      Commit 17c28958 ("arm64: Abstract syscallno manipulation") abstracts
      out the pt_regs.syscallno value for a syscall cancelled by a tracer
      as NO_SYSCALL, and provides helpers to set and check for this
      condition.  However, the way this was implemented has the
      unintended side-effect of disabling part of the syscall restart
      logic.
      
      This comes about because the second in_syscall() check in
      do_signal() re-evaluates the "in a syscall" condition based on the
      updated pt_regs instead of the original pt_regs.  forget_syscall()
      is explicitly called prior to the second check in order to prevent
      restart logic in the ret_to_user path being spuriously triggered,
      which means that the second in_syscall() check always yields false.
      
      This triggers a failure in
      tools/testing/selftests/seccomp/seccomp_bpf.c, when using ptrace to
      suppress a signal that interrupts a nanosleep() syscall.
      
      Misbehaviour of this type is only expected in the case where a
      tracer suppresses a signal and the target process is either being
      single-stepped or the interrupted syscall attempts to restart via
      -ERESTART_RESTARTBLOCK.
      
      This patch restores the old behaviour by performing the
      in_syscall() check only once at the start of the function.
      
      Fixes: 17c28958 ("arm64: Abstract syscallno manipulation")
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reported-by: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org> # 4.14.x-
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
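      A minimal sketch of the restored structure (simplified from the
      description above, not the verbatim patch):

              static void do_signal(struct pt_regs *regs)
              {
                      /* Evaluate "in a syscall" once, before forget_syscall()
                       * can clobber regs->syscallno. */
                      bool syscall = in_syscall(regs);

                      /* ... signal delivery and the restart decision then
                       * test 'syscall' instead of re-checking in_syscall(regs) ... */
              }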
    • arm64: move GCC version check for ARCH_SUPPORTS_INT128 to Kconfig · f3a53f7b
      Authored by Masahiro Yamada
      This becomes much neater in Kconfig.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
    • mm: introduce ARCH_HAS_PTE_SPECIAL · 3010a5ea
      Authored by Laurent Dufour
      Currently, PTE special support is turned on in per-architecture
      header files.  Most of the time, it is defined in
      arch/*/include/asm/pgtable.h, depending (or not) on some other
      per-architecture static definition.

      This patch introduces a new configuration variable to manage this
      directly in the Kconfig files.  It would later replace
      __HAVE_ARCH_PTE_SPECIAL.

      Here are notes for some architectures where the definition of
      __HAVE_ARCH_PTE_SPECIAL is not obvious:
      
      arm
       __HAVE_ARCH_PTE_SPECIAL which is currently defined in
      arch/arm/include/asm/pgtable-3level.h which is included by
      arch/arm/include/asm/pgtable.h when CONFIG_ARM_LPAE is set.
      So select ARCH_HAS_PTE_SPECIAL if ARM_LPAE.
      
      powerpc
      __HAVE_ARCH_PTE_SPECIAL is defined in 2 files:
       - arch/powerpc/include/asm/book3s/64/pgtable.h
       - arch/powerpc/include/asm/pte-common.h
      The first one is included if (PPC_BOOK3S & PPC64) while the second is
      included in all the other cases.
      So select ARCH_HAS_PTE_SPECIAL all the time.
      
      sparc:
      __HAVE_ARCH_PTE_SPECIAL is defined if defined(__sparc__) &&
      defined(__arch64__) which are defined through the compiler in
      sparc/Makefile if !SPARC32 which I assume to be if SPARC64.
      So select ARCH_HAS_PTE_SPECIAL if SPARC64.
      
      There is no functional change introduced by this patch.
      
      Link: http://lkml.kernel.org/r/1523433816-14460-2-git-send-email-ldufour@linux.vnet.ibm.com
      Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Suggested-by: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Jerome Glisse <jglisse@redhat.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Albert Ou <albert@sifive.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Christophe LEROY <christophe.leroy@c-s.fr>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
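      For illustration, generic code can then gate pte_special() handling
      on the new symbol (a hedged sketch of the intended usage pattern,
      not a quote from the patch):

              #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
                      if (pte_special(pte))
                              return NULL;  /* special mappings have no struct page */
              #endif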
    • arm64: topology: Avoid checking numa mask for scheduler MC selection · e156ab71
      Authored by Jeremy Linton
      The numa mask subset check can often lead to system hang or crash during
      CPU hotplug and system suspend operations if NUMA is disabled. This is
      mostly observed on HMP systems where the CPU compute capacities are
      different and end up in different scheduler domains. Since
      cpumask_of_node is returned instead of core_sibling, the scheduler is
      confused with incorrect cpumasks (e.g. one CPU in two different sched
      domains at the same time) on CPU hotplug.

      Let's disable the NUMA siblings checks for the time being, as
      NUMA-in-socket machines have LLCs that ensure the scheduler topology
      isn't "borken".

      The NUMA check exists to ensure that if an LLC within a socket crosses
      NUMA nodes/chiplets, the scheduler domains remain consistent. This code
      will likely have to be re-enabled in the near future once the NUMA mask
      story is sorted.  At the moment it's not necessary because the LLCs of
      NUMA-in-socket machines are contained within the NUMA domains.

      Further, as a defensive mechanism during hot-plug, let's ensure that
      the LLC siblings are also masked.
      Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
      Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
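      A sketch of the defensive masking described above (the field names
      follow the arm64 cpu_topology structure of that era, but treat this
      as an illustration rather than the exact patch):

              const struct cpumask *cpu_coregroup_mask(int cpu)
              {
                      const cpumask_t *core_mask = &cpu_topology[cpu].core_sibling;

                      /* Defensively prefer the LLC siblings only when they
                       * are a subset of the core siblings. */
                      if (cpumask_subset(&cpu_topology[cpu].llc_siblings, core_mask))
                              core_mask = &cpu_topology[cpu].llc_siblings;

                      return core_mask;
              }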
  13. 05 Jun 2018 (1 commit)
    • arm64: cpu_errata: include required headers · 94a5d879
      Authored by Arnd Bergmann
      Without including psci.h and arm-smccc.h, we now get a build failure in
      some configurations:
      
      arch/arm64/kernel/cpu_errata.c: In function 'arm64_update_smccc_conduit':
      arch/arm64/kernel/cpu_errata.c:278:10: error: 'psci_ops' undeclared (first use in this function); did you mean 'sysfs_ops'?
      
      arch/arm64/kernel/cpu_errata.c: In function 'arm64_set_ssbd_mitigation':
      arch/arm64/kernel/cpu_errata.c:311:3: error: implicit declaration of function 'arm_smccc_1_1_hvc' [-Werror=implicit-function-declaration]
         arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
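      The fix itself is the two missing includes at the top of
      arch/arm64/kernel/cpu_errata.c (shown for clarity; their exact
      placement among the existing includes may differ):

              #include <linux/arm-smccc.h>
              #include <linux/psci.h>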
  14. 02 Jun 2018 (1 commit)