1. July 20, 2018 (11 commits)
• x86/tsc: Redefine notsc to behave as tsc=unstable · fe9af81e
  Committed by Pavel Tatashin
Currently, the notsc kernel parameter disables the use of the TSC by
sched_clock(). However, this parameter does not prevent the kernel from
accessing the TSC in other places.
      
The only rationale for booting with notsc is to avoid timing discrepancies on
multi-socket systems where the TSCs are not properly synchronized, and thus
to exclude the TSC from being used for timekeeping. But that prevents using
the TSC as sched_clock() as well, which is not necessary: the core
sched_clock() implementation can handle non-synchronized TSC based sched
clocks just fine.
      
      However, there is another method to solve the above problem: booting with
      tsc=unstable parameter. This parameter allows sched_clock() to use TSC and
      just excludes it from timekeeping.
      
      So there is no real reason to keep notsc, but for compatibility reasons the
      parameter has to stay. Make it behave like 'tsc=unstable' instead.
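
A minimal sketch of what the compatibility handler could look like
(mark_tsc_unstable() and __setup() are real kernel interfaces; treat the
body as an illustration rather than the literal patch):

  /*
   * Sketch: keep "notsc" for compatibility, but make it an alias for
   * tsc=unstable. mark_tsc_unstable() excludes the TSC from timekeeping
   * while sched_clock() keeps using it.
   */
  static int __init notsc_setup(char *str)
  {
          mark_tsc_unstable("boot parameter notsc");
          return 1;
  }
  __setup("notsc", notsc_setup);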
      
      [ tglx: Massaged changelog ]
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Cc: pbonzini@redhat.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-12-pasha.tatashin@oracle.com
• x86/CPU: Call detect_nopl() only on the BSP · 9b3661cd
  Committed by Borislav Petkov
Make detect_nopl() use the setup_* variants, have it be called only on the
BSP, and drop the call in generic_identify(); X86_FEATURE_NOPL will be
replicated to the APs through the forced caps. This helps to keep the mess
at a manageable level.
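
A hedged sketch of the result (setup_force_cpu_cap()/setup_clear_cpu_cap()
are the kernel's forced-caps interfaces; the actual detection details are
elided):

  /*
   * Sketch: run NOPL detection once, on the BSP, via the setup_*
   * variants so the forced capability bits propagate to the APs.
   */
  static void __init detect_nopl(void)
  {
  #ifdef CONFIG_X86_32
          setup_clear_cpu_cap(X86_FEATURE_NOPL);
  #else
          setup_force_cpu_cap(X86_FEATURE_NOPL);
  #endif
  }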
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Cc: pbonzini@redhat.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-11-pasha.tatashin@oracle.com
• x86/jump_label: Initialize static branching early · 8990cac6
  Committed by Pavel Tatashin
Static branching is useful for runtime-patching branches that are used in
hot paths but are infrequently changed.
      
The x86 clock framework is one example: it uses static branches to set up
the best clock during boot and never changes it again.
      
It is desirable to enable the TSC-based sched clock early to allow
fine-grained boot time analysis early on. That requires the static branching
functionality to be available early as well.
      
Static branching requires patching NOP instructions; thus,
arch_init_ideal_nops() must be called prior to jump_label_init().
      
      Do all the necessary steps to call arch_init_ideal_nops() right after
      early_cpu_init(), which also allows to insert a call to jump_label_init()
      right after that. jump_label_init() will be called again from the generic
      init code, but the code is protected against reinitialization already.
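
The resulting boot ordering, sketched (the functions are real; the exact
placement inside setup_arch() is illustrative):

  /* Sketch of the early part of setup_arch(): */
  early_cpu_init();
  arch_init_ideal_nops();  /* pick the ideal NOPs for this CPU           */
  jump_label_init();       /* static branches are patchable from here on */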
      
      [ tglx: Massaged changelog ]
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Cc: pbonzini@redhat.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-10-pasha.tatashin@oracle.com
• x86/alternatives, jumplabel: Use text_poke_early() before mm_init() · 6fffacb3
  Committed by Pavel Tatashin
It is supposed to be safe to modify static branches after jump_label_init().
But because the static key modifying code eventually calls text_poke(), it
can end up accessing a struct page which has not been initialized yet.
      
      Here is how to quickly reproduce the problem. Insert code like this
      into init/main.c:
      
      | +static DEFINE_STATIC_KEY_FALSE(__test);
      | asmlinkage __visible void __init start_kernel(void)
      | {
      |        char *command_line;
      |@@ -587,6 +609,10 @@ asmlinkage __visible void __init start_kernel(void)
      |        vfs_caches_init_early();
      |        sort_main_extable();
      |        trap_init();
      |+       {
      |+       static_branch_enable(&__test);
      |+       WARN_ON(!static_branch_likely(&__test));
      |+       }
      |        mm_init();
      
The following warning shows up:
      WARNING: CPU: 0 PID: 0 at arch/x86/kernel/alternative.c:701 text_poke+0x20d/0x230
      RIP: 0010:text_poke+0x20d/0x230
      Call Trace:
       ? text_poke_bp+0x50/0xda
       ? arch_jump_label_transform+0x89/0xe0
       ? __jump_label_update+0x78/0xb0
       ? static_key_enable_cpuslocked+0x4d/0x80
       ? static_key_enable+0x11/0x20
       ? start_kernel+0x23e/0x4c8
       ? secondary_startup_64+0xa5/0xb0
      
      ---[ end trace abdc99c031b8a90a ]---
      
      If the code above is moved after mm_init(), no warning is shown, as struct
      pages are initialized during handover from memblock.
      
Use text_poke_early() for static branching until early boot IRQs are enabled,
and switch to text_poke() from there on. Also, ensure that text_poke() is
never invoked while uninitialized memory access may happen, by adding a
!after_bootmem assertion.
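
A sketch of the resulting dispatch; early_boot_irqs_disabled and
after_bootmem are real kernel flags, while the wrapper's shape and name here
are illustrative:

  extern bool early_boot_irqs_disabled;  /* cleared in start_kernel()  */
  extern bool after_bootmem;             /* set once mm_init() has run */

  /* Hypothetical helper showing the chosen patching strategy: */
  static void jump_label_poke(void *addr, const void *code, size_t len)
  {
          if (early_boot_irqs_disabled)
                  text_poke_early(addr, code, len);  /* no struct pages yet */
          else
                  text_poke(addr, code, len);  /* asserts !after_bootmem never holds */
  }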
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Cc: pbonzini@redhat.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-9-pasha.tatashin@oracle.com
• x86/kvmclock: Switch kvmclock data to a PER_CPU variable · 95a3d445
  Committed by Thomas Gleixner
The previous removal of the memblock dependency from kvmclock introduced a
static data array sized 64 bytes * CONFIG_NR_CPUS. That's wasteful on large
systems when kvmclock is not used.
      
      Replace it with:
      
 - A static page sized array of pvclock data. It's page sized because the
   pvclock data of the boot CPU is mapped into the VDSO, and otherwise
   random other data would be exposed there as well.
      
 - A PER_CPU variable of pvclock data pointers. This is used to access the
   pvclock data storage on each CPU.
      
      The setup is done in two stages:
      
       - Early boot stores the pointer to the static page for the boot CPU in
         the per cpu data.
      
 - In the preparatory stage of CPU hotplug, either assign an element of
   the static array (when the CPU number is in that range) or allocate
   memory, and initialize the per cpu pointer.
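
A sketch of the two-level storage (the names mirror the description above
and the kernel's kvmclock code, but take them as illustrative):

  #define HVC_BOOT_ARRAY_SIZE \
          (PAGE_SIZE / sizeof(struct pvclock_vsyscall_time_info))

  /* One page, so only pvclock data is ever mapped into the VDSO. */
  static struct pvclock_vsyscall_time_info
          hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __aligned(PAGE_SIZE);
  static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);

  /* CPU hotplug preparation: static slot if it fits, kzalloc() otherwise. */
  static int kvmclock_setup_percpu(unsigned int cpu)
  {
          struct pvclock_vsyscall_time_info **p =
                  per_cpu_ptr(&hv_clock_per_cpu, cpu);

          if (!*p && cpu < HVC_BOOT_ARRAY_SIZE)
                  *p = &hv_clock_boot[cpu];
          else if (!*p)
                  *p = kzalloc(sizeof(**p), GFP_KERNEL);
          return *p ? 0 : -ENOMEM;
  }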
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-8-pasha.tatashin@oracle.com
• x86/kvmclock: Move kvmclock vsyscall param and init to kvmclock · e499a9b6
  Committed by Thomas Gleixner
There is no point in having this in the KVM guest code itself and calling it
from there. It can be invoked from an initcall, and the parameter is cleared
when the hypervisor is not KVM.
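
A minimal sketch of such a self-contained initcall (kvm_para_available() and
early_initcall() are real kernel interfaces; the variable name and body are
illustrative):

  static int kvmclock_vsyscall = 1;  /* cleared by a boot parameter */

  static int __init kvm_setup_vsyscall_timeinfo(void)
  {
          if (!kvm_para_available() || !kvmclock_vsyscall)
                  return 0;  /* not on KVM, or disabled on the command line */
          /* ... map the boot CPU's pvclock page for the VDSO ... */
          return 0;
  }
  early_initcall(kvm_setup_vsyscall_timeinfo);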
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-7-pasha.tatashin@oracle.com
• x86/kvmclock: Mark variables __initdata and __ro_after_init · 42f8df93
  Committed by Thomas Gleixner
      The kvmclock parameter is init data and the other variables are not
      modified after init.
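
Sketched annotations along the lines described (the variable set matches
kvmclock.c, but verify against the actual patch):

  static int kvmclock __initdata = 1;           /* boot param, init-only */
  static int kvmclock_vsyscall __initdata = 1;  /* boot param, init-only */
  static int msr_kvm_system_time __ro_after_init = MSR_KVM_SYSTEM_TIME;
  static int msr_kvm_wall_clock __ro_after_init = MSR_KVM_WALL_CLOCK;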
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-6-pasha.tatashin@oracle.com
• x86/kvmclock: Cleanup the code · 146c394d
  Committed by Thomas Gleixner
- Clean up the MSR write for the wall clock (see the sketch after this
  list). The type casts to (int) are sloppy because the wrmsr() parameters
  are u32, and aside from that, wrmsrl() already provides the high/low split
  for free.
      
- Remove the pointless get_cpu()/put_cpu() dance from various
  functions. Either they are called during early init, where the CPU is
  guaranteed to be 0, or they are already called from non-preemptible
  context, where smp_processor_id() can be used safely.
      
      - Simplify the convoluted check for kvmclock in the init function.
      
      - Mark the parameter parsing function __init. No point in keeping it
        around.
      
      - Convert to pr_info()
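
The wall clock MSR write, before and after, as a hedged sketch (wrmsrl()
and slow_virt_to_phys() are real kernel helpers):

  /* Before: sloppy casts feeding the split-argument wrmsr():
   *     wrmsr(msr_kvm_wall_clock, (int)pa, (int)(pa >> 32));
   * After: wrmsrl() takes the full u64 and does the split itself. */
  wrmsrl(msr_kvm_wall_clock, slow_virt_to_phys(&wall_clock));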
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-5-pasha.tatashin@oracle.com
• x86/kvmclock: Decrapify kvm_register_clock() · 7a5ddc8f
  Committed by Thomas Gleixner
      The return value is pointless because the wrmsr cannot fail if
      KVM_FEATURE_CLOCKSOURCE or KVM_FEATURE_CLOCKSOURCE2 are set.
      
kvm_register_clock() is only called locally, so it should be static.
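
A sketch of the decrapified function (hv_clock[] stands in for whatever
backs the pvclock data at this point in the series; bit 0 of the written
value is the enable bit):

  static void kvm_register_clock(char *txt)
  {
          int cpu = smp_processor_id();
          u64 pa;

          /* Physical address of this CPU's pvclock data, plus enable bit. */
          pa = slow_virt_to_phys(&hv_clock[cpu].pvti) | 0x01ULL;
          wrmsrl(msr_kvm_system_time, pa);
          pr_info("kvm-clock: cpu %d, msr %llx, %s\n", cpu, pa, txt);
  }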
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-4-pasha.tatashin@oracle.com
• x86/kvmclock: Remove page size requirement from wall_clock · 7ef363a3
  Committed by Thomas Gleixner
      There is no requirement for wall_clock data to be page aligned or page
      sized.
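
So the page-aligned allocation can become a plain static object, roughly:

  /* Sketch: a plain static struct is all the wall clock data needs. */
  static struct pvclock_wall_clock wall_clock;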
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-3-pasha.tatashin@oracle.com
• x86/kvmclock: Remove memblock dependency · 368a540e
  Committed by Pavel Tatashin
      KVM clock is initialized later compared to other hypervisor clocks because
      it has a dependency on the memblock allocator.
      
      Bring it in line with other hypervisors by using memory from the BSS
      instead of allocating it.
      
      The benefits:
      
        - Remove ifdef from common code
        - Earlier availability of the clock
        - Remove dependency on memblock, and reduce code
      
      The downside:
      
  - Static allocation of the per cpu data structures sized NR_CPUS * 64 bytes.
    This will be addressed in follow-up patches.
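
The static replacement could look roughly like this (names illustrative;
page alignment keeps the boot CPU's entry mappable into the VDSO):

  /* Sketch: BSS storage instead of a memblock allocation. */
  static struct pvclock_vsyscall_time_info
          hv_clock[NR_CPUS] __aligned(PAGE_SIZE);  /* ~64 bytes per CPU */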
      
      [ tglx: Split out from larger series ]
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-2-pasha.tatashin@oracle.com
2. July 18, 2018 (2 commits)
• kvmclock: fix TSC calibration for nested guests · e10f7805
  Committed by Peng Hao
Inside a nested guest, access to hardware can be slow enough that
tsc_read_refs() always returns ULLONG_MAX, causing tsc_refine_calibration_work()
to be called periodically and the nested guest to spend a lot of time
reading the ACPI timer.
      
However, if the TSC frequency is available from the pvclock page, we can
just set X86_FEATURE_TSC_KNOWN_FREQ and avoid the recalibration ('refine')
operation.
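
A sketch of the fix (setup_force_cpu_cap(), pvclock_tsc_khz() and
X86_FEATURE_TSC_KNOWN_FREQ are real kernel symbols; the pvclock accessor is
simplified):

  static unsigned long kvm_get_tsc_khz(void)
  {
          /* Frequency comes from the hypervisor: skip the refinement work. */
          setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);
          return pvclock_tsc_khz(&hv_clock[0].pvti);  /* boot CPU, illustrative */
  }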
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
[Commit message rewritten. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
• KVM: VMX: Mark VMXArea with revision_id of physical CPU even when eVMCS enabled · 2307af1c
  Committed by Liran Alon
When eVMCS is enabled, all VMCSs allocated for use by KVM are marked with
the revision_id of KVM_EVMCS_VERSION instead of the revision_id reported
by MSR_IA32_VMX_BASIC.

However, even though not explicitly documented by the TLFS, the VMXArea
passed as the VMXON argument should still be marked with the revision_id
reported by the physical CPU (see the sketch after the repro steps below).
      
This issue was found with the following setup:
* L0 = KVM which exposes eVMCS to its L1 guest.
* L1 = KVM which consumes the eVMCS reported by L0.
This setup caused the following to occur:
1) L1 executes hardware_enable().
2) hardware_enable() calls kvm_cpu_vmxon() to execute VMXON.
3) L0 intercepts L1's VMXON and executes handle_vmon(), which notes
vmxarea->revision_id != VMCS12_REVISION and therefore fails with
nested_vmx_failInvalid(), which sets RFLAGS.CF.
4) L1's kvm_cpu_vmxon() doesn't check RFLAGS.CF for failure, so
hardware_enable() continues as usual.
5) L1's hardware_enable() then calls ept_sync_global(), which executes
INVEPT.
6) L0 intercepts the INVEPT and executes handle_invept(), which notes
!vmx->nested.vmxon and thus raises a #UD to L1.
7) The raised #UD causes L1 to panic.
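
A sketch of the fix in alloc_kvm_area() (close to the shipped code, but
field names and error handling are simplified):

  static int alloc_kvm_area(void)
  {
          int cpu;

          for_each_possible_cpu(cpu) {
                  struct vmcs *vmcs = alloc_vmcs_cpu(cpu);

                  if (!vmcs)
                          return -ENOMEM;  /* cleanup elided in this sketch */
                  /*
                   * alloc_vmcs_cpu() tags the VMCS with KVM_EVMCS_VERSION
                   * when eVMCS is in use; the VMXON region must instead
                   * carry the revision_id the physical CPU reports in
                   * MSR_IA32_VMX_BASIC.
                   */
                  if (static_branch_unlikely(&enable_evmcs))
                          vmcs->revision_id = vmcs_config.revision_id;
                  per_cpu(vmxarea, cpu) = vmcs;
          }
          return 0;
  }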
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Cc: stable@vger.kernel.org
Fixes: 773e8a04
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3. July 15, 2018 (6 commits)
4. July 13, 2018 (1 commit)
5. July 12, 2018 (5 commits)
6. July 11, 2018 (2 commits)
• efi/x86: Fix mixed mode reboot loop by removing pointless call to PciIo->Attributes() · e2967018
  Committed by Ard Biesheuvel
      Hans de Goede reported that his mixed EFI mode Bay Trail tablet
      would not boot at all any more, but enter a reboot loop without
      any logs printed by the kernel.
      
      Unbreak 64-bit Linux/x86 on 32-bit UEFI:
      
      When it was first introduced, the EFI stub code that copies the
      contents of PCI option ROMs originally only intended to do so if
      the EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM attribute was *not* set.
      
      The reason was that the UEFI spec permits PCI option ROM images
      to be provided by the platform directly, rather than via the ROM
      BAR, and in this case, the OS can only access them at runtime if
      they are preserved at boot time by copying them from the areas
      described by PciIo->RomImage and PciIo->RomSize.
      
      However, it implemented this check erroneously, as can be seen in
      commit:
      
        dd5fc854 ("EFI: Stash ROMs if they're not in the PCI BAR")
      
      which introduced:
      
          if (!attributes & EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM)
                  continue;
      
      and given that the numeric value of EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM
      is 0x4000, this condition never becomes true, and so the option ROMs
      were copied unconditionally.
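
The pitfall is pure operator precedence: ! binds tighter than &. A small,
standalone demonstration (userspace C, not the kernel code itself):

  #include <stdio.h>

  #define EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM 0x4000

  int main(void)
  {
          unsigned long vals[] = { 0x0, 0x4000 };

          for (int i = 0; i < 2; i++) {
                  unsigned long attributes = vals[i];
                  /* Buggy: parsed as (!attributes) & 0x4000 -- always 0
                   * (modern compilers warn about this). */
                  int buggy = !attributes & EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM;
                  /* Intended: test the EMBEDDED_ROM bit, then negate. */
                  int good = !(attributes & EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM);

                  printf("attributes=%#lx buggy=%d intended=%d\n",
                         attributes, buggy, good);
          }
          return 0;
  }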
      
      This was spotted and 'fixed' by commit:
      
        886d751a ("x86, efi: correct precedence of operators in setup_efi_pci")
      
      but inadvertently inverted the logic at the same time, defeating
      the purpose of the code, since it now only preserves option ROM
      images that can be read from the ROM BAR as well.
      
      Unsurprisingly, this broke some systems, and so the check was removed
      entirely in the following commit:
      
        73970188 ("x86, efi: remove attribute check from setup_efi_pci")
      
      It is debatable whether this check should have been included in the
      first place, since the option ROM image provided to the UEFI driver by
      the firmware may be different from the one that is actually present in
      the card's flash ROM, and so whatever PciIo->RomImage points at should
      be preferred regardless of whether the attribute is set.
      
      As this was the only use of the attributes field, we can remove
      the call to PciIo->Attributes() entirely, which is especially
      nice because its prototype involves uint64_t type by-value
      arguments which the EFI mixed mode has trouble dealing with.
      
      Any mixed mode system with PCI is likely to be affected.
Tested-by: Wilfried Klaebe <linux-kernel@lebenslange-mailadresse.de>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20180711090235.9327-2-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
• ARM: 8775/1: NOMMU: Use instr_sync instead of plain isb in common code · cea39477
  Committed by Vladimir Murzin
      Greg reported that commit 3c241210 ("ARM: 8756/1: NOMMU: Postpone
      MPU activation till __after_proc_init") is causing breakage for the
      old Versatile platform in no-MMU mode (with out-of-tree patches):
      
        AS      arch/arm/kernel/head-nommu.o
      arch/arm/kernel/head-nommu.S: Assembler messages:
      arch/arm/kernel/head-nommu.S:180: Error: selected processor does not support `isb' in ARM mode
      scripts/Makefile.build:417: recipe for target 'arch/arm/kernel/head-nommu.o' failed
      make[2]: *** [arch/arm/kernel/head-nommu.o] Error 1
      Makefile:1034: recipe for target 'arch/arm/kernel' failed
      make[1]: *** [arch/arm/kernel] Error 2
      
Since the code is common to all NOMMU builds, using 'isb' was a bad idea
(note that 'isb' is also used in the MPU-related code, which is fine because
the MPU depends on CPU_V7/CPU_V7M). Instead, use the more robust
'instr_sync' assembler macro.
      
      Fixes: 3c241210 ("ARM: 8756/1: NOMMU: Postpone MPU activation till __after_proc_init")
Reported-by: Greg Ungerer <gerg@kernel.org>
Tested-by: Greg Ungerer <gerg@kernel.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
7. July 10, 2018 (1 commit)
• Revert "arm64: Use aarch64elf and aarch64elfb emulation mode variants" · 96f95a17
  Committed by Laura Abbott
      This reverts commit 38fc4248.
      
      Distributions such as Fedora and Debian do not package the ELF linker
      scripts with their toolchains, resulting in kernel build failures such
      as:
      
        |   CHK     include/generated/compile.h
        |   LD [M]  arch/arm64/crypto/sha512-ce.o
        | aarch64-linux-gnu-ld: cannot open linker script file ldscripts/aarch64elf.xr: No such file or directory
        | make[1]: *** [scripts/Makefile.build:530: arch/arm64/crypto/sha512-ce.o] Error 1
        | make: *** [Makefile:1029: arch/arm64/crypto] Error 2
      
      Revert back to the linux targets for now, adding a comment to the Makefile
      so we don't accidentally break this in the future.
      
      Cc: Paul Kocialkowski <contact@paulk.fr>
      Cc: <stable@vger.kernel.org>
      Fixes: 38fc4248 ("arm64: Use aarch64elf and aarch64elfb emulation mode variants")
Tested-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
8. July 8, 2018 (1 commit)
9. July 7, 2018 (1 commit)
10. July 6, 2018 (4 commits)
• ARM: dts: armada-38x: use the new thermal binding · 568cc2f0
  Committed by Baruch Siach
Commit 2f28e4c2 ("thermal: armada: Clarify control registers accesses")
introduced the new thermal binding, which extends the second registers
field size to 8. Switch to the new binding to fix the thermal readings.
Without this change, the fix for erratum #132698 introduced in commit
8c0b888f ("thermal: armada: Change sensors trim default value") has no
effect.
      
      Cc: stable@vger.kernel.org # v4.16+
Reviewed-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
• x86/hyper-v: Fix the circular dependency in IPI enlightenment · 1268ed0c
  Committed by K. Y. Srinivasan
The IPI hypercalls depend on being able to map the Linux notion of CPU ID
to the hypervisor's notion of the CPU ID. The hv_vp_index[] array provides
this mapping, but the code for populating it depends on the IPI
functionality. Break this circular dependency.
      
      [ tglx: Use a proper define instead of '-1' with a u32 variable as pointed
        	out by Vitaly ]
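
A sketch of the Vitaly-suggested detail (U32_MAX is real; the helper name
here is hypothetical):

  #define VP_INVAL        U32_MAX  /* named sentinel instead of a raw -1 */

  /* Hypothetical helper: mark entries invalid until the real VP index
   * is known; the IPI path checks for VP_INVAL and falls back. */
  static void hv_vp_index_invalidate(void)
  {
          int i;

          for (i = 0; i < num_possible_cpus(); i++)
                  hv_vp_index[i] = VP_INVAL;
  }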
      
      Fixes: 68bb7bfb ("X86/Hyper-V: Enable IPI enlightenments")
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
      Cc: gregkh@linuxfoundation.org
      Cc: devel@linuxdriverproject.org
      Cc: olaf@aepfle.de
      Cc: apw@canonical.com
      Cc: jasowang@redhat.com
      Cc: hpa@zytor.com
      Cc: sthemmin@microsoft.com
      Cc: Michael.H.Kelley@microsoft.com
      Cc: vkuznets@redhat.com
      Link: https://lkml.kernel.org/r/20180703230155.15160-1-kys@linuxonhyperv.com
      
• MIPS: Fix ioremap() RAM check · 523402fa
  Committed by Paul Burton
      We currently attempt to check whether a physical address range provided
      to __ioremap() may be in use by the page allocator by examining the
      value of PageReserved for each page in the region - lowmem pages not
      marked reserved are presumed to be in use by the page allocator, and
      requests to ioremap them fail.
      
      The way we check this has been broken since commit 92923ca3 ("mm:
      meminit: only set page reserved in the memblock region"), because
      memblock will typically not have any knowledge of non-RAM pages and
      therefore those pages will not have the PageReserved flag set. Thus when
      we attempt to ioremap a region outside of RAM we incorrectly fail
      believing that the region is RAM that may be in use.
      
      In most cases ioremap() on MIPS will take a fast-path to use the
      unmapped kseg1 or xkphys virtual address spaces and never hit this path,
      so the only way to hit it is for a MIPS32 system to attempt to ioremap()
      an address range in lowmem with flags other than _CACHE_UNCACHED.
      Perhaps the most straightforward way to do this is using
      ioremap_uncached_accelerated(), which is how the problem was discovered.
      
      Fix this by making use of walk_system_ram_range() to test the address
      range provided to __ioremap() against only RAM pages, rather than all
      lowmem pages. This means that if we have a lowmem I/O region, which is
      very common for MIPS systems, we're free to ioremap() address ranges
      within it. A nice bonus is that the test is no longer limited to lowmem.
      
      The approach here matches the way x86 performed the same test after
      commit c81c8a1e ("x86, ioremap: Speed up check for RAM pages") until
      x86 moved towards a slightly more complicated check using walk_mem_res()
      for unrelated reasons with commit 0e4c12b4 ("x86/mm, resource: Use
      PAGE_KERNEL protection for ioremap of memory pages").
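
A hedged sketch of the walk_system_ram_range() based check (modeled on the
x86 variant referenced above; the call-site arithmetic is simplified):

  /* Returns nonzero if any page in the range is allocatable RAM. */
  static int __ioremap_check_ram(unsigned long start_pfn,
                                 unsigned long nr_pages, void *arg)
  {
          unsigned long i;

          for (i = 0; i < nr_pages; i++)
                  if (pfn_valid(start_pfn + i) &&
                      !PageReserved(pfn_to_page(start_pfn + i)))
                          return 1;
          return 0;
  }

  /* In __ioremap(): walk only actual RAM instead of all of lowmem, e.g.
   *
   *      if (walk_system_ram_range(PFN_DOWN(phys_addr),
   *                                PFN_UP(last_addr) - PFN_DOWN(phys_addr),
   *                                NULL, __ioremap_check_ram))
   *              return NULL;
   */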
Signed-off-by: Paul Burton <paul.burton@mips.com>
Reported-by: Serge Semin <fancer.lancer@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
      Fixes: 92923ca3 ("mm: meminit: only set page reserved in the memblock region")
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org # v4.2+
      Patchwork: https://patchwork.linux-mips.org/patch/19786/
• arm64: remove no-op -p linker flag · 1a381d4a
  Committed by Greg Hackmann
      Linking the ARM64 defconfig kernel with LLVM lld fails with the error:
      
        ld.lld: error: unknown argument: -p
        Makefile:1015: recipe for target 'vmlinux' failed
      
      Without this flag, the ARM64 defconfig kernel successfully links with
      lld and boots on Dragonboard 410c.
      
      After digging through binutils source and changelogs, it turns out that
      -p is only relevant to ancient binutils installations targeting 32-bit
      ARM.  binutils accepts -p for AArch64 too, but it's always been
      undocumented and silently ignored.  A comment in
      ld/emultempl/aarch64elf.em explains that it's "Only here for backwards
      compatibility".
      
      Since this flag is a no-op on ARM64, we can safely drop it.
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
11. July 5, 2018 (6 commits)