1. 07 Aug 2018, 2 commits
    • xen: don't use privcmd_call() from xen_mc_flush() · cd913922
      Authored by Juergen Gross
      Using privcmd_call() for a singleton multicall seems to be wrong, as
      privcmd_call() is using stac()/clac() to enable hypervisor access to
      Linux user space.
      
      Even if this is not a problem today (PV domains can't use SMAP, while HVM
      and PVH domains can't use multicalls), things might change when
      PVH dom0 support is added to the kernel.
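
      For illustration, the shape of the fix is to issue the lone pending entry as a
      plain hypercall, bypassing privcmd_call() and its stac()/clac() pair. The helper
      name below follows the upstream fix, but treat the snippet as a sketch rather
      than the exact diff:

        /* Singleton multicall: do the call directly, without the
         * user-access window that privcmd_call() opens via stac()/clac(). */
        mc = &b->entries[0];
        mc->result = xen_single_call(mc->op, mc->args[0], mc->args[1],
                                     mc->args[2], mc->args[3], mc->args[4]);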
      Reported-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      cd913922
    • xen/pv: Call get_cpu_address_sizes to set x86_virt/phys_bits · 405c018a
      Authored by M. Vefa Bicakci
      Commit d94a155c ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits
      adjustment corruption") has moved the query and calculation of the
      x86_virt_bits and x86_phys_bits fields of the cpuinfo_x86 struct
      from the get_cpu_cap function to a new function named
      get_cpu_address_sizes.
      
      One of the call sites related to Xen PV VMs was unfortunately missed
      in the aforementioned commit. This prevents successful boot-up of
      kernel versions 4.17 and up in Xen PV VMs if CONFIG_DEBUG_VIRTUAL
      is enabled, due to the following code path:
      
        enlighten_pv.c::xen_start_kernel
          mmu_pv.c::xen_reserve_special_pages
            page.h::__pa
              physaddr.c::__phys_addr
                physaddr.h::phys_addr_valid
      
      phys_addr_valid uses boot_cpu_data.x86_phys_bits to validate physical
      addresses. boot_cpu_data.x86_phys_bits is no longer populated before
      the call to xen_reserve_special_pages due to the aforementioned commit
      though, so the validation performed by phys_addr_valid fails, which
      causes __phys_addr to trigger a BUG, preventing boot-up.
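
      The check that trips is roughly the following (simplified from
      arch/x86/mm/physaddr.h on x86_64, shown only to illustrate the failure mode):

        static inline int phys_addr_valid(resource_size_t addr)
        {
                /* With x86_phys_bits still unset at this point in boot, any
                 * non-zero physical address is rejected, and __phys_addr()
                 * then hits its VIRTUAL_BUG_ON(). */
                return !(addr >> boot_cpu_data.x86_phys_bits);
        }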
      Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: xen-devel@lists.xenproject.org
      Cc: x86@kernel.org
      Cc: stable@vger.kernel.org # for v4.17 and up
      Fixes: d94a155c ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits adjustment corruption")
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      405c018a
  2. 05 Aug 2018, 1 commit
    • x86: Don't include linux/irq.h from asm/hardirq.h · 447ae316
      Authored by Nicolai Stange
      The next patch in this series will have to make the definition of
      irq_cpustat_t available to entering_irq().
      
      Inclusion of asm/hardirq.h into asm/apic.h would cause circular header
      dependencies like
      
        asm/smp.h
          asm/apic.h
            asm/hardirq.h
              linux/irq.h
                linux/topology.h
                  linux/smp.h
                    asm/smp.h
      
      or
      
        linux/gfp.h
          linux/mmzone.h
            asm/mmzone.h
              asm/mmzone_64.h
                asm/smp.h
                  asm/apic.h
                    asm/hardirq.h
                      linux/irq.h
                        linux/irqdesc.h
                          linux/kobject.h
                            linux/sysfs.h
                              linux/kernfs.h
                                linux/idr.h
                                  linux/gfp.h
      
      and others.
      
      This causes compilation errors because of the header guards becoming
      effective in the second inclusion: symbols/macros that had been defined
      before wouldn't be available to intermediate headers in the #include chain
      anymore.
      
      A possible workaround would be to move the definition of irq_cpustat_t
      into its own header and include that from both asm/hardirq.h and
      asm/apic.h.
      
      However, this wouldn't solve the real problem, namely asm/hardirq.h
      unnecessarily pulling in all the linux/irq.h cruft: nothing in
      asm/hardirq.h itself requires it. Also, note that there are some other
      archs, like e.g. arm64, which don't have that #include in their
      asm/hardirq.h.
      
      Remove the linux/irq.h #include from x86' asm/hardirq.h.
      
      Fix resulting compilation errors by adding appropriate #includes to *.c
      files as needed.
      
      Note that some of these *.c files could be cleaned up a bit with respect to
      their set of #includes, but that is better done in separate patches, if
      at all.
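
      The per-file fixups follow a simple pattern, e.g. (illustrative, not taken from
      a specific file in the patch):

        /* before: linux/irq.h was pulled in indirectly via asm/hardirq.h */
        #include <asm/hardirq.h>

        /* after: the dependency is spelled out explicitly */
        #include <linux/irq.h>
        #include <asm/hardirq.h>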
      Signed-off-by: Nicolai Stange <nstange@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      447ae316
  3. 27 Jul 2018, 1 commit
  4. 20 Jul 2018, 2 commits
    • x86/xen/time: Output xen sched_clock time from 0 · 38669ba2
      Authored by Pavel Tatashin
      sched_clock() is expected to count from 0 when the system boots.
      
      Add an offset, xen_sched_clock_offset (similar to how other hypervisors do it,
      e.g. kvm_sched_clock_offset), captured when time is first initialized, so that
      sched_clock() counts from 0.
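
      A minimal sketch of the idea, assuming the existing xen_clocksource_read()
      helper (function names here are illustrative; details of the real patch may
      differ):

        static u64 xen_sched_clock_offset __read_mostly;

        static void __init xen_sched_clock_init(void)
        {
                /* capture "now" once, when time is first initialized */
                xen_sched_clock_offset = xen_clocksource_read();
        }

        static u64 xen_sched_clock(void)
        {
                /* sched_clock() now counts from 0 at boot */
                return xen_clocksource_read() - xen_sched_clock_offset;
        }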
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Cc: pbonzini@redhat.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-14-pasha.tatashin@oracle.com
      38669ba2
    • x86/xen/time: Initialize pv xen time in init_hypervisor_platform() · 7b25b9cb
      Authored by Pavel Tatashin
      In every hypervisor except Xen PV, time ops are initialized in
      init_hypervisor_platform().
      
      Xen PV domains initialize time ops in x86_init.paging.pagetable_init(),
      by calling xen_setup_shared_info(). This is poor design, as time is
      needed before the memory allocator is available.
      
      xen_setup_shared_info() is called from two places: during boot and
      after suspend. Split the content of xen_setup_shared_info() into
      three places (a minimal sketch of the wiring follows the list):
      
      1. Add the clock-relevant data into the new Xen PV init_platform vector,
         and set the clock ops there.
      
      2. Move xen_setup_vcpu_info_placement() to the new xen_pv_guest_late_init()
         call.
      
      3. Move the re-initialization of parts of the shared info copy to
         xen_pv_post_suspend(), to be symmetric with xen_pv_pre_suspend().
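
      As a minimal sketch of the wiring for items 1 and 2 (the x86_init.hyper fields
      exist; the exact assignment site and helper names are illustrative):

        x86_init.hyper.init_platform   = xen_pv_init_platform;   /* item 1: clock setup */
        x86_init.hyper.guest_late_init = xen_pv_guest_late_init; /* item 2: vcpu_info placement */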
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Cc: pbonzini@redhat.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-13-pasha.tatashin@oracle.com
      7b25b9cb
  5. 13 Jul 2018, 1 commit
  6. 12 Jul 2018, 1 commit
  7. 21 Jun 2018, 1 commit
  8. 19 Jun 2018, 1 commit
  9. 04 Jun 2018, 2 commits
  10. 19 May 2018, 1 commit
  11. 15 May 2018, 1 commit
  12. 14 May 2018, 2 commits
    • xen/privcmd: add IOCTL_PRIVCMD_MMAP_RESOURCE · 3ad08765
      Authored by Paul Durrant
      My recent Xen patch series introduces a new HYPERVISOR_memory_op to
      support direct priv-mapping of certain guest resources (such as ioreq
      pages, used by emulators) by a tools domain, rather than having to access
      such resources via the guest P2M.
      
      This patch adds the necessary infrastructure to the privcmd driver and
      Xen MMU code to support direct resource mapping.
      
      NOTE: The adjustment in the MMU code is partially cosmetic. Xen will now
            allow a PV tools domain to map guest pages either by GFN or MFN, thus
            the term 'mfn' has been swapped for 'pfn' in the lower layers of the
            remap code.
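
      For a rough idea of how a tools-domain process would use this, a hypothetical
      user-space sketch follows; the struct layout and field names are paraphrased
      from memory of include/uapi/xen/privcmd.h and should be checked against the
      header before use:

        #include <fcntl.h>
        #include <stdint.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <xen/privcmd.h>   /* IOCTL_PRIVCMD_MMAP_RESOURCE */

        static void *map_guest_resource(uint16_t domid, uint32_t type, uint32_t id,
                                        uint64_t nr_frames)
        {
                int fd = open("/dev/xen/privcmd", O_RDWR);
                /* reserve virtual address space backed by the privcmd device */
                void *addr = mmap(NULL, nr_frames << 12, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
                struct privcmd_mmap_resource res = {
                        .dom  = domid,     /* target guest */
                        .type = type,      /* e.g. ioreq server pages */
                        .id   = id,
                        .idx  = 0,
                        .num  = nr_frames,
                        .addr = (uintptr_t)addr,
                };
                ioctl(fd, IOCTL_PRIVCMD_MMAP_RESOURCE, &res);  /* error handling omitted */
                return addr;
        }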
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      3ad08765
    • x86/xen/efi: Initialize UEFI secure boot state during dom0 boot · a7012bdb
      Authored by Daniel Kiper
      Initialize the UEFI secure boot state during dom0 boot. Otherwise the kernel
      may not even know that it runs on a secure-boot-enabled platform.
      
      Note that part of drivers/firmware/efi/libstub/secureboot.c is duplicated
      by this patch, only in this case, it runs in the context of the kernel
      proper rather than UEFI boot context. The reason for the duplication is
      that maintaining the original code to run correctly on ARM/arm64 as well
      as on all the quirky x86 firmware we support is enough of a burden as it
      is, and adding the x86/Xen execution context to that mix just so we can
      reuse a single routine just isn't worth it.
      
      [ardb: explain rationale for code duplication]
      Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/20180504060003.19618-2-ard.biesheuvel@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a7012bdb
  13. 08 May 2018, 1 commit
    • x86/xen: Reset VCPU0 info pointer after shared_info remap · d1ecfa9d
      Authored by Frank van der Linden
      This patch fixes crashes during boot for HVM guests on older (pre HVM
      vector callback) Xen versions. Without this, current kernels will always
      fail to boot on those Xen versions.
      
      Sample stack trace:
      
         BUG: unable to handle kernel paging request at ffffffffff200000
         IP: __xen_evtchn_do_upcall+0x1e/0x80
         PGD 1e0e067 P4D 1e0e067 PUD 1e10067 PMD 235c067 PTE 0
          Oops: 0002 [#1] SMP PTI
         Modules linked in:
         CPU: 0 PID: 512 Comm: kworker/u2:0 Not tainted 4.14.33-52.13.amzn1.x86_64 #1
         Hardware name: Xen HVM domU, BIOS 3.4.3.amazon 11/11/2016
         task: ffff88002531d700 task.stack: ffffc90000480000
         RIP: 0010:__xen_evtchn_do_upcall+0x1e/0x80
         RSP: 0000:ffff880025403ef0 EFLAGS: 00010046
         RAX: ffffffff813cc760 RBX: ffffffffff200000 RCX: ffffc90000483ef0
         RDX: ffff880020540a00 RSI: ffff880023c78000 RDI: 000000000000001c
         RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
         R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
         R13: ffff880025403f5c R14: 0000000000000000 R15: 0000000000000000
         FS:  0000000000000000(0000) GS:ffff880025400000(0000) knlGS:0000000000000000
         CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
         CR2: ffffffffff200000 CR3: 0000000001e0a000 CR4: 00000000000006f0
          Call Trace:
         <IRQ>
         do_hvm_evtchn_intr+0xa/0x10
         __handle_irq_event_percpu+0x43/0x1a0
         handle_irq_event_percpu+0x20/0x50
         handle_irq_event+0x39/0x60
         handle_fasteoi_irq+0x80/0x140
         handle_irq+0xaf/0x120
         do_IRQ+0x41/0xd0
         common_interrupt+0x7d/0x7d
         </IRQ>
      
      During boot, the HYPERVISOR_shared_info page gets remapped to make it work
      with KASLR. This means that any pointer derived from it needs to be
      adjusted.
      
      The only value that this applies to is the vcpu_info pointer for VCPU 0.
      For PV and HVM with the callback vector feature, this gets done via the
      smp_ops prepare_boot_cpu callback. Older Xen versions do not support the
      HVM callback vector, so there is no Xen-specific smp_ops set up in that
      scenario. So, the vcpu_info pointer for VCPU 0 never gets set to the proper
      value, and the first reference of it will be bad. Fix this by resetting it
      immediately after the remap.
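
      The fix itself is essentially a one-liner; a sketch, assuming the existing
      xen_vcpu_info_reset() helper that repoints the per-cpu vcpu_info pointer:

        /* HYPERVISOR_shared_info has just been remapped for KASLR, so the
         * cached vcpu_info pointer for VCPU 0 is stale; repoint it. */
        xen_vcpu_info_reset(0);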
      Signed-off-by: Frank van der Linden <fllinden@amazon.com>
      Reviewed-by: Eduardo Valentin <eduval@amazon.com>
      Reviewed-by: Alakesh Haloi <alakeshh@amazon.com>
      Reviewed-by: Vallish Vaidyeshwara <vallish@amazon.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: xen-devel@lists.xenproject.org
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      d1ecfa9d
  14. 20 Apr 2018, 1 commit
  15. 12 Apr 2018, 1 commit
    • xen, mm: allow deferred page initialization for xen pv domains · 6f84f8d1
      Authored by Pavel Tatashin
      Juergen Gross noticed that commit f7f99100 ("mm: stop zeroing memory
      during allocation in vmemmap") broke XEN PV domains when deferred struct
      page initialization is enabled.
      
      This is because Xen's PagePinned flag gets erased from struct pages
      when they are initialized later in boot.
      
      Juergen fixed this problem by disabling deferred pages on Xen PV
      domains.  It is desirable, however, to have this feature available as it
      reduces boot time.  This fix re-enables the feature for PV domains, and
      fixes the problem in the following way:
      
      The fix is to delay setting PagePinned flag until struct pages for all
      allocated memory are initialized, i.e.  until after free_all_bootmem().
      
      A new x86_init.hyper op, init_after_bootmem(), is called to let Xen know
      that the boot allocator is done, and hence struct pages for all the
      allocated memory are now initialized.  If deferred page initialization
      is enabled, the rest of the struct pages are going to be initialized later
      in boot, once page_alloc_init_late() is called.
      
      xen_after_bootmem() walks the page table pages and marks them pinned.
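
      A sketch of the new hook's wiring as described above (the callback body, which
      walks the page table pages, is in the patch itself):

        /* Xen PV registers its callback for the new x86_init hyper op ... */
        x86_init.hyper.init_after_bootmem = xen_after_bootmem;

        /* ... which generic x86 code invokes once the boot allocator is done,
         * i.e. after free_all_bootmem(). */
        x86_init.hyper.init_after_bootmem();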
      
      Link: http://lkml.kernel.org/r/20180226160112.24724-2-pasha.tatashin@oracle.com
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Tested-by: Juergen Gross <jgross@suse.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Mathias Krause <minipli@googlemail.com>
      Cc: Jinbum Park <jinb.park7@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Jia Zhang <zhang.jia@linux.alibaba.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6f84f8d1
  16. 10 Apr 2018, 2 commits
    • x86/apic: Fix signedness bug in APIC ID validity checks · a774635d
      Authored by Li RongQing
      The APIC ID as parsed from ACPI MADT is validity checked with the
      apic->apic_id_valid() callback, which depends on the selected APIC type.
      
      For non-X2APIC types, APIC IDs >= 0xFF are invalid, but values > 0x7FFFFFFF
      are detected as valid. This happens because the 'apicid' argument of the
      apic_id_valid() callback is type 'int'. So the resulting comparison
      
         apicid < 0xFF
      
      evaluates to true for all unsigned int values > 0x7FFFFFFF which are handed
      to default_apic_id_valid(). As a consequence, invalid APIC IDs in !X2APIC
      mode are considered valid and accounted as possible CPUs.
      
      Change the apicid argument type of the apic_id_valid() callback to u32 so
      the evaluation is unsigned and returns the correct result.
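
      The effect is easy to reproduce outside the kernel; a stand-alone demonstration
      in plain C (not kernel code):

        #include <stdint.h>
        #include <stdio.h>

        /* old shape: signed parameter; 0xFFFFFFFF arrives as -1, and -1 < 0xFF */
        static int apic_id_valid_signed(int apicid)        { return apicid < 0xFF; }

        /* fixed shape: an unsigned parameter keeps the comparison unsigned */
        static int apic_id_valid_unsigned(uint32_t apicid) { return apicid < 0xFF; }

        int main(void)
        {
                uint32_t bogus = 0xFFFFFFFFu;  /* invalid APIC ID from MADT */
                printf("signed:   %d\n", apic_id_valid_signed(bogus));   /* 1: wrongly valid */
                printf("unsigned: %d\n", apic_id_valid_unsigned(bogus)); /* 0: correctly invalid */
                return 0;
        }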
      
      [ tglx: Massaged changelog ]
      Signed-off-by: Li RongQing <lirongqing@baidu.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Cc: jgross@suse.com
      Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: hpa@zytor.com
      Link: https://lkml.kernel.org/r/1523322966-10296-1-git-send-email-lirongqing@baidu.com
      a774635d
    • xen/pvh: Indicate XENFEAT_linux_rsdp_unrestricted to Xen · a5a18ae7
      Authored by Boris Ostrovsky
      Pre-4.17 kernels ignored start_info's rsdp_paddr pointer and instead
      relied on finding the RSDP in the standard location in BIOS RO memory. This
      has worked since that's where Xen used to place it.
      
      However, with a recent Xen change (commit 4a5733771e6f ("libxl: put RSDP
      for PVH guest near 4GB")) Xen prefers to keep the RSDP at a "non-standard"
      address. Even though as of commit b17d9d1d ("x86/xen: Add pvh
      specific rsdp address retrieval function") Linux is able to find the RSDP,
      for backward-compatibility reasons we need to indicate to Xen that we can
      handle this, and we do so by setting the XENFEAT_linux_rsdp_unrestricted
      flag in the ELF notes.
      
      (Also take this opportunity and sync features.h header file with Xen)
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Wei Liu <wei.liu2@citrix.com>
      a5a18ae7
  17. 06 Apr 2018, 1 commit
    • time: tick-sched: Reorganize idle tick management code · 0e776768
      Authored by Rafael J. Wysocki
      Prepare the scheduler tick code for reworking the idle loop to
      avoid stopping the tick in some cases.
      
      The idea is to split the nohz idle entry call to decouple the idle
      time stats accounting and preparatory work from the actual tick stop
      code, in order to later be able to delay the tick stop once we reach
      more power-knowledgeable callers.
      
      Move away the tick_nohz_start_idle() invocation from
      __tick_nohz_idle_enter(), rename the latter to
      __tick_nohz_idle_stop_tick() and define tick_nohz_idle_stop_tick()
      as a wrapper around it for calling it from the outside.
      
      Make tick_nohz_idle_enter() only call tick_nohz_start_idle() instead
      of calling the entire __tick_nohz_idle_enter(), add another wrapper
      disabling and enabling interrupts around tick_nohz_idle_stop_tick()
      and make the current callers of tick_nohz_idle_enter() call it too
      to retain their current functionality.
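
      A rough sketch of the resulting entry points, following the description above
      (simplified; the struct tick_sched bookkeeping and idle-stats details are
      omitted, and the wrapper names should be treated as illustrative):

        /* outside-callable wrapper around the renamed internal helper */
        void tick_nohz_idle_stop_tick(void)
        {
                __tick_nohz_idle_stop_tick(this_cpu_ptr(&tick_cpu_sched));
        }

        /* convenience wrapper providing the interrupts-off context itself */
        void tick_nohz_idle_stop_tick_protected(void)
        {
                local_irq_disable();
                tick_nohz_idle_stop_tick();
                local_irq_enable();
        }

        /* tick_nohz_idle_enter() now only does the preparatory work */
        void tick_nohz_idle_enter(void)
        {
                local_irq_disable();
                tick_nohz_start_idle(this_cpu_ptr(&tick_cpu_sched));
                local_irq_enable();
        }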
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      0e776768
  18. 21 Mar 2018, 1 commit
  19. 01 Mar 2018, 1 commit
  20. 28 Feb 2018, 1 commit
  21. 26 Feb 2018, 1 commit
  22. 21 Feb 2018, 1 commit
  23. 17 Feb 2018, 2 commits
  24. 15 Feb 2018, 1 commit
  25. 08 Feb 2018, 1 commit
  26. 06 Feb 2018, 1 commit
  27. 11 Jan 2018, 1 commit
    • x86/gart: Exclude GART aperture from vmcore · 2a3e83c6
      Authored by Jiri Bohac
      On machines where the GART aperture is mapped over physical RAM,
      /proc/vmcore contains the remapped range and reading it may cause hangs or
      reboots.
      
      In the past, the GART region was added into the resource map, implemented
      by commit 56dd669a ("[PATCH] Insert GART region into resource map")
      
      However, inserting the iomem_resource from the early GART code caused
      resource conflicts with some AGP drivers (bko#72201), which got avoided by
      reverting the patch in commit 707d4eef ("Revert [PATCH] Insert GART
      region into resource map"). This revert introduced the /proc/vmcore bug.
      
      The vmcore ELF header is either prepared by the kernel (when using the
      kexec_file_load syscall) or by the kexec userspace (when using the kexec_load
      syscall). Since we no longer have the GART iomem resource, the userspace
      kexec has no way of knowing which region to exclude from the ELF header.
      
      Changes from v1 of this patch:
      Instead of excluding the aperture from the ELF header, this patch
      makes /proc/vmcore return zeroes in the second kernel when attempting to
      read the aperture region. This is done by reusing the
      oldmem_pfn_is_ram infrastructure originally intended to exclude Xen
      ballooned memory. This works for both the kexec_file_load and kexec_load
      syscalls.
      
      [Note that the GART region is the same in the first and second kernels:
      regardless of whether the first kernel fixed up the northbridge/BIOS setting
      and mapped the aperture over physical memory, the second kernel finds the
      northbridge properly configured by the first kernel and the aperture
      never overlaps with e820 memory because the second kernel has a fake e820
      map created from the crashkernel memory regions. Thus, the second kernel
      keeps the aperture address/size as configured by the first kernel.]
      
      register_oldmem_pfn_is_ram can only register one callback and returns an error
      if a callback has been registered already. Since Xen used to be the only user
      of this function, its caller never checked the return value. Now that we have
      more than one user, I added a WARN_ON just in case agp, Xen, or any other future
      user of register_oldmem_pfn_is_ram were to step on each other's toes.
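
      A sketch of the registration described above (the callback and helper names are
      illustrative; the real code lives in arch/x86/kernel/aperture_64.c):

        static unsigned long aperture_pfn_start, aperture_page_count;

        /* report aperture pfns as "not RAM" so /proc/vmcore reads return zeroes */
        static int gart_oldmem_pfn_is_ram(unsigned long pfn)
        {
                return pfn < aperture_pfn_start ||
                       pfn >= aperture_pfn_start + aperture_page_count;
        }

        static void __init gart_exclude_from_vmcore(unsigned long start_pfn,
                                                    unsigned long nr_pages)
        {
                aperture_pfn_start = start_pfn;
                aperture_page_count = nr_pages;
                /* only one callback can be registered; warn if another user
                 * (e.g. the Xen balloon code) already claimed the hook */
                WARN_ON(register_oldmem_pfn_is_ram(&gart_oldmem_pfn_is_ram));
        }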
      
      Fixes: 707d4eef ("Revert [PATCH] Insert GART region into resource map")
      Signed-off-by: Jiri Bohac <jbohac@suse.cz>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: yinghai@kernel.org
      Cc: joro@8bytes.org
      Cc: kexec@lists.infradead.org
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Link: https://lkml.kernel.org/r/20180106010013.73suskgxm7lox7g6@dwarf.suse.cz
      2a3e83c6
  28. 08 Jan 2018, 2 commits
  29. 23 Dec 2017, 1 commit
    • x86/cpu_entry_area: Move it out of the fixmap · 92a0f81d
      Authored by Thomas Gleixner
      Put the cpu_entry_area into a separate P4D entry. The fixmap gets too big
      and 0-day already hit a case where the fixmap PTEs were cleared by
      cleanup_highmap().
      
      Aside from that, the fixmap API is a pain as it's all backwards.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      92a0f81d
  30. 21 Dec 2017, 1 commit
  31. 20 Dec 2017, 1 commit
  32. 17 Dec 2017, 2 commits
    • x86/entry/64: Make cpu_entry_area.tss read-only · c482feef
      Authored by Andy Lutomirski
      The TSS is a fairly juicy target for exploits, and, now that the TSS
      is in the cpu_entry_area, it's no longer protected by kASLR.  Make it
      read-only on x86_64.
      
      On x86_32, it can't be RO because it's written by the CPU during task
      switches, and we use a task gate for double faults.  I'd also be
      nervous about errata if we tried to make it RO even on configurations
      without double fault handling.
      
      [ tglx: AMD confirmed that there is no problem on 64-bit with TSS RO.  So
        	it's probably safe to assume that it's a non issue, though Intel
        	might have been creative in that area. Still waiting for
        	confirmation. ]
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bpetkov@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150606.733700132@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c482feef
    • x86/mm/fixmap: Generalize the GDT fixmap mechanism, introduce struct cpu_entry_area · ef8813ab
      Authored by Andy Lutomirski
      Currently, the GDT is an ad-hoc array of pages, one per CPU, in the
      fixmap.  Generalize it to be an array of a new 'struct cpu_entry_area'
      so that we can cleanly add new things to it.
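
      At this stage the structure is deliberately minimal, roughly (later patches in
      the series grow it; the layout shown here is illustrative):

        struct cpu_entry_area {
                /* still one GDT page per CPU, just with a named home now */
                char gdt[PAGE_SIZE];
        };

        #define CPU_ENTRY_AREA_PAGES (sizeof(struct cpu_entry_area) / PAGE_SIZE)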
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Link: https://lkml.kernel.org/r/20171204150605.563271721@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ef8813ab