1. 08 Sep, 2020: 23 commits
  2. 04 Sep, 2020: 5 commits
  3. 03 Sep, 2020: 4 commits
    • MIPS: SNI: Fix SCSI interrupt · baf5cb30
      Thomas Bogendoerfer authored
      On RM400 (a20r) machines, ISA and SCSI interrupts share the same
      interrupt line. Commit 49e6e07e ("MIPS: pass non-NULL dev_id on shared
      request_irq()") accidentally dropped the IRQF_SHARED flag, which breaks
      registration of the SCSI interrupt. Put IRQF_SHARED back and add a
      dev_id for the ISA interrupt.
      
      Fixes: 49e6e07e ("MIPS: pass non-NULL dev_id on shared request_irq()")
      Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
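      As a reference for the pattern, here is a minimal, hypothetical sketch
      (handler and device names are made up for illustration) of requesting a
      shared interrupt line: IRQF_SHARED must be set and dev_id must be
      non-NULL so the interrupt core can tell the handlers on the line apart.

      	#include <linux/interrupt.h>
      	#include <linux/platform_device.h>

      	/* Hypothetical handler name, for illustration only. */
      	static irqreturn_t a20r_isa_irq(int irq, void *dev_id)
      	{
      		/* Check and handle the ISA interrupt sources here. */
      		return IRQ_HANDLED;
      	}

      	static int a20r_request_shared_irq(struct platform_device *pdev, int irq)
      	{
      		/*
      		 * IRQF_SHARED lets the SCSI and ISA handlers coexist on the
      		 * same line; passing pdev as dev_id gives the core a unique,
      		 * non-NULL cookie for this handler.
      		 */
      		return request_irq(irq, a20r_isa_irq, IRQF_SHARED,
      				   "a20r-isa", pdev);
      	}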
    • MIPS: add missing MSACSR and upper MSA initialization · bb067482
      Huang Pei authored
      Since commit cc97ab23 ("MIPS: Simplify FP context initialization"),
      init_fp_ctx() only initializes the FP/MSA context, and own_fp_inatomic()
      only restores FCSR and the 64-bit FP registers from it, missing MSACSR
      and the upper MSA registers. As a result, MSACSR and upper MSA register
      values left behind by the previous task on the current CPU can leak into
      the current task and cause unpredictable behavior while the MSA context
      is not initialized.
      
      Fixes: cc97ab23 ("MIPS: Simplify FP context initialization")
      Signed-off-by: Huang Pei <huangpei@loongson.cn>
      Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
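      To illustrate the shape of the missing step, here is a hedged sketch of
      loading a task's FP/MSA context onto the CPU; restore_fp_regs() and
      restore_msa_upper_regs() are placeholder names for this sketch, and the
      field layout is assumed rather than quoted from the kernel.

      	/* Sketch only: load the complete FP *and* MSA state of @tsk. */
      	static void own_fp_and_msa(struct task_struct *tsk)
      	{
      		/* FCSR and the 64-bit FP registers (the part done today). */
      		restore_fp_regs(&tsk->thread.fpu);

      		if (cpu_has_msa) {
      			/* Also write the MSA control register ... */
      			write_msa_csr(tsk->thread.fpu.msacsr);
      			/* ... and the upper 64 bits of every vector register,
      			 * so nothing from the CPU's previous owner stays live. */
      			restore_msa_upper_regs(&tsk->thread.fpu);
      		}
      	}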
    • x86/mm/32: Bring back vmalloc faulting on x86_32 · 4819e15f
      Joerg Roedel authored
      One cannot simply remove vmalloc faulting on x86-32. Upstream
      
      	commit: 7f0a002b ("x86/mm: remove vmalloc faulting")
      
      removed it on x86 altogether, because the arch_sync_kernel_mappings()
      interface had been introduced beforehand. This interface added
      synchronization of vmalloc/ioremap page-table updates to all
      page-tables in the system at creation time and was thought to make
      vmalloc faulting obsolete.
      
      But that assumption was incredibly naive.
      
      It turned out that there is a race window between the time the vmalloc
      or ioremap code establishes a mapping and the time it synchronizes
      this change to other page-tables in the system.
      
      During this race window another CPU or thread can establish a vmalloc
      mapping which uses the same intermediate page-table entries (e.g. PMD
      or PUD) and does no synchronization in the end, because it found all
      necessary mappings already present in the kernel reference page-table.
      
      But when these intermediate page-table entries have not yet been
      synchronized, the other CPU or thread will continue with a vmalloc
      address that is not yet mapped in the page-table it is currently using,
      causing an unhandled page fault and an oops like the one below:
      
      	BUG: unable to handle page fault for address: fe80c000
      	#PF: supervisor write access in kernel mode
      	#PF: error_code(0x0002) - not-present page
      	*pde = 33183067 *pte = a8648163
      	Oops: 0002 [#1] SMP
      	CPU: 1 PID: 13514 Comm: cve-2017-17053 Tainted: G
      	...
      	Call Trace:
      	 ldt_dup_context+0x66/0x80
      	 dup_mm+0x2b3/0x480
      	 copy_process+0x133b/0x15c0
      	 _do_fork+0x94/0x3e0
      	 __ia32_sys_clone+0x67/0x80
      	 __do_fast_syscall_32+0x3f/0x70
      	 do_fast_syscall_32+0x29/0x60
      	 do_SYSENTER_32+0x15/0x20
      	 entry_SYSENTER_32+0x9f/0xf2
      	EIP: 0xb7eef549
      
      So the arch_sync_kernel_mappings() interface is racy, but removing it
      would mean re-introducing the vmalloc_sync_all() interface, which is
      even more awful. Keep arch_sync_kernel_mappings() in place and catch
      the race condition in the page-fault handler instead.
      
      Do a partial revert of the above commit to get vmalloc faulting on
      x86-32 back in place.
      
      Fixes: 7f0a002b ("x86/mm: remove vmalloc faulting")
      Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lore.kernel.org/r/20200902155904.17544-1-joro@8bytes.org
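      For context, here is a condensed sketch of what such a vmalloc-fault
      path on x86-32 does (simplified: PAE details, huge-page handling and
      some checks are trimmed, and the function name is made up): walk the
      kernel reference page table (init_mm) and copy the missing PMD entry
      into the page table the faulting CPU is actually using.

      	/* Sketch only: handle a kernel-mode fault on a vmalloc address. */
      	static noinline int vmalloc_fault_sketch(unsigned long address)
      	{
      		pgd_t *pgd, *pgd_k;
      		p4d_t *p4d, *p4d_k;
      		pud_t *pud, *pud_k;
      		pmd_t *pmd, *pmd_k;
      		pte_t *pte_k;

      		if (address < VMALLOC_START || address >= VMALLOC_END)
      			return -1;	/* not a vmalloc address */

      		/* Active page table vs. the kernel reference page table. */
      		pgd   = (pgd_t *)__va(read_cr3_pa()) + pgd_index(address);
      		pgd_k = init_mm.pgd + pgd_index(address);
      		if (!pgd_present(*pgd_k))
      			return -1;

      		p4d   = p4d_offset(pgd, address);
      		p4d_k = p4d_offset(pgd_k, address);
      		pud   = pud_offset(p4d, address);
      		pud_k = pud_offset(p4d_k, address);
      		pmd   = pmd_offset(pud, address);
      		pmd_k = pmd_offset(pud_k, address);
      		if (!pmd_present(*pmd_k))
      			return -1;

      		/* Copy the entry that the racing mapping did not sync yet. */
      		if (!pmd_present(*pmd))
      			set_pmd(pmd, *pmd_k);

      		pte_k = pte_offset_kernel(pmd_k, address);
      		return pte_present(*pte_k) ? 0 : -1;
      	}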
    • x86/cmdline: Disable jump tables for cmdline.c · aef0148f
      Arvind Sankar authored
      When CONFIG_RETPOLINE is disabled, Clang uses a jump table for the
      switch statement in cmdline_find_option (jump tables are disabled when
      CONFIG_RETPOLINE is enabled). This function is called very early in boot
      from sme_enable() if CONFIG_AMD_MEM_ENCRYPT is enabled. At this time,
      the kernel is still executing out of the identity mapping, but the jump
      table will contain virtual addresses.
      
      Fix this by disabling jump tables for cmdline.c when AMD_MEM_ENCRYPT is
      enabled.
      
      Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lore.kernel.org/r/20200903023056.3914690-1-nivedita@alum.mit.edu
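      The change boils down to a per-object compiler flag; a hedged sketch of
      that shape (the exact Makefile location and guard are assumptions here,
      not quoted from the tree):

      	# Build cmdline.o without jump tables, so Clang emits compare/branch
      	# sequences instead of a table of absolute (virtual) addresses.
      	ifdef CONFIG_AMD_MEM_ENCRYPT
      	CFLAGS_cmdline.o += -fno-jump-tables
      	endif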
  4. 02 Sep, 2020: 7 commits
  5. 01 Sep, 2020: 1 commit