1. 01 Feb 2016: 1 commit
• x86/cpufeature: Use enum cpuid_leafs instead of magic numbers · 16aaa537
Authored by Huaitong Han
Most of the magic numbers in x86_capability[] have already been
converted to 'enum cpuid_leafs'; this patch converts the remaining
ones.
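As a hedged illustration (this particular assignment is not quoted
from the patch), the conversion replaces a raw index into
x86_capability[] with the corresponding named 'enum cpuid_leafs'
value:

  /* before: word 4 is addressed by a magic number */
  c->x86_capability[4] = cpuid_ecx(1);

  /* after: the named enum index makes the CPUID leaf explicit */
  c->x86_capability[CPUID_1_ECX] = cpuid_ecx(1);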
Signed-off-by: Huaitong Han <huaitong.han@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Hector Marco-Gisbert <hecmargi@upv.es>
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: lguest@lists.ozlabs.org
      Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1453750913-4781-3-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2. 29 Jan 2016: 3 commits
• x86/mm/pat: Avoid truncation when converting cpa->numpages to address · 74256377
Authored by Matt Fleming
      There are a couple of nasty truncation bugs lurking in the pageattr
      code that can be triggered when mapping EFI regions, e.g. when we pass
      a cpa->pgd pointer. Because cpa->numpages is a 32-bit value, shifting
left by PAGE_SHIFT will truncate the resultant address to 32 bits.
      
      Viorel-Cătălin managed to trigger this bug on his Dell machine that
      provides a ~5GB EFI region which requires 1236992 pages to be mapped.
      When calling populate_pud() the end of the region gets calculated
      incorrectly in the following buggy expression,
      
        end = start + (cpa->numpages << PAGE_SHIFT);
      
And only 188416 pages are mapped. Next, populate_pud() gets invoked
for a second time because of the loop in __change_page_attr_set_clr(),
only this time no pages get mapped, because shifting the remaining
number of pages (1048576) left by PAGE_SHIFT truncates to zero in the
32-bit value. At that point the loop in __change_page_attr_set_clr()
spins forever because we fail to make any forward progress.
      
      Hitting this bug depends very much on the virtual address we pick to
      map the large region at and how many pages we map on the initial run
      through the loop. This explains why this issue was only recently hit
      with the introduction of commit
      
        a5caa209 ("x86/efi: Fix boot crash by mapping EFI memmap
         entries bottom-up at runtime, instead of top-down")
      
      It's interesting to note that safe uses of cpa->numpages do exist in
      the pageattr code. If instead of shifting ->numpages we multiply by
      PAGE_SIZE, no truncation occurs because PAGE_SIZE is a UL value, and
      so the result is unsigned long.
      
      To avoid surprises when users try to convert very large cpa->numpages
      values to addresses, change the data type from 'int' to 'unsigned
      long', thereby making it suitable for shifting by PAGE_SHIFT without
      any type casting.
      
      The alternative would be to make liberal use of casting, but that is
      far more likely to cause problems in the future when someone adds more
      code and fails to cast properly; this bug was difficult enough to
      track down in the first place.
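To make the truncation concrete, here is a minimal user-space sketch
using the page count from the report above (the start address is
illustrative; PAGE_SHIFT is 12 as on x86):

  #include <stdio.h>

  #define PAGE_SHIFT 12

  int main(void)
  {
          int numpages = 1236992;             /* the old 32-bit type */
          unsigned long start = 0xffffffef00000000UL; /* made-up VA */

          /* the int shift overflows (formally undefined, in practice
           * it wraps), leaving only 188416 pages' worth of range */
          unsigned long bad_end = start + (numpages << PAGE_SHIFT);

          /* an unsigned long count shifts without truncation */
          unsigned long good_end = start +
                  ((unsigned long)numpages << PAGE_SHIFT);

          printf("bad:  %lx\ngood: %lx\n", bad_end, good_end);
          return 0;
  }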
Reported-and-tested-by: Viorel-Cătălin Răpițeanu <rapiteanu.catalin@gmail.com>
Acked-by: Borislav Petkov <bp@alien8.de>
      Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=110131
Link: http://lkml.kernel.org/r/1454067370-10374-1-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
• perf/x86: De-obfuscate code · 8f04b853
Authored by Peter Zijlstra
      Get rid of the 'onln' obfuscation.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
• perf/x86: Fix uninitialized value usage · e01d8718
Authored by Peter Zijlstra
      When calling intel_alt_er() with .idx != EXTRA_REG_RSP_* we will not
      initialize alt_idx and then use this uninitialized value to index an
      array.
      
Even when that is not fatal, it can result in an infinite loop in its
caller, __intel_shared_reg_get_constraints(), with IRQs disabled.

Alternative failure modes are random memory corruption due to
overrunning the cpuc->shared_regs->regs[] array, which manifests as
either get_constraints or put_constraints doing weird stuff.

It only took 6 hours of painful debugging to find this. Neither GCC
nor Smatch flagged this bug.
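A condensed sketch of the bug pattern (simplified from the
description above, not the literal kernel function; the fix amounts
to initializing alt_idx from idx):

  static int intel_alt_er(int idx, u64 config)
  {
          int alt_idx;    /* BUG: assigned only on the branches below */

          if (idx == EXTRA_REG_RSP_0)
                  alt_idx = EXTRA_REG_RSP_1;

          if (idx == EXTRA_REG_RSP_1)
                  alt_idx = EXTRA_REG_RSP_0;

          /* for any other idx, garbage is returned and later used to
           * index cpuc->shared_regs->regs[] */
          return alt_idx;
  }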
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: ae3f011f ("perf/x86/intel: Fix SLM MSR_OFFCORE_RSP1 valid_mask")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
3. 28 Jan 2016: 4 commits
4. 27 Jan 2016: 3 commits
5. 26 Jan 2016: 16 commits
6. 25 Jan 2016: 6 commits
• arm64: Fix an enum typo in mm/dump.c · b3122023
Authored by Masanari Iida
      This patch fixes a typo in mm/dump.c:
      "MODUELS_END_NR" should be "MODULES_END_NR".
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
• arm64: Honour !PTE_WRITE in set_pte_at() for kernel mappings · ac15bd63
Authored by Catalin Marinas
      Currently, set_pte_at() only checks the software PTE_WRITE bit for user
      mappings when it sets or clears the hardware PTE_RDONLY accordingly. The
      kernel ptes are written directly without any modification, relying
      solely on the protection bits in macros like PAGE_KERNEL. However,
      modifying kernel pte attributes via pte_wrprotect() would be ignored by
      set_pte_at(). Since pte_wrprotect() does not set PTE_RDONLY (it only
      clears PTE_WRITE), the new permission is not taken into account.
      
      This patch changes set_pte_at() to adjust the read-only permission for
      kernel ptes as well. As a side effect, existing PROT_* definitions used
      for kernel ioremap*() need to include PTE_DIRTY | PTE_WRITE.
      
(additionally, a whitespace fix for PAGE_KERNEL_ROX)
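A simplified sketch of the resulting behaviour (the real function
also handles dirty tracking for user ptes; helper names follow the
arm64 pgtable.h conventions):

  static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte)
  {
          if (pte_valid(pte)) {
                  /* honour the software PTE_WRITE bit for kernel
                   * ptes too, not just for user mappings */
                  if (pte_write(pte))
                          pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
                  else
                          pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
          }
          set_pte(ptep, pte);
  }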
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
• arm64: kernel: fix architected PMU registers unconditional access · f436b2ac
Authored by Lorenzo Pieralisi
      The Performance Monitors extension is an optional feature of the
      AArch64 architecture, therefore, in order to access Performance
      Monitors registers safely, the kernel should detect the architected
      PMU unit presence through the ID_AA64DFR0_EL1 register PMUVer field
      before accessing them.
      
      This patch implements a guard by reading the ID_AA64DFR0_EL1 register
      PMUVer field to detect the architected PMU presence and prevent accessing
      PMU system registers if the Performance Monitors extension is not
      implemented in the core.
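The in-tree guard is written in assembly; a C rendition of the logic,
assuming the read_cpuid()/write_sysreg() style of helpers, might look
like this:

  static void reset_pmuserenr_el0(void)
  {
          u64 dfr0 = read_cpuid(ID_AA64DFR0_EL1);

          /* ID_AA64DFR0_EL1.PMUVer is bits [11:8]; 0 means the
           * architected PMU is not implemented */
          if (((dfr0 >> 8) & 0xf) == 0)
                  return;         /* skip PMU system register accesses */

          write_sysreg(0, pmuserenr_el0); /* safe: PMU is present */
  }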
      
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: <stable@vger.kernel.org>
      Fixes: 60792ad3 ("arm64: kernel: enforce pmuserenr_el0 initialization and restore")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
• arm64: kasan: ensure that the KASAN zero page is mapped read-only · 7b1af979
Authored by Ard Biesheuvel
      When switching from the early KASAN shadow region, which maps the
      entire shadow space read-write, to the permanent KASAN shadow region,
      which uses a zero page to shadow regions that are not subject to
      instrumentation, the lowest level table kasan_zero_pte[] may be
      reused unmodified, which means that the mappings of the zero page
      that it contains will still be read-write.
      
So update it explicitly to map the zero page read-only when we
activate the permanent mapping.
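The update boils down to rewriting every entry of kasan_zero_pte[]
with a read-only pte, roughly (a sketch assuming the arm64 helpers
pfn_pte(), virt_to_pfn() and PAGE_KERNEL_RO):

  int i;

  /* remap each zero-page entry of the reused bottom-level table
   * read-only before switching to the permanent shadow region */
  for (i = 0; i < PTRS_PER_PTE; i++)
          set_pte(&kasan_zero_pte[i],
                  pfn_pte(virt_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));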
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
• arm64: hide __efistub_ aliases from kallsyms · 75feee3d
Authored by Ard Biesheuvel
      Commit e8f3010f ("arm64/efi: isolate EFI stub from the kernel
      proper") isolated the EFI stub code from the kernel proper by prefixing
      all of its symbols with __efistub_, and selectively allowing access to
      core kernel symbols from the stub by emitting __efistub_ aliases for
      functions and variables that the stub can access legally.
      
      As an unintended side effect, these aliases are emitted into the
      kallsyms symbol table, which means they may turn up in backtraces,
      e.g.,
      
        ...
        PC is at __efistub_memset+0x108/0x200
        LR is at fixup_init+0x3c/0x48
        ...
        [<ffffff8008328608>] __efistub_memset+0x108/0x200
        [<ffffff8008094dcc>] free_initmem+0x2c/0x40
        [<ffffff8008645198>] kernel_init+0x20/0xe0
        [<ffffff8008085cd0>] ret_from_fork+0x10/0x40
      
      The backtrace in question has nothing to do with the EFI stub, but
      simply returns one of the several aliases of memset() that have been
      recorded in the kallsyms table. This is undesirable, since it may
      suggest to people who are not aware of this that the issue they are
      seeing is somehow EFI related.
      
So hide the __efistub_ aliases from kallsyms by explicitly emitting
them as absolute linker symbols. The distinction between absolute and
section-relative symbols is completely irrelevant to these definitions
and to the final link we are performing when they are taken into
account (the distinction only matters for symbols defined inside a
section definition when performing a partial link), so the resulting
values are identical to the original ones. Since absolute symbols are
ignored by kallsyms, this results in these values being omitted from
its symbol table.
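Concretely, each alias can be wrapped in a macro that forces an
absolute value, along these lines (a sketch of the approach; the two
aliases shown are illustrative):

  /* ABSOLUTE() makes the linker emit the symbol as non-relative,
   * so scripts/kallsyms ignores it */
  #define KALLSYMS_HIDE(sym)      ABSOLUTE(sym)

  __efistub_memset = KALLSYMS_HIDE(__memset);
  __efistub_memcpy = KALLSYMS_HIDE(__memcpy);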
      
      After this patch, the backtrace generated from the same address looks
      like this:
        ...
        PC is at __memset+0x108/0x200
        LR is at fixup_init+0x3c/0x48
        ...
        [<ffffff8008328608>] __memset+0x108/0x200
        [<ffffff8008094dcc>] free_initmem+0x2c/0x40
        [<ffffff8008645198>] kernel_init+0x20/0xe0
        [<ffffff8008085cd0>] ret_from_fork+0x10/0x40
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
• powerpc/mm: Allow user space to map rtas_rmo_buf · e256caa7
Authored by Vasant Hegde
With commit 90a545e9 ("restrict /dev/mem to idle io memory ranges"),
mapping rtas_rmo_buf from user space fails, and hence we are no
longer able to make the RTAS syscall.

This patch calls page_is_rtas_user_buf before calling
iomem_is_exclusive in devmem_is_allowed(), which allows user space to
map rtas_rmo_buf again and makes the RTAS syscall work.
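The resulting ordering in the powerpc devmem_is_allowed() looks
roughly like this (a sketch based on the description; the trailing
RAM check is assumed unchanged):

  int devmem_is_allowed(unsigned long pfn)
  {
          /* allow the RTAS user buffer before the exclusivity check */
          if (page_is_rtas_user_buf(pfn))
                  return 1;
          if (iomem_is_exclusive(PFN_PHYS(pfn)))
                  return 0;
          if (!page_is_ram(pfn))
                  return 1;
          return 0;
  }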
Reported-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7. 24 Jan 2016: 7 commits