1. 09 Jun 2017 (1 commit)
  2. 24 May 2017 (1 commit)
  3. 27 Apr 2017 (2 commits)
  4. 23 Apr 2017 (1 commit)
    • Revert "x86/mm/gup: Switch GUP to the generic get_user_pages_fast() implementation" · 6dd29b3d
      Ingo Molnar committed
      This reverts commit 2947ba05.
      
      Dan Williams reported dax-pmem kernel warnings with the following signature:
      
         WARNING: CPU: 8 PID: 245 at lib/percpu-refcount.c:155 percpu_ref_switch_to_atomic_rcu+0x1f5/0x200
         percpu ref (dax_pmem_percpu_release [dax_pmem]) <= 0 (0) after switching to atomic
      
      ... and bisected it to this commit, which suggests possible memory corruption
      caused by the x86 fast-GUP conversion.
      
      He also pointed out:
      
       "
        This is similar to the backtrace when we were not properly handling
        pud faults and was fixed with this commit: 220ced16 "mm: fix
        get_user_pages() vs device-dax pud mappings"
      
        I've found some missing _devmap checks in the generic
        get_user_pages_fast() path, but this does not fix the regression
        [...]
       "
      
      So given that there are known bugs, and a pretty robust-looking bisection
      points to this commit suggesting that there are unknown bugs in the
      conversion as well, revert it for the time being - we'll re-try in v4.13.
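      
      For illustration, here is a hedged sketch of the kind of _devmap check a
      generic gup_pte_range()-style PTE walk needs; the surrounding variables
      (pte, pgmap) are assumed context and this is not the exact missing hunk:
      
          /* Inside the PTE loop of a fast-GUP walk (illustrative only). */
          if (pte_devmap(pte)) {
                  /* Pin the ZONE_DEVICE pagemap, or give up ... */
                  pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
                  if (unlikely(!pgmap))
                          return 0;       /* ... and fall back to slow GUP */
          } else if (pte_special(pte)) {
                  return 0;
          }
      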
      Reported-by: Dan Williams <dan.j.williams@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: aneesh.kumar@linux.vnet.ibm.com
      Cc: dann.frazier@canonical.com
      Cc: dave.hansen@intel.com
      Cc: steve.capper@linaro.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6dd29b3d
  5. 18 Apr 2017 (1 commit)
    • x86: Enable KASLR by default · 6807c846
      Ingo Molnar committed
      KASLR is mature (and important) enough to be enabled by default on x86.
      
      Also enable it by default in the defconfigs.
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: dan.j.williams@intel.com
      Cc: dave.jiang@intel.com
      Cc: dyoung@redhat.com
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6807c846
  6. 04 Apr 2017 (1 commit)
  7. 30 Mar 2017 (1 commit)
  8. 28 Mar 2017 (1 commit)
  9. 24 Mar 2017 (1 commit)
  10. 18 Mar 2017 (1 commit)
  11. 13 Mar 2017 (1 commit)
    • x86/mm: Introduce mmap_compat_base() for 32-bit mmap() · 1b028f78
      Dmitry Safonov committed
      mmap() uses a base address, from which it starts to look for a free space
      for allocation.
      
      The base address is stored in mm->mmap_base, which is calculated during
      exec(). The address depends on the task's size, the stack rlimit and ASLR
      randomization. The task size and the number of random bits differ between
      64-bit and 32-bit applications.
      
      Because the base address is fixed, an mmap() from a compat (32-bit)
      syscall issued by a 64-bit task will return an address which is based
      on the 64-bit base address and does not fit into the 32-bit address space
      (4GB). The returned pointer is truncated to 32 bits, which results in an
      invalid address.
      
      To solve this, store a separate compat address base plus a compat legacy
      address base in mm_struct. These bases are calculated at exec() time and
      can be used later to serve a 32-bit compat mmap() issued by a 64-bit
      application.
      
      As a consequence of this change, 32-bit applications issuing a 64-bit
      syscall (after doing a long jump) will now get a 64-bit mapping. Before
      this change, 32-bit applications always got a 32-bit mapping.
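      
      As a hedged sketch (not the exact code added by this patch), base
      selection after this change looks roughly like the helper below;
      in_compat_syscall() and the mm fields named in this changelog are taken
      from context, while the helper name itself is made up:
      
          /* Illustrative: pick the mmap base matching the syscall width. */
          static unsigned long pick_mmap_base(struct mm_struct *mm, bool legacy)
          {
                  if (in_compat_syscall())    /* 32-bit mmap() from a 64-bit task */
                          return legacy ? mm->mmap_compat_legacy_base
                                        : mm->mmap_compat_base;
          
                  return legacy ? mm->mmap_legacy_base : mm->mmap_base;
          }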
      
      [ tglx: Massaged changelog and added a comment ]
      Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
      Cc: 0x7f454c46@gmail.com
      Cc: linux-mm@kvack.org
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Link: http://lkml.kernel.org/r/20170306141721.9188-4-dsafonov@virtuozzo.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1b028f78
  12. 08 Mar 2017 (1 commit)
    • stacktrace/x86: add function for detecting reliable stack traces · af085d90
      Josh Poimboeuf committed
      For live patching and possibly other use cases, a stack trace is only
      useful if it can be assured that it's completely reliable.  Add a new
      save_stack_trace_tsk_reliable() function to achieve that.
      
      Note that if the target task isn't the current task, and the target task
      is allowed to run, then it could be writing the stack while the unwinder
      is reading it, resulting in possible corruption.  So the caller of
      save_stack_trace_tsk_reliable() must ensure that the task is either
      'current' or inactive.
      
      save_stack_trace_tsk_reliable() relies on the x86 unwinder's detection
      of pt_regs on the stack.  If the pt_regs are not user-mode registers
      from a syscall, then they indicate an in-kernel interrupt or exception
      (e.g. preemption or a page fault), in which case the stack is considered
      unreliable due to the nature of frame pointers.
      
      It also relies on the x86 unwinder's detection of other issues, such as:
      
      - corrupted stack data
      - stack grows the wrong way
      - stack walk doesn't reach the bottom
      - user didn't provide a large enough entries array
      
      Such issues are reported by checking unwind_error() and !unwind_done().
      
      Also add CONFIG_HAVE_RELIABLE_STACKTRACE so arch-independent code can
      determine at build time whether the function is implemented.
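      
      A hedged usage sketch from a would-be caller such as the live-patching
      code; the struct stack_trace interface is assumed and the buffer size is
      arbitrary:
      
          /* Only trust the trace if the unwinder reports it as reliable. */
          static int check_task_stack(struct task_struct *task)
          {
                  unsigned long entries[64];
                  struct stack_trace trace = {
                          .entries     = entries,
                          .max_entries = ARRAY_SIZE(entries),
                  };
          
                  /* task must be 'current' or known to be inactive (see above). */
                  if (save_stack_trace_tsk_reliable(task, &trace))
                          return -EINVAL;     /* stack cannot be trusted */
          
                  /* ... inspect trace.entries[0 .. trace.nr_entries - 1] ... */
                  return 0;
          }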
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Miroslav Benes <mbenes@suse.cz>
      Acked-by: Ingo Molnar <mingo@kernel.org>	# for the x86 changes
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      af085d90
  13. 25 Feb 2017 (1 commit)
  14. 22 Feb 2017 (1 commit)
    • arch: add ARCH_HAS_SET_MEMORY config · d2852a22
      Daniel Borkmann committed
      Currently, there's no good way to test for the presence of
      set_memory_ro/rw/x/nx() helpers implemented by archs such as
      x86, arm, arm64 and s390.
      
      There's DEBUG_SET_MODULE_RONX and DEBUG_RODATA, but neither really
      reflects that: set_memory_*() are also available even when
      DEBUG_SET_MODULE_RONX is turned off, and DEBUG_RODATA is set by parisc,
      which doesn't implement the above functions. Thus, add ARCH_HAS_SET_MEMORY,
      which is selected by the mentioned archs, so that generic code can test
      against it.
      
      This also allows later on to move DEBUG_SET_MODULE_RONX out of
      the arch specific Kconfig to define it only once depending on
      ARCH_HAS_SET_MEMORY.
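      
      A hedged example of the kind of generic-code test this enables; the
      wrapper function is hypothetical and the set_memory_*() declarations come
      from the arch headers:
      
          /* Hypothetical generic code: tighten permissions only where the
           * architecture actually provides the set_memory_*() helpers. */
          static void protect_pages(unsigned long addr, int numpages)
          {
          #ifdef CONFIG_ARCH_HAS_SET_MEMORY
                  set_memory_ro(addr, numpages);
                  set_memory_x(addr, numpages);
          #endif
          }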
      Suggested-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d2852a22
  15. 08 Feb 2017 (1 commit)
  16. 07 Feb 2017 (1 commit)
  17. 27 Jan 2017 (1 commit)
  18. 24 Jan 2017 (1 commit)
  19. 11 Jan 2017 (1 commit)
  20. 17 Dec 2016 (1 commit)
  21. 30 Nov 2016 (2 commits)
    • sched/x86: Make CONFIG_SCHED_MC_PRIO=y easier to enable · 0a21fc12
      Ingo Molnar committed
      Right now CONFIG_SCHED_MC_PRIO has X86_INTEL_PSTATE as a dependency,
      which is not enabled by default and which hides the CONFIG_SCHED_MC_PRIO
      hardware-enabling feature.
      
      Select X86_INTEL_PSTATE instead, plus its dependency (CPU_FREQ), if the
      user enables CONFIG_SCHED_MC_PRIO=y.
      
      (Also align the CONFIG_SCHED_MC_PRIO Kconfig help text in standard style.)
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: bp@suse.de
      Cc: jolsa@redhat.com
      Cc: linux-acpi@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: rjw@rjwysocki.net
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0a21fc12
    • sched/x86: Change CONFIG_SCHED_ITMT to CONFIG_SCHED_MC_PRIO · de966cf4
      Tim Chen committed
      Rename CONFIG_SCHED_ITMT for Intel Turbo Boost Max Technology 3.0
      to CONFIG_SCHED_MC_PRIO. This makes the configuration extensible in
      the future to other architectures that wish to similarly establish
      CPU core priority support in the scheduler.
      
      The description in Kconfig is updated to reflect this change, with
      added details for better clarity. The configuration is explicitly
      default-y, to enable the feature on CPUs that have it.
      
      It has no effect on non-TBM3 CPUs.
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: bp@suse.de
      Cc: jolsa@redhat.com
      Cc: linux-acpi@vger.kernel.org
      Cc: linux-pm@vger.kernel.org
      Cc: rjw@rjwysocki.net
      Link: http://lkml.kernel.org/r/2b2ee29d93e3f162922d72d0165a1405864fbb23.1480444902.git.tim.c.chen@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      de966cf4
  22. 25 Nov 2016 (1 commit)
    • x86: Enable Intel Turbo Boost Max Technology 3.0 · 5e76b2ab
      Tim Chen committed
      On platforms supporting Intel Turbo Boost Max Technology 3.0, the maximum
      turbo frequencies of some cores in a CPU package may be higher than for
      the other cores in the same package.  In that case, better performance
      (and possibly lower energy consumption as well) can be achieved by
      making the scheduler prefer to run tasks on the CPUs with higher max
      turbo frequencies.
      
      To that end, set up a core priority metric to abstract the core
      preferences based on the maximum turbo frequency.  In that metric,
      the cores with higher maximum turbo frequencies are higher-priority
      than the other cores in the same package and that causes the scheduler
      to favor them when making load-balancing decisions using the asymmetric
      packing approach.  At the same time, the priority of SMT threads with a
      higher CPU number is reduced so as to avoid scheduling tasks on all of
      the threads that belong to a favored core before all of the other cores
      have been given a task to run.
      
      The priority metric will be initialized by the P-state driver with the
      help of the sched_set_itmt_core_prio() function.  The P-state driver
      will also determine whether or not ITMT is supported by the platform
      and will call sched_set_itmt_support() to indicate that.
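      
      A hedged sketch of how a P-state driver might wire this up; the two
      sched_set_itmt_*() calls are named in this changelog, while
      read_max_turbo_ratio() is a made-up placeholder:
      
          /* Illustrative: enumerate per-core priorities, then enable ITMT. */
          static void setup_core_priorities(void)
          {
                  int cpu;
          
                  for_each_online_cpu(cpu) {
                          /* Higher max turbo frequency => higher priority. */
                          sched_set_itmt_core_prio(read_max_turbo_ratio(cpu), cpu);
                  }
          
                  /* Tell the scheduler the priorities are now valid. */
                  sched_set_itmt_support();
          }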
      Co-developed-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Co-developed-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: linux-pm@vger.kernel.org
      Cc: peterz@infradead.org
      Cc: jolsa@redhat.com
      Cc: rjw@rjwysocki.net
      Cc: linux-acpi@vger.kernel.org
      Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Cc: bp@suse.de
      Link: http://lkml.kernel.org/r/cd401ccdff88f88c8349314febdc25d51f7c48f7.1479844244.git.tim.c.chen@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5e76b2ab
  23. 22 Nov 2016 (1 commit)
    • x86/mce/AMD: Add system physical address translation for AMD Fam17h · f5382de9
      Yazen Ghannam committed
      The Unified Memory Controllers (UMCs) on Fam17h log a normalized address
      in their MCA_ADDR registers. We need to convert that normalized address
      to a system physical address in order to support a few facilities:
      
      1) To offline poisoned pages in DRAM proactively in the deferred error
         handler.
      
      2) To print sysaddr and page info for DRAM ECC errors in EDAC.
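      
      The changelog does not spell out the converter's interface; as a hedged
      sketch, a caller in the deferred error handler might look roughly like
      this, with the helper name and prototype being assumptions:
      
          u64 sys_addr;
          
          /* norm_addr comes from MCA_ADDR; nid/umc identify the logging UMC.
           * Conversion failure is signalled via the return value only. */
          if (umc_normaddr_to_sysaddr(norm_addr, nid, umc, &sys_addr))
                  return;
          
          /* sys_addr can now be handed to the page-offlining / EDAC code. */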
      
      [ Boris: fixes/cleanups ontop:
      
        * hi_addr_offset = 0 - no need for that branch. Stick it all under the
          HiAddrOffsetEn case. It confines hi_addr_offset's declaration too.
      
        * Move variables to the innermost scope they're used at so that we save
          on stack and not blow it up immediately on function entry.
      
        * Do not modify *sys_addr prematurely - we do not want to exit early
          having already partially modified *sys_addr, which callers would then
          see. We either convert to a sys_addr or we don't do anything, and we
          signal that with the retval of the function.
      
        * Rename label out -> out_err - because it is the error path.
      
        * No need to pr_err in the conversion-failed case: imagine a
          sparsely-populated machine with UMCs which don't have DIMMs. Callers
          should look at the retval instead and issue a printk only when really
          necessary. No need for useless info in dmesg.
      
        * s/temp_reg/tmp/ and other variable-name shortening => shorter code.
      
        * Use BIT() everywhere.
      
        * Make error messages more informative.
      
        * Small build fix for the !CONFIG_X86_MCE_AMD case.
      
        * ... and more minor cleanups.
      ]
      Signed-off-by: Yazen Ghannam <Yazen.Ghannam@amd.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Aravind Gopalakrishnan <aravindksg.lkml@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: linux-edac <linux-edac@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20161122111133.mjzpvzhf7o7yl2oa@pd.tnic
      [ Typo fixes. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f5382de9
  24. 16 Nov 2016 (1 commit)
  25. 15 Nov 2016 (6 commits)
  26. 27 Oct 2016 (1 commit)
    • x86/intel_rdt: Add CONFIG, Makefile, and basic initialization · 78e99b4a
      Fenghua Yu committed
      Introduce CONFIG_INTEL_RDT_A (default: no, dependent on CPU_SUP_INTEL) to
      control inclusion of Resource Director Technology in the build.
      
      A simple init() routine just checks which features are present. If they
      are, it pr_info()s a one-line summary for each feature for now.
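      
      A hedged sketch of what such an init routine can look like; the
      X86_FEATURE_* flag names are assumptions about the feature set:
      
          /* Illustrative late-init probe, modelled on the description above. */
          static int __init intel_rdt_late_init(void)
          {
                  if (boot_cpu_has(X86_FEATURE_CAT_L3))
                          pr_info("Intel RDT L3 cache allocation detected\n");
          
                  if (boot_cpu_has(X86_FEATURE_CDP_L3))
                          pr_info("Intel RDT code/data prioritization detected\n");
          
                  return 0;
          }
          late_initcall(intel_rdt_late_init);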
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com>
      Cc: "Tony Luck" <tony.luck@intel.com>
      Cc: "David Carrillo-Cisneros" <davidcc@google.com>
      Cc: "Sai Prakhya" <sai.praneeth.prakhya@intel.com>
      Cc: "Peter Zijlstra" <peterz@infradead.org>
      Cc: "Stephane Eranian" <eranian@google.com>
      Cc: "Dave Hansen" <dave.hansen@intel.com>
      Cc: "Shaohua Li" <shli@fb.com>
      Cc: "Nilay Vaish" <nilayvaish@gmail.com>
      Cc: "Vikas Shivappa" <vikas.shivappa@linux.intel.com>
      Cc: "Ingo Molnar" <mingo@elte.hu>
      Cc: "Borislav Petkov" <bp@suse.de>
      Cc: "H. Peter Anvin" <h.peter.anvin@intel.com>
      Link: http://lkml.kernel.org/r/1477142405-32078-7-git-send-email-fenghua.yu@intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      78e99b4a
  27. 24 Oct 2016 (1 commit)
  28. 08 Oct 2016 (2 commits)
    • atomic64: no need for CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE · 51a02124
      Vineet Gupta committed
      This came to light when implementing native 64-bit atomics for ARCv2.
      
      The atomic64 self-test code uses CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
      to check whether atomic64_dec_if_positive() is available. It seems it
      was needed when not every arch defined it. However, as of the current
      code the Kconfig option seems needless:
      
       - for CONFIG_GENERIC_ATOMIC64 it is auto-enabled in lib/Kconfig and a
         generic definition of the API is present in lib/atomic64.c
       - arches with native 64-bit atomics select it in arch/*/Kconfig and
         define the API in their headers
      
      So I see no point in keeping the Kconfig option.
      
      Compile tested for:
       - blackfin (CONFIG_GENERIC_ATOMIC64)
       - x86 (!CONFIG_GENERIC_ATOMIC64)
       - ia64
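      
      For reference, a hedged usage sketch of the API whose availability the
      self-test was gating on (the token-counter scenario is made up):
      
          /* Illustrative "take a token if any remain" pattern. */
          static atomic64_t budget = ATOMIC64_INIT(16);
          
          static bool charge_one(void)
          {
                  /* Returns the old value minus 1; a negative result means the
                   * counter was already 0 and was left untouched. */
                  return atomic64_dec_if_positive(&budget) >= 0;
          }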
      
      Link: http://lkml.kernel.org/r/1473703083-8625-3-git-send-email-vgupta@synopsys.com
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Zhaoxiu Zeng <zhaoxiu.zeng@gmail.com>
      Cc: Linus Walleij <linus.walleij@linaro.org>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Ming Lin <ming.l@ssi.samsung.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      51a02124
    • mm/hugetlb: introduce ARCH_HAS_GIGANTIC_PAGE · 461a7184
      Yisheng Xie committed
      Avoid the ifdef getting pretty unwieldy if many arches support gigantic
      pages. No functional change with this patch.
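      
      A hedged sketch of the pattern this enables in generic code (the helper
      shown is a placeholder, not taken from the patch):
      
          /* mm code can now use one arch-neutral guard instead of listing
           * every architecture that supports gigantic pages. */
          #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
          static inline bool gigantic_pages_supported(void) { return true; }
          #else
          static inline bool gigantic_pages_supported(void) { return false; }
          #endif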
      
      Link: http://lkml.kernel.org/r/1475227569-63446-2-git-send-email-xieyisheng1@huawei.com
      Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
      Suggested-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: Hanjun Guo <guohanjun@huawei.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Rob Herring <robh+dt@kernel.org>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      461a7184
  29. 05 Oct 2016 (1 commit)
  30. 30 Sep 2016 (1 commit)
  31. 23 Sep 2016 (1 commit)
  32. 15 Sep 2016 (1 commit)