1. 10 Nov 2015: 18 commits
  2. 07 Nov 2015: 6 commits
• arch/x86/kernel/cpu/perf_event_msr.c: use sign_extend64() for sign extension · 78e3c795
Committed by Martin Kepplinger
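For reference, a minimal user-space mirror of what sign_extend64() does (a sketch modelled on the kernel helper, not a verbatim quote of bitops.h; the example value is invented):

  #include <assert.h>
  #include <stdint.h>

  /* Sketch: extend the sign from bit "index" to the full 64-bit width,
   * the way the kernel helper does. */
  static inline int64_t sign_extend64(uint64_t value, int index)
  {
          uint8_t shift = 63 - index;
          return (int64_t)(value << shift) >> shift;
  }

  int main(void)
  {
          /* A 48-bit counter value with bit 47 set reads back as negative. */
          assert(sign_extend64(0x800000000000ULL, 47) == -0x800000000000LL);
          return 0;
  }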
Signed-off-by: Martin Kepplinger <martin.kepplinger@theobroma-systems.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: George Spelvin <linux@horizon.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Maxime Coquelin <maxime.coquelin@st.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      78e3c795
• mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep... · d0164adc
Committed by Mel Gorman
      mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
have access to one of two watermarks lower than "min", which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
were available.  Some have abused __GFP_WAIT, leading to a situation where
an optimistic allocation with a fallback option can access atomic
      reserves.
      
This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
redefined to mean a caller that is willing to enter direct reclaim and
wake kswapd for background reclaim.
      
This patch then converts a number of sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
o Callers that are checking if they are non-blocking should use the
  helper gfpflags_allow_blocking() where possible (a short sketch follows
  at the end of this message).  This is because checking for __GFP_WAIT as
  was done historically can now trigger false positives.  Some exceptions
  like dm-crypt.c exist where the code intent is clearer if
  __GFP_DIRECT_RECLAIM is used instead of the helper due to flag
  manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
The first key hazard to watch out for is callers that removed __GFP_WAIT
and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
now wish to specify __GFP_KSWAPD_RECLAIM.  In most cases it is almost
certainly harmless if it's missed, as other activity will wake kswapd.
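As a minimal kernel-style sketch of the gfpflags_allow_blocking() check
mentioned above (my_alloc_buffer() and MY_BUF_SIZE are hypothetical names
used only for illustration):

  #include <linux/gfp.h>
  #include <linux/slab.h>

  #define MY_BUF_SIZE 256	/* hypothetical size, for illustration only */

  /* Sketch: ask the helper whether blocking is allowed instead of testing
   * __GFP_WAIT directly, which after this patch can report "may block"
   * for callers that only wake kswapd. */
  static void *my_alloc_buffer(gfp_t gfp)
  {
          if (gfpflags_allow_blocking(gfp))
                  return kmalloc(MY_BUF_SIZE, gfp);	/* may enter direct reclaim */

          /* Non-blocking path: no direct reclaim was requested. */
          return kmalloc(MY_BUF_SIZE, gfp | __GFP_NOWARN);
  }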
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0164adc
• um: Simplify STUB_DATA loading · 1b2411c2
Committed by Richard Weinberger
As long as STUB_DATA fits into 32 bits we can use a plain mov.
If it grows at some point in the future, we will switch to movabsq.
In any case the code is smaller and easier to read than the current one.
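Roughly the distinction being made, as a hedged user-space illustration of
x86_64 immediate sizes (not the actual UML stub source):

  #include <stdio.h>

  /* Sketch: a constant that fits in 32 bits can be loaded with a plain
   * 32-bit mov (which zero-extends into the full register); a constant
   * needing all 64 bits requires the longer movabsq encoding. */
  int main(void)
  {
          unsigned long small, large;

          asm volatile("movl $0x12345678, %k0" : "=r"(small));
          asm volatile("movabsq $0x1122334455667788, %0" : "=r"(large));

          printf("%lx %lx\n", small, large);
          return 0;
  }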
Signed-off-by: Richard Weinberger <richard@nod.at>
      1b2411c2
• um: Remove dead symbol from i386 syscall stub · 246d254f
Committed by Richard Weinberger
syscall_stub is no longer used anywhere.
Signed-off-by: Richard Weinberger <richard@nod.at>
      246d254f
• um: Remove dead code from x86_64 syscall stub · ec2c6c01
Committed by Richard Weinberger
syscall_stub is dead code, as um uses only batch_syscall_stub.
Signed-off-by: Richard Weinberger <richard@nod.at>
      ec2c6c01
• x86: don't make DEBUG_WX default to 'y' even with DEBUG_RODATA · 54727e6e
Committed by Linus Torvalds
      It turns out that we still have issues with the EFI memory map that ends
      up polluting our kernel page tables with writable executable pages.
      
That will get sorted out, but in the meantime let's not have the scary
complaint about them on by default.  The code is useful for
      developers, but not ready for end user testing yet.
Acked-by: Borislav Petkov <bp@alien8.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      54727e6e
  3. 06 Nov 2015: 4 commits
• livepatch: Fix crash with !CONFIG_DEBUG_SET_MODULE_RONX · e2391a2d
Committed by Josh Poimboeuf
      When loading a patch module on a kernel with
      !CONFIG_DEBUG_SET_MODULE_RONX, the following crash occurs:
      
        [  205.988776] livepatch: enabling patch 'kpatch_meminfo_string'
        [  205.989829] BUG: unable to handle kernel paging request at ffffffffa08d2fc0
        [  205.989863] IP: [<ffffffff8154fecb>] do_init_module+0x8c/0x1ba
        [  205.989888] PGD 1a10067 PUD 1a11063 PMD 7bcde067 PTE 3740e161
        [  205.989915] Oops: 0003 [#1] SMP
        [  205.990187] CPU: 2 PID: 14570 Comm: insmod Tainted: G           O  K 4.1.12
        [  205.990214] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
        [  205.990249] task: ffff8800374aaa90 ti: ffff8800794b8000 task.ti: ffff8800794b8000
        [  205.990276] RIP: 0010:[<ffffffff8154fecb>]  [<ffffffff8154fecb>] do_init_module+0x8c/0x1ba
        [  205.990307] RSP: 0018:ffff8800794bbd58  EFLAGS: 00010246
        [  205.990327] RAX: 0000000000000000 RBX: ffffffffa08d2fc0 RCX: 0000000000000000
        [  205.990356] RDX: 01ffff8000000080 RSI: 0000000000000000 RDI: ffffffff81a54b40
        [  205.990382] RBP: ffff88007b4c4d80 R08: 0000000000000007 R09: 0000000000000000
        [  205.990408] R10: 0000000000000008 R11: ffffea0001f18840 R12: 0000000000000000
        [  205.990433] R13: 0000000000000001 R14: ffffffffa08d2fc0 R15: ffff88007bd0bc40
        [  205.990459] FS:  00007f1128fbc700(0000) GS:ffff88007fc80000(0000) knlGS:0000000000000000
        [  205.990488] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        [  205.990509] CR2: ffffffffa08d2fc0 CR3: 000000002606e000 CR4: 00000000001406e0
        [  205.990536] Stack:
        [  205.990545]  ffff8800794bbec8 0000000000000001 ffffffffa08d3010 ffffffff810ecea9
        [  205.990576]  ffffffff810e8e40 000000000005f360 ffff88007bd0bc50 ffffffffa08d3240
        [  205.990608]  ffffffffa08d52c0 ffffffffa08d3210 ffff8800794bbed8 ffff8800794bbf1c
        [  205.990639] Call Trace:
        [  205.990651]  [<ffffffff810ecea9>] ? load_module+0x1e59/0x23a0
        [  205.990672]  [<ffffffff810e8e40>] ? store_uevent+0x40/0x40
        [  205.990693]  [<ffffffff810e99b5>] ? copy_module_from_fd.isra.49+0xb5/0x140
        [  205.990718]  [<ffffffff810ed5bd>] ? SyS_finit_module+0x7d/0xa0
        [  205.990741]  [<ffffffff81556832>] ? system_call_fastpath+0x16/0x75
        [  205.990763] Code: f9 00 00 00 74 23 49 c7 c0 92 e1 60 81 48 8d 53 18 89 c1 4c 89 c6 48 c7 c7 f0 85 7d 81 31 c0 e8 71 fa ff ff e8 58 0e 00 00 31 f6 <c7> 03 00 00 00 00 48 89 da 48 c7 c7 20 c7 a5 81 e8 d0 ec b3 ff
        [  205.990916] RIP  [<ffffffff8154fecb>] do_init_module+0x8c/0x1ba
        [  205.990940]  RSP <ffff8800794bbd58>
        [  205.990953] CR2: ffffffffa08d2fc0
      
      With !CONFIG_DEBUG_SET_MODULE_RONX, module text and rodata pages are
      writable, and the debug_align() macro allows the module struct to share
      a page with executable text.  When klp_write_module_reloc() calls
      set_memory_ro() on the page, it effectively turns the module struct into
      a read-only structure, resulting in a page fault when load_module() does
      "mod->state = MODULE_STATE_LIVE".
Reported-by: Cyril B. <cbay@alwaysdata.com>
Tested-by: Cyril B. <cbay@alwaysdata.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      e2391a2d
• mm: mlock: add new mlock system call · a8ca5d0e
Committed by Eric B Munson
      With the refactored mlock code, introduce a new system call for mlock.
      The new call will allow the user to specify what lock states are being
added.  mlock2 is trivial at the moment, but a follow-on patch will add a
new mlock state, making it useful.
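A hedged user-space sketch of invoking the new call before a libc wrapper
exists (the fallback syscall number below is the x86_64 value and is an
assumption; the flags argument is 0 here because this patch adds no flags
yet):

  #define _GNU_SOURCE
  #include <stddef.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef __NR_mlock2
  # define __NR_mlock2 325	/* assumption: x86_64 syscall number */
  #endif

  /* Sketch: raw mlock2(addr, len, flags); with flags == 0 it behaves like
   * plain mlock(). */
  static long mlock2_raw(const void *addr, size_t len, int flags)
  {
          return syscall(__NR_mlock2, addr, len, flags);
  }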
Signed-off-by: Eric B Munson <emunson@akamai.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a8ca5d0e
• kasan: move KASAN_SANITIZE in arch/x86/boot/Makefile · c63f06dd
Committed by Andrey Konovalov
      Move KASAN_SANITIZE in arch/x86/boot/Makefile above the comment
      related to SVGA_MODE, since the comment refers to 'the next line'.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c63f06dd
• kasan: update log messages · 25add7ec
Committed by Andrey Konovalov
      We decided to use KASAN as the short name of the tool and
KernelAddressSanitizer as the full one.  Update the log messages accordingly.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      25add7ec
  4. 05 Nov 2015: 1 commit
• KVM: VMX: Fix commit which broke PML · a3eaa864
Committed by Kai Huang
I found that PML has been broken since the commit below:
      
      	commit feda805f
      	Author: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      	Date:   Wed Sep 9 14:05:55 2015 +0800
      
      	KVM: VMX: unify SECONDARY_VM_EXEC_CONTROL update
      
      	Unify the update in vmx_cpuid_update()
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      	[Rewrite to use vmcs_set_secondary_exec_control. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      
The reason is that in the above commit vmx_cpuid_update calls
vmx_secondary_exec_control, in which the SECONDARY_EXEC_ENABLE_PML bit is
currently cleared unconditionally (PML is enabled when the vcpu is created).
Therefore, if vmx_cpuid_update is called after the vcpu is created, PML will
be disabled unexpectedly while the log-dirty code still thinks PML is in use.
      
Fix this by clearing SECONDARY_EXEC_ENABLE_PML in vmx_secondary_exec_control
only when PML is not supported or not enabled (!enable_pml).  This is more
reasonable, as PML is currently either always enabled or always disabled.
With this change, explicitly updating SECONDARY_EXEC_ENABLE_PML in
vmx_enable{disable}_pml is no longer needed, so also rename
vmx_enable{disable}_pml to vmx_create{destroy}_pml_buffer.
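The shape of the fix, as a hedged sketch (the helper name here is invented;
SECONDARY_EXEC_ENABLE_PML and enable_pml are the real VMX symbols):

  /* Sketch: strip the PML bit from the computed secondary exec controls
   * only when PML is globally disabled, so a later vmx_cpuid_update()
   * cannot turn PML off behind the log-dirty code's back. */
  static u32 pml_adjust_secondary_exec_control(u32 exec_control, bool pml_enabled)
  {
          if (!pml_enabled)
                  exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
          return exec_control;
  }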
      
Fixes: feda805f
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
      [While at it, change a wrong ASSERT to an "if".  The condition can happen
       if creating the VCPU fails with ENOMEM. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a3eaa864
  5. 04 Nov 2015: 11 commits