1. 19 Feb 2015 (1 commit)
  2. 23 Jan 2015 (3 commits)
    •
      x86, tls: Interpret an all-zero struct user_desc as "no segment" · 3669ef9f
      Andy Lutomirski authored
      The Witcher 2 did something like this to allocate a TLS segment index:
      
              struct user_desc u_info;
              bzero(&u_info, sizeof(u_info));
              u_info.entry_number = (uint32_t)-1;
      
              syscall(SYS_set_thread_area, &u_info);
      
      Strictly speaking, this code was never correct.  It should have set
      read_exec_only and seg_not_present to 1 to indicate that it wanted
      to find a free slot without putting anything there, or it should
      have put something sensible in the TLS slot if it wanted to allocate
      a TLS entry for real.  The actual effect of this code was to
      allocate a bogus segment that could be used to exploit espfix.
      
      The set_thread_area hardening patches changed the behavior, causing
      set_thread_area to return -EINVAL and crashing the game.
      
      This changes set_thread_area to interpret this as a request to find
      a free slot and to leave it empty, which isn't *quite* what the game
      expects but should be close enough to keep it working.  In
      particular, using the code above to allocate two segments will
      allocate the same segment both times.
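      For reference, a sketch of the request the text above says the game
      should have made: mark the descriptor empty via read_exec_only and
      seg_not_present and let the kernel pick the slot (the headers in the
      comment and the error handling are assumptions, not part of the
      original report):
      
              /* Needs <asm/ldt.h>, <sys/syscall.h>, <unistd.h>,
               * <string.h> and <stdint.h>. */
              struct user_desc u_info;
      
              memset(&u_info, 0, sizeof(u_info));
              u_info.entry_number    = (uint32_t)-1;  /* kernel picks a free slot */
              u_info.read_exec_only  = 1;             /* together these mark the  */
              u_info.seg_not_present = 1;             /* descriptor as "empty"    */
      
              if (syscall(SYS_set_thread_area, &u_info) == 0) {
                      /* u_info.entry_number now holds the chosen slot */
              }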
      
      According to FrostbittenKing on GitHub, this fixes The Witcher 2.
      
      If this somehow still causes problems, we could instead allocate
      a limit==0 32-bit data segment, but that seems rather ugly to me.
      
      Fixes: 41bdc785 ("x86/tls: Validate TLS entries to protect espfix")
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: stable@vger.kernel.org
      Cc: torvalds@linux-foundation.org
      Link: http://lkml.kernel.org/r/0cb251abe1ff0958b8e468a9a9a905b80ae3a746.1421954363.git.luto@amacapital.net
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      3669ef9f
    •
      x86, tls, ldt: Stop checking lm in LDT_empty · e30ab185
      Andy Lutomirski authored
      32-bit programs don't have an lm bit in their ABI, so they can't
      reliably cause LDT_empty to return true without resorting to memset.
      They shouldn't need to do this.
      
      This should fix a longstanding, if minor, issue in all 64-bit kernels
      as well as a potential regression in the TLS hardening code.
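      For context, LDT_empty() is essentially a field-by-field test of
      struct user_desc.  A rough sketch of the check (illustrative, not the
      verbatim kernel macro) shows why an lm test does not belong there:
      
              /* Sketch: an "empty" descriptor is all zeroes except
               * read_exec_only and seg_not_present.  An lm test here is
               * exactly what 32-bit callers cannot reliably satisfy, since
               * their ABI does not define that field. */
              #define LDT_empty(info)                                \
                      ((info)->base_addr        == 0  &&             \
                       (info)->limit            == 0  &&             \
                       (info)->contents         == 0  &&             \
                       (info)->read_exec_only   == 1  &&             \
                       (info)->seg_32bit        == 0  &&             \
                       (info)->limit_in_pages   == 0  &&             \
                       (info)->seg_not_present  == 1  &&             \
                       (info)->useable          == 0)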
      
      Fixes: 41bdc785 ("x86/tls: Validate TLS entries to protect espfix")
      Cc: stable@vger.kernel.org
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: torvalds@linux-foundation.org
      Link: http://lkml.kernel.org/r/72a059de55e86ad5e2935c80aa91880ddf19d07c.1421954363.git.luto@amacapital.net
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e30ab185
    •
      x86, mpx: Fix potential performance issue on unmaps · c922228e
      Dave Hansen authored
      The 3.19 merge window saw some TLB modifications merged which caused a
      performance regression. They were fixed in commit 045bbb9fa.
      
      Once that fix was applied, I also noticed that there was a small
      but intermittent regression still present.  It was not present
      consistently enough to bisect reliably, but I'm fairly confident
      that it came from (my own) MPX patches.  The source was reading
      a relatively unused field in the mm_struct via arch_unmap.
      
      I also noted that this code was in the main instruction flow of
      do_munmap() and probably had more icache impact than we want.
      
      This patch does two things:
      1. Adds a static (via Kconfig) and dynamic (via cpuid) check
         for MPX with cpu_feature_enabled().  This keeps us from
         reading that cacheline in the mm and trades it for a check
         of the global CPUID variables at least on CPUs without MPX.
      2. Adds an unlikely() to ensure that the MPX call ends up out
         of the main instruction flow in do_munmap().  I've added
         a detailed comment about why this was done and why we want
         it even on systems where MPX is present.  A rough sketch of
         the combined check follows this list.
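      The sketch (assuming the mpx_notify_unmap() helper from the MPX
      series; an illustration, not the exact hunk):
      
              /* Keep the MPX work out of do_munmap()'s hot path:
               * cpu_feature_enabled() is compile-time false when MPX is
               * configured out, and a cheap check of cached CPUID data
               * otherwise, so the mm_struct field is never read on CPUs
               * without MPX. */
              static inline void arch_unmap(struct mm_struct *mm,
                                            struct vm_area_struct *vma,
                                            unsigned long start,
                                            unsigned long end)
              {
                      if (unlikely(cpu_feature_enabled(X86_FEATURE_MPX)))
                              mpx_notify_unmap(mm, vma, start, end);
              }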
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: luto@amacapital.net
      Cc: Dave Hansen <dave@sr71.net>
      Link: http://lkml.kernel.org/r/20150108223021.AEEAB987@viggo.jf.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c922228e
  3. 20 Jan 2015 (1 commit)
  4. 24 Dec 2014 (1 commit)
    •
      x86, vdso: Use asm volatile in __getcpu · 1ddf0b1b
      Andy Lutomirski authored
      In Linux 3.18 and below, GCC hoists the lsl instructions in the
      pvclock code all the way to the beginning of __vdso_clock_gettime,
      slowing the non-paravirt case significantly.  For unknown reasons,
      presumably related to the removal of a branch, the performance issue
      is gone as of
      
      e76b027e x86,vdso: Use LSL unconditionally for vgetcpu
      
      but I don't trust GCC enough to expect the problem to stay fixed.
      
      There should be no correctness issue, because the __getcpu calls in
      __vdso_clock_gettime were never necessary in the first place.
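      The change itself boils down to marking the LSL-based segment read as
      volatile so GCC cannot hoist or merge it.  Roughly (a sketch of the
      helper, not a verbatim diff):
      
              /* __getcpu() reads the CPU/node number that the kernel
               * encodes in a GDT segment limit.  "asm volatile" keeps GCC
               * from hoisting this lsl out of the code path that needs it. */
              static inline unsigned int __getcpu(void)
              {
                      unsigned int p;
      
                      asm volatile ("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
                      return p;
              }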
      
      Note to stable maintainers: In 3.18 and below, depending on
      configuration, gcc 4.9.2 generates code like this:
      
           9c3:       44 0f 03 e8             lsl    %ax,%r13d
           9c7:       45 89 eb                mov    %r13d,%r11d
           9ca:       0f 03 d8                lsl    %ax,%ebx
      
      This patch won't apply as is to any released kernel, but I'll send a
      trivial backported version if needed.
      
      Fixes: 51c19b4f ("x86: vdso: pvclock gettime support")
      Cc: stable@vger.kernel.org # 3.8+
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      1ddf0b1b
  5. 18 Dec 2014 (2 commits)
  6. 16 Dec 2014 (17 commits)
  7. 12 Dec 2014 (2 commits)
    •
      arch: Add lightweight memory barriers dma_rmb() and dma_wmb() · 1077fa36
      Alexander Duyck authored
      There are a number of situations where the mandatory barriers rmb() and
      wmb() are used to order memory/memory operations in device drivers, and
      those barriers are much heavier than they actually need to be.  For
      example, on PowerPC wmb() uses the heavy-weight sync instruction, when
      for coherent memory operations all that is really needed is an lwsync
      or eieio instruction.
      
      This commit adds a coherent-only version of the mandatory memory
      barriers rmb() and wmb().  In most cases this should result in the
      barrier being the same as the SMP barriers for the SMP case; however,
      in some cases we use a barrier that is somewhere in between rmb() and
      smp_rmb().  For example, on ARM the rmb barriers break down as follows:
      
        Barrier   Call     Explanation
        --------- -------- ----------------------------------
        rmb()     dsb()    Data synchronization barrier - system
        dma_rmb() dmb(osh) Data memory barrier - outer shareable
        smp_rmb() dmb(ish) Data memory barrier - inner shareable
      
      These new barriers are not as safe as the standard rmb() and wmb().
      Specifically they do not guarantee ordering between coherent and incoherent
      memories.  The primary use case for these would be to enforce ordering of
      reads and writes when accessing coherent memory that is shared between the
      CPU and a device.
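      A typical pattern this is aimed at looks roughly like the following;
      the descriptor layout, ring structure and DESC_OWNED_BY_HW flag are
      hypothetical, invented purely for illustration:
      
              struct tx_desc {                 /* hypothetical descriptor  */
                      __le64 addr;             /* in coherent DMA memory   */
                      __le32 len;
                      __le32 flags;
              };
      
              struct tx_ring {                 /* hypothetical ring state  */
                      void __iomem *doorbell;
              };
      
              #define DESC_OWNED_BY_HW 0x1     /* hypothetical flag        */
      
              static void ring_post_tx(struct tx_ring *ring, struct tx_desc *desc,
                                       dma_addr_t dma_addr, u32 len, u32 tail)
              {
                      desc->addr = cpu_to_le64(dma_addr);
                      desc->len  = cpu_to_le32(len);
      
                      /* Both stores hit coherent memory, so dma_wmb() is
                       * enough to order the body before the ownership flag. */
                      dma_wmb();
                      desc->flags = cpu_to_le32(DESC_OWNED_BY_HW);
      
                      /* The doorbell is MMIO, so the heavier wmb() is still
                       * needed before kicking the device. */
                      wmb();
                      writel(tail, ring->doorbell);
              }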
      
      It may also be noted that there is no dma_mb().  Most architectures don't
      provide a good mechanism for performing a coherent only full barrier without
      resorting to the same mechanism used in mb().  As such there isn't much to
      be gained in trying to define such a function.
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: David Miller <davem@davemloft.net>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1077fa36
    •
      arch: Cleanup read_barrier_depends() and comments · 8a449718
      Alexander Duyck authored
      This patch is meant to clean up the handling of read_barrier_depends
      and smp_read_barrier_depends.  In multiple spots in the kernel headers
      read_barrier_depends is defined as "do {} while (0)"; however, we then
      go into the SMP vs non-SMP sections and have the SMP version reference
      read_barrier_depends, while the non-SMP version defines it as yet
      another empty do/while.
      
      With this commit I went through and cleaned out the duplicate
      definitions and reduced the number of definitions down to two per
      header.  In addition, I moved the 50-line comments for the macro from
      the x86 and mips headers, which defined it as an empty do/while, to the
      headers that actually define the macro, alpha and blackfin.
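      The end result per header is roughly the following pair of stubs (the
      CONFIG_ guard name below is made up for illustration; only alpha and
      blackfin keep a real barrier behind read_barrier_depends()):
      
              #ifdef CONFIG_ARCH_NEEDS_READ_BARRIER_DEPENDS  /* hypothetical */
              #define read_barrier_depends()          mb()
              #define smp_read_barrier_depends()      read_barrier_depends()
              #else
              #define read_barrier_depends()          do { } while (0)
              #define smp_read_barrier_depends()      do { } while (0)
              #endif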
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8a449718
  8. 11 Dec 2014 (4 commits)
    •
      x86/asm: Unify segment selector defines · be9d1738
      Borislav Petkov authored
      The defines are identical on 32-bit and 64-bit, so unify them.  No
      functional change.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1418127959-29902-1-git-send-email-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      be9d1738
    •
      x86/mm: Fix zone ranges boot printout · c072b90c
      Xishi Qiu authored
      This is the usual physical memory layout boot printout:
      	...
      	[    0.000000] Zone ranges:
      	[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
      	[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
      	[    0.000000]   Normal   [mem 0x100000000-0xc3fffffff]
      	[    0.000000] Movable zone start for each node
      	[    0.000000] Early memory node ranges
      	[    0.000000]   node   0: [mem 0x00001000-0x00099fff]
      	[    0.000000]   node   0: [mem 0x00100000-0xbf78ffff]
      	[    0.000000]   node   0: [mem 0x100000000-0x63fffffff]
      	[    0.000000]   node   1: [mem 0x640000000-0xc3fffffff]
      	...
      
      This is the log when we set "mem=2G" on the boot cmdline:
      	...
      	[    0.000000] Zone ranges:
      	[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
      	[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]  // should be 0x7fffffff, right?
      	[    0.000000]   Normal   empty
      	[    0.000000] Movable zone start for each node
      	[    0.000000] Early memory node ranges
      	[    0.000000]   node   0: [mem 0x00001000-0x00099fff]
      	[    0.000000]   node   0: [mem 0x00100000-0x7fffffff]
      	...
      
      This patch fixes the printout; the following log shows the correct
      ranges:
      	...
      	[    0.000000] Zone ranges:
      	[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
      	[    0.000000]   DMA32    [mem 0x01000000-0x7fffffff]
      	[    0.000000]   Normal   empty
      	[    0.000000] Movable zone start for each node
      	[    0.000000] Early memory node ranges
      	[    0.000000]   node   0: [mem 0x00001000-0x00099fff]
      	[    0.000000]   node   0: [mem 0x00100000-0x7fffffff]
      	...
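      One plausible shape of the fix, sketched from the symptom above
      (illustrative only, not necessarily the exact hunk): clamp each zone's
      upper pfn to the real end of memory so that a "mem=" limit shows up in
      the printed ranges instead of the architectural zone maximum:
      
              max_zone_pfns[ZONE_DMA]    = min(MAX_DMA_PFN,   max_low_pfn);
              max_zone_pfns[ZONE_DMA32]  = min(MAX_DMA32_PFN, max_low_pfn);
              max_zone_pfns[ZONE_NORMAL] = max_low_pfn;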
      Suggested-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Linux MM <linux-mm@kvack.org>
      Cc: <dave@sr71.net>
      Cc: Rik van Riel <riel@redhat.com>
      Link: http://lkml.kernel.org/r/5487AB3D.6070306@huawei.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c072b90c
    •
      mm: fix huge zero page accounting in smaps report · c164e038
      Kirill A. Shutemov authored
      Like the small zero page, the huge zero page should not be accounted in
      the smaps report as a normal page.
      
      For small pages we rely on vm_normal_page() to filter out the zero page,
      but vm_normal_page() is not designed to handle pmds.  We only get here
      due to a hackish cast from pmd to pte in smaps_pte_range() -- the pte
      and pmd formats are not necessarily compatible on each and every
      architecture.
      
      Let's add a separate codepath to handle pmds.  follow_trans_huge_pmd()
      will detect the huge zero page for us.
      
      We need a pmd_dirty() helper to do this properly.  The patch adds it
      to the THP-enabled architectures which don't yet have one.
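      The helper itself boils down to testing the dirty bit of the pmd.  On
      x86, for example, it is roughly (a sketch; other architectures test
      their own equivalent of the dirty bit):
      
              static inline int pmd_dirty(pmd_t pmd)
              {
                      return pmd_flags(pmd) & _PAGE_DIRTY;
              }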
      
      [akpm@linux-foundation.org: use do_div to fix 32-bit build]
      Signed-off-by: "Kirill A. Shutemov" <kirill@shutemov.name>
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Tested-by: Fengwei Yin <yfw.kernel@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c164e038
    •
      net, lib: kill arch_fast_hash library bits · 0cb6c969
      Daniel Borkmann authored
      As there are now no remaining users of arch_fast_hash(), let's kill
      it entirely.
      
      This basically reverts commit 71ae8aac ("lib: introduce arch
      optimized hash library") and follow-up work, e.g. commit
      23721754 ("lib: hash: follow-up fixups for arch hash"),
      commit e3fec2f7 ("lib: Add missing arch generic-y entries for
      asm-generic/hash.h") and last but not least commit 6a02652d
      ("perf tools: Fix include for non x86 architectures").
      
      Cc: Francesco Fusco <fusco@ntop.org>
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0cb6c969
  9. 08 Dec 2014 (2 commits)
  10. 06 Dec 2014 (1 commit)
    •
      x86, microcode: Reload microcode on resume · fbae4ba8
      Borislav Petkov authored
      Normally, we do reapply microcode on resume. However, in the cases where
      that microcode comes from the early loader and the late loader hasn't
      been utilized yet, there's no easy way for us to go and apply the patch
      applied during boot by the early loader.
      
      Thus, reuse the patch stashed by the early loader for the BSP.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      fbae4ba8
  11. 05 Dec 2014 (3 commits)
  12. 04 Dec 2014 (3 commits)
    •
      xen: switch to linear virtual mapped sparse p2m list · 054954eb
      Juergen Gross authored
      At start of day, the Xen hypervisor presents a contiguous mfn list
      to a pv-domain.  In order to support sparse memory, this mfn list is
      accessed via a three-level p2m tree built early in the boot process.
      Whenever the system needs the mfn associated with a pfn, this tree is
      used to find the mfn.
      
      Instead of using a software-walked tree to access a specific mfn list
      entry, this patch creates a virtual address area for the entire
      possible mfn list, including memory holes.  The holes are covered by
      mapping a pre-defined page consisting only of "invalid mfn" entries.
      An mfn entry can then be accessed by just using the virtual base
      address of the mfn list and the pfn as an index into that list.  This
      speeds up the (hot) path of determining the mfn of a pfn.
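      A sketch of the lookup this enables (the variable and function names
      here are illustrative; the point is that the hot path becomes a plain
      array index):
      
              /* Virtual base of the complete, linearly mapped mfn list;
               * holes are backed by a shared page of "invalid mfn" entries,
               * so no hole checks are needed on the hot path. */
              extern unsigned long *xen_p2m_addr;
      
              static inline unsigned long pfn_to_mfn_linear(unsigned long pfn)
              {
                      return xen_p2m_addr[pfn];
              }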
      
      A kernel build on a Dell Latitude E6440 (2 cores, HT) in a 64-bit Dom0
      showed the following improvements:
      
      Elapsed time: 32:50 ->  32:35
      System:       18:07 ->  17:47
      User:        104:00 -> 103:30
      
      Tested with following configurations:
      - 64 bit dom0, 8GB RAM
      - 64 bit dom0, 128 GB RAM, PCI-area above 4 GB
      - 32 bit domU, 512 MB, 8 GB, 43 GB (more wouldn't work even without
                                          the patch)
      - 32 bit domU, ballooning up and down
      - 32 bit domU, save and restore
      - 32 bit domU with PCI passthrough
      - 64 bit domU, 8 GB, 2049 MB, 5000 MB
      - 64 bit domU, ballooning up and down
      - 64 bit domU, save and restore
      - 64 bit domU with PCI passthrough
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      054954eb
    •
      xen: Hide get_phys_to_machine() to be able to tune common path · 0aad5689
      Juergen Gross authored
      Today get_phys_to_machine() is always called when the mfn for a pfn is
      to be obtained.  Add a wrapper __pfn_to_mfn() as an inline function so
      that calling get_phys_to_machine() can be avoided where possible once
      the switch to a linearly mapped p2m list has been done.
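      A minimal sketch of the wrapper idea: callers switch to __pfn_to_mfn(),
      which for now simply forwards, so that only its body has to change once
      the linear p2m list exists (a sketch, not the exact hunk):
      
              static inline unsigned long __pfn_to_mfn(unsigned long pfn)
              {
                      /* Later replaceable by a direct p2m list lookup. */
                      return get_phys_to_machine(pfn);
              }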
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      0aad5689
    •
      x86: Introduce function to get pmd entry pointer · 792230c3
      Juergen Gross authored
      Introduce lookup_pmd_address() to get the address of the pmd entry
      related to a virtual address in the current address space.  This
      function is needed to support a virtually mapped sparse p2m list in
      xen pv domains, as in that case we need the address of the pmd entry,
      not that of the pte.
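      A rough sketch of such a walk, stopping at the pmd level instead of
      descending to the pte (illustrative; the real function may carry extra
      checks, e.g. for large pud mappings):
      
              pmd_t *lookup_pmd_address(unsigned long address)
              {
                      pgd_t *pgd;
                      pud_t *pud;
      
                      pgd = pgd_offset_k(address);
                      if (pgd_none(*pgd))
                              return NULL;
      
                      pud = pud_offset(pgd, address);
                      if (pud_none(*pud) || pud_large(*pud))
                              return NULL;
      
                      return pmd_offset(pud, address);
              }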
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      792230c3