1. 09 Aug 2014, 2 commits
  2. 26 Jul 2014, 1 commit
  3. 24 Jul 2014, 2 commits
    • clocksource: Move cycle_last validation to core code · 09ec5442
      Authored by Thomas Gleixner
      The only user of the cycle_last validation is the x86 TSC. In order to
      provide NMI-safe accessor functions for clock monotonic and
      monotonic_raw we need to do that validation in the core.
      
      We can't do the TSC-specific

          if (now < cycle_last)
                  now = cycle_last;
      
      for the other, wrapping clocksources, but the TSC has
      CLOCKSOURCE_MASK(64), which does not actually mask out anything, so if
      now is less than cycle_last the subtraction gives a negative result.
      We can therefore check for that in clocksource_delta() and return 0 in
      that case.

      Implement and enable it for x86 (see the sketch after this entry).
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      09ec5442
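      As context for the change above, a minimal sketch of a delta helper
      with the described clamping; the exact kernel guard (a config option
      enabling this validation) and the surrounding types are assumptions
      here, not taken from the patch:

          /* Sketch: masked delta that clamps a "negative" TSC delta to 0.
           * With CLOCKSOURCE_MASK(64) nothing is masked out, so a now that
           * is behind last would otherwise produce a huge bogus value. */
          static inline cycle_t clocksource_delta(cycle_t now, cycle_t last,
                                                  cycle_t mask)
          {
                  cycle_t ret = (now - last) & mask;

                  return (s64) ret > 0 ? ret : 0;
          }

      Clocksources with a narrower mask are unaffected, because their masked
      delta is always non-negative when interpreted as s64.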
    • ktime: Kill non-scalar ktime_t implementation for 2038 · 24e4a8c3
      Authored by John Stultz
      The non-scalar ktime_t implementation is basically a timespec which
      has to be changed to support dates past 2038 on 32-bit systems.
      
      This patch removes the non-scalar ktime_t implementation, forcing
      the scalar s64 nanosecond version on all architectures.
      
      This may have additional performance overhead on some 32-bit systems
      when converting between ktime_t and timespec structures; however, the
      majority of 32-bit systems (arm and i386) were already using the
      scalar ktime_t, so no performance regressions will be seen on those
      platforms.
      
      On affected platforms I'm open to finding optimizations, including
      avoiding converting to timespecs where possible (a conversion-cost
      sketch follows this entry).
      
      [ tglx: We can now clean up the ktime_t.tv64 mess, but that's a
        different issue and we can throw a coccinelle script at it ]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      24e4a8c3
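      To illustrate the conversion overhead mentioned above, a small
      standalone sketch (illustrative only, not the kernel's actual types or
      helpers) of a scalar nanosecond value versus a timespec-like pair:

          #include <stdint.h>
          #include <stdio.h>

          #define NSEC_PER_SEC 1000000000LL

          typedef int64_t ktime_ns;               /* scalar "ktime": s64 nanoseconds */
          struct ts { int64_t sec; long nsec; };  /* timespec-like pair              */

          static ktime_ns ts_to_ktime(struct ts t)
          {
                  return t.sec * NSEC_PER_SEC + t.nsec;   /* cheap: multiply + add */
          }

          static struct ts ktime_to_ts(ktime_ns kt)
          {
                  /* The 64-bit divide/modulo below is the cost a 32-bit CPU
                   * pays when converting back to a split representation. */
                  struct ts t = { kt / NSEC_PER_SEC, (long)(kt % NSEC_PER_SEC) };
                  return t;
          }

          int main(void)
          {
                  struct ts t = { 2147483648LL, 0 };      /* a post-2038 second count */
                  ktime_ns kt = ts_to_ktime(t);

                  printf("%lld ns -> %lld s\n", (long long)kt,
                         (long long)ktime_to_ts(kt).sec);
                  return 0;
          }

      The scalar form survives 2038 on 32-bit systems because a signed
      64-bit nanosecond count reaches to roughly the year 2262.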
  4. 23 Jul 2014, 1 commit
  5. 21 Jul 2014, 2 commits
  6. 19 Jul 2014, 1 commit
  7. 16 Jul 2014, 1 commit
    • locking/mutex: Disable optimistic spinning on some architectures · 4badad35
      Authored by Peter Zijlstra
      The optimistic spin code assumes regular stores and cmpxchg() play nice;
      this is found to not be true for at least: parisc, sparc32, tile32,
      metag-lock1, arc-!llsc and hexagon.
      
      There is further wreckage, but this in particular seemed easy to
      trigger, so blacklist it.

      Opt in for known-good archs (a sketch of the failure mode follows this
      entry).
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Reported-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: John David Anglin <dave.anglin@bell.net>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: stable@vger.kernel.org
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: sparclinux@vger.kernel.org
      Link: http://lkml.kernel.org/r/20140606175316.GV13930@laptop.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4badad35
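      A minimal sketch (simplified and assumed, not the actual arch code) of
      why plain stores and an emulated cmpxchg() don't mix: on architectures
      such as sparc32 or parisc, cmpxchg() is emulated with a spinlock, so
      only other cmpxchg() callers are serialized and a plain store can race
      with the emulation's read-modify-write and get lost:

          /* Pseudo-emulation of cmpxchg() via a lock, as some archs do. */
          int emulated_cmpxchg(int *ptr, int old, int new)
          {
                  int prev;

                  /* arch_spin_lock(&hashed_lock_for(ptr)); */
                  prev = *ptr;
                  if (prev == old)
                          *ptr = new;     /* a concurrent plain store to *ptr
                                           * between the load above and this
                                           * store is silently overwritten */
                  /* arch_spin_unlock(&hashed_lock_for(ptr)); */
                  return prev;
          }

          void plain_store(int *ptr, int val)
          {
                  *ptr = val;             /* never takes the hashed lock */
          }

      The optimistic-spinning queue mixes exactly these two kinds of access
      on its lock words, hence the opt-in capability (ARCH_SUPPORTS_ATOMIC_RMW)
      for architectures whose cmpxchg() is a real atomic instruction.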
  8. 08 Jul 2014, 1 commit
  9. 17 Jun 2014, 1 commit
  10. 06 Jun 2014, 1 commit
  11. 05 Jun 2014, 4 commits
    • mm: x86 pgtable: require X86_64 for soft-dirty tracker · 2bf01f9f
      Authored by Cyrill Gorcunov
      Tracking dirty status on 2-level page tables requires very ugly
      macros, and given how old the machines that can only run without PAE
      mode are, let's drop the soft-dirty tracker from them for code
      simplicity (note that I can't drop all the macros from 2-level page
      tables yet, since _PAGE_BIT_PROTNONE and _PAGE_BIT_FILE are still used
      even without the tracker).

      Linus proposed to completely rip out soft-dirty support on x86-32
      (even with PAE), and since for CRIU we're not planning to support
      native x86-32 mode, let's do that.

      (The soft-dirty tracker is a relatively new feature, mostly used by
      CRIU, so I don't expect this API change to cause problems for
      userspace; a userspace-side sketch of the interface follows this
      entry.)
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Steven Noonan <steven@uplinklabs.net>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2bf01f9f
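      For reference, a userspace sketch of the soft-dirty interface that
      CRIU consumes (based on Documentation/vm/soft-dirty.txt: bit 55 of a
      pagemap entry; the helper below is an illustrative sketch, not CRIU
      code):

          #include <stdint.h>
          #include <stdio.h>
          #include <sys/types.h>
          #include <unistd.h>

          #define PAGEMAP_SOFT_DIRTY (1ULL << 55)

          /* Returns 1 if the page containing vaddr was written since the
           * soft-dirty bits were last cleared (by writing "4" to
           * /proc/<pid>/clear_refs), 0 if clean, -1 on error. */
          int page_is_soft_dirty(pid_t pid, unsigned long vaddr)
          {
                  char path[64];
                  uint64_t entry;
                  FILE *f;
                  int ret = -1;

                  snprintf(path, sizeof(path), "/proc/%d/pagemap", (int)pid);
                  f = fopen(path, "rb");
                  if (!f)
                          return -1;
                  /* one 8-byte entry per virtual page, indexed by page number */
                  if (fseek(f, (long)(vaddr / sysconf(_SC_PAGESIZE)) * 8, SEEK_SET) == 0 &&
                      fread(&entry, sizeof(entry), 1, f) == 1)
                          ret = !!(entry & PAGEMAP_SOFT_DIRTY);
                  fclose(f);
                  return ret;
          }

      After this patch the feature simply isn't compiled in on 32-bit x86,
      so this bit reads as zero there.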
    • x86: enable DMA CMA with swiotlb · 9c5a3621
      Authored by Akinobu Mita
      The DMA Contiguous Memory Allocator support on x86 is disabled when
      the swiotlb config option is enabled. So DMA CMA is always disabled on
      x86_64, because swiotlb is always enabled there. This patch adds
      support for DMA CMA even when the swiotlb config option is enabled.

      The contiguous memory allocator on x86 is integrated into
      dma_generic_alloc_coherent(), which is the .alloc callback in
      nommu_dma_ops for dma_alloc_coherent().

      x86_swiotlb_alloc_coherent(), the .alloc callback in swiotlb_dma_ops,
      first tries to allocate with dma_generic_alloc_coherent() and then
      falls back to swiotlb_alloc_coherent().

      The main part of supporting DMA CMA with swiotlb is changing
      x86_swiotlb_free_coherent(), the .free callback in swiotlb_dma_ops for
      dma_free_coherent(), so that it can distinguish memory allocated by
      dma_generic_alloc_coherent() from memory allocated by
      swiotlb_alloc_coherent() and release the former with
      dma_generic_free_coherent(), which can handle contiguous memory (a
      sketch follows this entry). This change requires making
      is_swiotlb_buffer() a global function.

      The .free callback in the dma_map_ops for amd_gart and sta2x11 also
      needs to change, because those dma_ops also use
      dma_generic_alloc_coherent().
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Don Dutile <ddutile@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9c5a3621
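      A sketch of the .free callback logic described above (simplified;
      argument types and the exact call sequence in the real patch may
      differ):

          /* Free via swiotlb only if the buffer came from the swiotlb pool;
           * otherwise it came from dma_generic_alloc_coherent(), which may
           * have handed out CMA memory that swiotlb cannot release. */
          static void x86_swiotlb_free_coherent(struct device *dev, size_t size,
                                                void *vaddr, dma_addr_t dma_addr,
                                                struct dma_attrs *attrs)
          {
                  if (is_swiotlb_buffer(dma_to_phys(dev, dma_addr)))
                          swiotlb_free_coherent(dev, size, vaddr, dma_addr);
                  else
                          dma_generic_free_coherent(dev, size, vaddr, dma_addr,
                                                    attrs);
          }

      The allocation side stays unchanged: CMA-backed memory continues to
      come from dma_generic_alloc_coherent(), with swiotlb as the fallback.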
    • x86: require x86-64 for automatic NUMA balancing · 4468dd76
      Authored by Mel Gorman
      32-bit support for NUMA is an oddity on its own, but with automatic
      NUMA balancing on top there is a reasonable risk that the CPUPID
      information cannot be stored in the page flags. This patch removes
      support for automatic NUMA balancing on 32-bit x86 (a rough sketch of
      the space problem follows this entry).
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Anvin <hpa@zytor.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Steven Noonan <steven@uplinklabs.net>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4468dd76
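      A rough standalone sketch of the space problem mentioned above (all
      bit counts below are illustrative assumptions, not the kernel's exact
      constants):

          #include <stdio.h>

          int main(void)
          {
                  /* On a 32-bit kernel, page->flags is one 32-bit word that
                   * already holds the PG_* flags plus zone/node/section
                   * numbers; automatic NUMA balancing additionally wants to
                   * cache a "last cpupid" value (cpu id + an 8-bit pid hash). */
                  const int word_bits      = 32;
                  const int pageflag_bits  = 22;  /* assumed: PG_* flags in use */
                  const int zone_node_bits = 5;   /* assumed: zones + nodes     */
                  const int cpu_bits       = 9;   /* assumed: NR_CPUS up to 512 */
                  const int pid_bits       = 8;

                  int spare  = word_bits - pageflag_bits - zone_node_bits;
                  int wanted = cpu_bits + pid_bits;

                  printf("spare=%d wanted=%d -> %s\n", spare, wanted,
                         wanted <= spare ? "cpupid fits in page->flags"
                                         : "cpupid does not fit in page->flags");
                  return 0;
          }

      With 64-bit page->flags on x86-64 the spare space is ample, which is
      why the feature is kept there.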
    • hugetlb: restrict hugepage_migration_support() to x86_64 · c177c81e
      Authored by Naoya Horiguchi
      Currently hugepage migration is available for all archs which support
      pmd-level hugepages, but it is tested only on x86_64 and there are
      bugs on other archs. So, to avoid breaking those archs, this patch
      limits the availability strictly to x86_64 until developers of other
      archs get interested in enabling this feature.

      Simply disabling hugepage migration on non-x86_64 archs is not enough
      to fix the reported problem where sys_move_pages() hits the BUG_ON()
      in follow_page(FOLL_GET), so let's fix this by checking whether
      hugepage migration is supported in vma_migratable() (a sketch of the
      check follows this entry).
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Reported-by: Michael Ellerman <mpe@ellerman.id.au>
      Tested-by: Michael Ellerman <mpe@ellerman.id.au>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: <stable@vger.kernel.org>	[3.12+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c177c81e
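      A sketch of the vma_migratable() check described above (simplified;
      the config symbol gating the check is an assumption here):

          /* Refuse to treat hugetlb VMAs as migratable on architectures
           * that have not enabled hugepage migration (only x86_64 so far),
           * so sys_move_pages() never reaches follow_page(FOLL_GET) on
           * such mappings. */
          static inline int vma_migratable(struct vm_area_struct *vma)
          {
                  if (vma->vm_flags & (VM_IO | VM_PFNMAP))
                          return 0;
          #ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
                  if (vma->vm_flags & VM_HUGETLB)
                          return 0;
          #endif
                  return 1;
          }

      Callers such as the mbind()/move_pages() paths then skip these VMAs
      instead of attempting a migration that the architecture cannot handle.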
  12. 16 May 2014, 2 commits
  13. 10 May 2014, 1 commit
  14. 05 May 2014, 2 commits
  15. 25 Apr 2014, 1 commit
  16. 08 Apr 2014, 1 commit
  17. 20 Mar 2014, 1 commit
    • audit: Add CONFIG_HAVE_ARCH_AUDITSYSCALL · 7a017721
      Authored by AKASHI Takahiro
      Currently AUDITSYSCALL has a long list of architecture dependencies:
             depends on AUDIT && (X86 || PARISC || PPC || S390 || IA64 || UML ||
                      SPARC64 || SUPERH || (ARM && AEABI && !OABI_COMPAT) || ALPHA)
      The purpose of this patch is to replace it with HAVE_ARCH_AUDITSYSCALL
      for simplicity.
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com> (arm)
      Acked-by: Richard Guy Briggs <rgb@redhat.com> (audit)
      Acked-by: Matt Turner <mattst88@gmail.com> (alpha)
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Signed-off-by: Eric Paris <eparis@redhat.com>
      7a017721
  18. 19 Mar 2014, 1 commit
  19. 14 Mar 2014, 1 commit
  20. 05 Mar 2014, 1 commit
    • x86/efi: Wire up CONFIG_EFI_MIXED · 7d453eee
      Authored by Matt Fleming
      Add the Kconfig option and bump the kernel header version so that boot
      loaders can check whether the handover code is available if they want.
      
      The xloadflags field in the bzImage header is also updated to reflect
      that the kernel supports both entry points by setting both of
      XLF_EFI_HANDOVER_32 and XLF_EFI_HANDOVER_64 when CONFIG_EFI_MIXED=y.
      XLF_CAN_BE_LOADED_ABOVE_4G is disabled so that the kernel text is
      guaranteed to be addressable with 32 bits.
      
      Note that no boot loaders should be using the bits set in xloadflags
      to decide which entry point to jump to. The entire scheme is based on
      the concept that 32-bit bootloaders always jump to ->handover_offset
      and 64-bit loaders always jump to ->handover_offset + 512. We set both
      bits merely to inform the boot loader that it's safe to use the native
      handover offset even if the machine type in the PE/COFF header claims
      otherwise (a loader-side sketch follows this entry).
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      7d453eee
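      For illustration, a loader-side sketch of how the advertised flags and
      offsets fit together (flag values follow Documentation/x86/boot.txt;
      the helper itself is an assumption, not taken from the patch):

          #include <stdint.h>

          #define XLF_EFI_HANDOVER_32 (1 << 2)
          #define XLF_EFI_HANDOVER_64 (1 << 3)

          /* Return the EFI handover entry offset for the running firmware
           * bitness, or 0 if the kernel does not advertise one. The 64-bit
           * entry always lives 512 bytes after the 32-bit one. */
          uint32_t efi_handover_entry(uint16_t xloadflags,
                                      uint32_t handover_offset,
                                      int firmware_is_64bit)
          {
                  if (firmware_is_64bit)
                          return (xloadflags & XLF_EFI_HANDOVER_64)
                                  ? handover_offset + 512 : 0;

                  return (xloadflags & XLF_EFI_HANDOVER_32)
                          ? handover_offset : 0;
          }

      As the changelog stresses, the flag only says the native-bitness entry
      is safe to use; it does not choose the entry point for the loader.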
  21. 28 Feb 2014, 2 commits
  22. 19 Feb 2014, 1 commit
  23. 12 Feb 2014, 2 commits
  24. 25 Jan 2014, 1 commit
  25. 24 Jan 2014, 2 commits
  26. 21 Jan 2014, 1 commit
  27. 16 Jan 2014, 1 commit
  28. 15 Jan 2014, 1 commit
  29. 14 Jan 2014, 1 commit