1. 04 Sep 2019 (1 commit)
    • dma-mapping: remove CONFIG_ARCH_NO_COHERENT_DMA_MMAP · 62fcee9a
      Authored by Christoph Hellwig
      CONFIG_ARCH_NO_COHERENT_DMA_MMAP is now functionally identical to
      !CONFIG_MMU, so remove the separate symbol.  The only difference is that
      arm did not set it for !CONFIG_MMU, but arm uses a separate dma mapping
      implementation including its own mmap method, which is handled by moving
      the CONFIG_MMU check in dma_can_mmap so that it only applies to the
      dma-direct case, just as the other ifdefs for it.
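
      As a hedged sketch (paraphrasing the description above rather than
      quoting the resulting source), dma_can_mmap() ends up shaped roughly
      like this, with the CONFIG_MMU test confined to the dma-direct branch:

      	bool dma_can_mmap(struct device *dev)
      	{
      		const struct dma_map_ops *ops = get_dma_ops(dev);

      		/* only the generic dma-direct mmap path needs an MMU;
      		 * arch-specific ops (e.g. arm's) bring their own method */
      		if (dma_is_direct(ops))
      			return IS_ENABLED(CONFIG_MMU) &&
      			       (dev_is_dma_coherent(dev) ||
      				IS_ENABLED(CONFIG_DMA_NONCOHERENT_MMAP));
      		return ops->mmap != NULL;
      	}
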
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>	# m68k
      62fcee9a
  2. 24 Jun 2019 (1 commit)
  3. 21 May 2019 (1 commit)
  4. 03 Apr 2019 (2 commits)
    • locking/rwsem: Remove rwsem-spinlock.c & use rwsem-xadd.c for all archs · 390a0c62
      Authored by Waiman Long
      Currently, we have two different implementations of rwsem:
      
       1) CONFIG_RWSEM_GENERIC_SPINLOCK (rwsem-spinlock.c)
       2) CONFIG_RWSEM_XCHGADD_ALGORITHM (rwsem-xadd.c)
      
      As we are going to switch to a single generic implementation based on
      rwsem-xadd.c, with no architecture-specific code needed, there is no
      point in keeping two different implementations of rwsem. In most cases, the
      performance of rwsem-spinlock.c will be worse. It also doesn't get all
      the performance tuning and optimizations that had been implemented in
      rwsem-xadd.c over the years.
      
      For simplicity, we are going to remove rwsem-spinlock.c and make all
      architectures use a single implementation of rwsem - rwsem-xadd.c.
      
      All references to RWSEM_GENERIC_SPINLOCK and RWSEM_XCHGADD_ALGORITHM
      in the code are removed.
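
      The xadd idea being standardized on can be modelled in user space; the
      sketch below is a toy illustration only (the names, the bias constant,
      and the trylock-only form are invented for the example, this is not
      rwsem-xadd.c): a single atomic add both registers a reader and reveals
      whether a writer holds the lock.

      	#include <stdatomic.h>
      	#include <stdbool.h>
      	#include <stdio.h>

      	#define WRITER_BIAS (1L << 30)	/* writers add this, readers add 1 */

      	struct toy_rwsem { atomic_long count; };

      	static bool toy_down_read_trylock(struct toy_rwsem *sem)
      	{
      		/* xadd fast path: the returned old value tells us whether
      		 * a writer was already in, with no separate test-and-set */
      		long old = atomic_fetch_add(&sem->count, 1);

      		if (old >= WRITER_BIAS) {
      			atomic_fetch_sub(&sem->count, 1);	/* back out */
      			return false;
      		}
      		return true;
      	}

      	static void toy_up_read(struct toy_rwsem *sem)
      	{
      		atomic_fetch_sub(&sem->count, 1);
      	}

      	static bool toy_down_write_trylock(struct toy_rwsem *sem)
      	{
      		long expected = 0;	/* free only if no readers/writers */

      		return atomic_compare_exchange_strong(&sem->count,
      						      &expected, WRITER_BIAS);
      	}

      	int main(void)
      	{
      		struct toy_rwsem sem = { .count = 0 };

      		printf("read  -> %d\n", toy_down_read_trylock(&sem));	/* 1 */
      		printf("write -> %d\n", toy_down_write_trylock(&sem));	/* 0 */
      		toy_up_read(&sem);
      		printf("write -> %d\n", toy_down_write_trylock(&sem));	/* 1 */
      		return 0;
      	}
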
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-c6x-dev@linux-c6x.org
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: linux-riscv@lists.infradead.org
      Cc: linux-um@lists.infradead.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: nios2-dev@lists.rocketboards.org
      Cc: openrisc@lists.librecores.org
      Cc: uclinux-h8-devel@lists.sourceforge.jp
      Link: https://lkml.kernel.org/r/20190322143008.21313-3-longman@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      390a0c62
    • arch/tlb: Clean up simple architectures · 6137fed0
      Authored by Peter Zijlstra
      For the architectures that do not implement their own tlb_flush() but
      do already use the generic mmu_gather, there are two options:
      
       1) the platform has an efficient flush_tlb_range() and
          asm-generic/tlb.h doesn't need any overrides at all.
      
       2) the platform lacks an efficient flush_tlb_range() and
          we select MMU_GATHER_NO_RANGE to minimize full invalidates.
      
      Convert all 'simple' architectures to one of these two forms.
      
      alpha:      has no range invalidate -> 2
      arc:        already uses flush_tlb_range() -> 1
      c6x:        has no range invalidate -> 2
      hexagon:    has an efficient flush_tlb_range() -> 1
                  (flush_tlb_mm() is in fact a full range invalidate,
                  so no need to shoot down everything)
      m68k:       has inefficient flush_tlb_range() -> 2
      microblaze: has no flush_tlb_range() -> 2
      mips:       has efficient flush_tlb_range() -> 1
                  (even though it currently seems to use flush_tlb_mm())
      nds32:      already uses flush_tlb_range() -> 1
      nios2:      has inefficient flush_tlb_range() -> 2
                  (no limit on range iteration)
      openrisc:   has inefficient flush_tlb_range() -> 2
                  (no limit on range iteration)
      parisc:     already uses flush_tlb_range() -> 1
      sparc32:    already uses flush_tlb_range() -> 1
      unicore32:  has inefficient flush_tlb_range() -> 2
                  (no limit on range iteration)
      xtensa:     has efficient flush_tlb_range() -> 1
      
      Note this also fixes a bug in the existing code for a number of
      platforms. Those platforms that did:
      
        tlb_end_vma() -> if (!full_mm) flush_tlb_*()
        tlb_flush -> if (full_mm) flush_tlb_mm()
      
      missed the case of shift_arg_pages(), which doesn't have @fullmm set
      and doesn't call into tlb_*vma(), but still frees page-tables and thus needs
      an invalidate. The new code handles this by detecting a non-empty
      range, and either issuing the matching range invalidate or a full
      invalidate, depending on the capabilities.
      
      No change in behavior intended.
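
      A hedged sketch of the generic form described above (paraphrasing the
      asm-generic/tlb.h behaviour; field names follow the series' description
      and are not quoted verbatim):

      	static inline void tlb_flush(struct mmu_gather *tlb)
      	{
      		if (tlb->fullmm || tlb->need_flush_all) {
      			flush_tlb_mm(tlb->mm);
      		} else if (tlb->end) {
      			/* a non-empty range was gathered; this catches
      			 * shift_arg_pages(), which frees page-tables
      			 * without @fullmm and without tlb_*vma() calls */
      			struct vm_area_struct vma = { .vm_mm = tlb->mm };

      			flush_tlb_range(&vma, tlb->start, tlb->end);
      		}
      	}

      Architectures selecting MMU_GATHER_NO_RANGE instead degrade any
      non-empty range to a single flush_tlb_mm().
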
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Piggin <npiggin@gmail.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NIngo Molnar <mingo@kernel.org>
      6137fed0
  5. 19 Feb 2019 (1 commit)
    • 32-bit userspace ABI: introduce ARCH_32BIT_OFF_T config option · 942fa985
      Authored by Yury Norov
      All new 32-bit architectures should have a 64-bit userspace off_t type,
      but existing architectures have 32-bit ones.
      
      To enforce the rule, a new config option, ARCH_32BIT_OFF_T, is added to
      arch/Kconfig; it defaults to disabled for new 32-bit architectures. All
      existing 32-bit architectures enable it explicitly.
      
      The new option affects force_o_largefile() behaviour: if userspace off_t
      is 64 bits wide, there is no reason to refuse to let users open big files.
      
      Note that even if an architecture has only a 64-bit off_t in the kernel
      (arc, c6x, h8300, hexagon, nios2, openrisc, and unicore32),
      a libc may use a 32-bit off_t, and therefore want to limit the file size
      to 4GB unless specified differently in the open flags.
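
      A small user-space illustration (not part of the patch; the file name is
      made up): with a 32-bit off_t, open() without O_LARGEFILE is refused
      with EOVERFLOW for files of 2GB or more, while building with
      -D_FILE_OFFSET_BITS=64 gives libc a 64-bit off_t and lifts the limit.

      	#include <errno.h>
      	#include <fcntl.h>
      	#include <stdio.h>
      	#include <string.h>

      	int main(void)
      	{
      		/* note: plain open(), no O_LARGEFILE in the flags */
      		int fd = open("/tmp/bigfile.bin", O_RDONLY);

      		if (fd < 0 && errno == EOVERFLOW)
      			printf("too big for 32-bit off_t: %s\n",
      			       strerror(errno));
      		else if (fd >= 0)
      			printf("opened fine (64-bit off_t or small file)\n");
      		return 0;
      	}
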
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Yury Norov <ynorov@marvell.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      942fa985
  6. 14 Dec 2018 (1 commit)
  7. 23 Nov 2018 (3 commits)
  8. 31 Oct 2018 (2 commits)
  9. 20 Sep 2018 (2 commits)
    • dma-mapping: consolidate the dma mmap implementations · 58b04406
      Authored by Christoph Hellwig
      The only functional difference (modulo a few missing fixes in the arch
      code) is that architectures without coherent caches need a hook to
      convert a virtual or dma address into a pfn, given that we don't have
      the kernel linear mapping available for the otherwise easy virt_to_page
      call.  As a side effect we can support mmap of the per-device coherent
      area even on architectures not providing the callback, and we make the
      previously dangerous default dma_common_mmap method actually safe for
      non-coherent architectures by rejecting it without the right helper.
      
      In addition to that we need a hook so that some architectures can
      override the protection bits when mmapping a dma coherent allocation.
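
      The consolidated helper can be sketched as follows (paraphrased, with a
      hypothetical function name; taking arch_dma_coherent_to_pfn as the name
      of the address-to-pfn hook is an assumption of this sketch):

      	static int common_mmap_sketch(struct device *dev,
      			struct vm_area_struct *vma, void *cpu_addr,
      			dma_addr_t dma_addr, size_t size)
      	{
      		unsigned long pfn;

      		if (dev_is_dma_coherent(dev))
      			/* linear mapping available: the easy route */
      			pfn = page_to_pfn(virt_to_page(cpu_addr));
      		else
      			/* no linear mapping: ask the arch for the pfn */
      			pfn = arch_dma_coherent_to_pfn(dev, cpu_addr,
      						       dma_addr);

      		return remap_pfn_range(vma, vma->vm_start,
      				       pfn + vma->vm_pgoff,
      				       vma_pages(vma) << PAGE_SHIFT,
      				       vma->vm_page_prot);
      	}
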
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Paul Burton <paul.burton@mips.com> # MIPS parts
      58b04406
    • dma-mapping: merge direct and noncoherent ops · bc3ec75d
      Authored by Christoph Hellwig
      All the cache maintenance is already stubbed out when not enabled,
      but merging the two allows us to nicely handle the case where
      cache maintenance is required for some devices, but not others.
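
      In sketch form (paraphrased, hypothetical function name), the merged
      direct path performs cache maintenance only for devices that need it,
      letting coherent and non-coherent devices share one set of dma_map_ops:

      	static dma_addr_t direct_map_page_sketch(struct device *dev,
      			struct page *page, unsigned long offset,
      			size_t size, enum dma_data_direction dir)
      	{
      		phys_addr_t phys = page_to_phys(page) + offset;

      		/* stubbed out to a no-op when maintenance is not enabled */
      		if (!dev_is_dma_coherent(dev))
      			arch_sync_dma_for_device(dev, phys, size, dir);

      		return phys_to_dma(dev, phys);
      	}
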
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Paul Burton <paul.burton@mips.com> # MIPS parts
      bc3ec75d
  10. 02 Aug 2018 (4 commits)
  11. 20 Jul 2018 (1 commit)
  12. 08 May 2018 (1 commit)
  13. 16 Mar 2018 (1 commit)
  14. 25 Sep 2017 (1 commit)
  15. 09 Sep 2017 (1 commit)
  16. 01 Jul 2017 (1 commit)
    • drivers: dma-mapping: allow dma_common_mmap() for NOMMU · 07c75d7a
      Authored by Vladimir Murzin
      Currently, the internals of dma_common_mmap() are compiled out if the
      build is done for either NOMMU or a target which explicitly says it does
      not have/want coherent DMA mmap. It turned out that dma_common_mmap()
      can be handy in a NOMMU setup (at least for ARM).

      This patch converts existing NOMMU targets to use
      ARCH_NO_COHERENT_DMA_MMAP, so that when the CONFIG_MMU check is gone
      from dma_common_mmap() their behaviour stays unchanged.

      ARM is not converted to ARCH_NO_COHERENT_DMA_MMAP because it 1) already
      has an mmap callback which can handle (to some extent) NOMMU and 2)
      already defines a dummy pgprot_noncached() for NOMMU builds.

      c6x and frv stay untouched since they already have ARCH_NO_COHERENT_DMA_MMAP.
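
      The compile-out described above, in sketch form (paraphrased, not the
      exact source):

      	int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
      			    void *cpu_addr, dma_addr_t dma_addr, size_t size)
      	{
      		int ret = -ENXIO;
      	#if defined(CONFIG_MMU) && !defined(CONFIG_ARCH_NO_COHERENT_DMA_MMAP)
      		/* ... set up vm_page_prot and remap_pfn_range() ... */
      	#endif
      		return ret;
      	}

      With existing NOMMU targets selecting ARCH_NO_COHERENT_DMA_MMAP, the
      CONFIG_MMU half of the guard can later be dropped without changing
      their behaviour: the remaining symbol does the gating.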
      
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
      07c75d7a
  17. 14 Jun 2017 (1 commit)
  18. 27 Apr 2017 (1 commit)
  19. 11 Apr 2017 (1 commit)
  20. 29 Nov 2016 (1 commit)
  21. 08 Jun 2016 (1 commit)
  22. 29 May 2016 (1 commit)
    • microblaze: Add <asm/hash.h> · 7b13277b
      Authored by George Spelvin
      Microblaze is an FPGA soft core that can be configured various ways.
      
      If it is configured without a multiplier, the standard __hash_32()
      will require a call to __mulsi3, which is a slow software loop.
      
      Instead, use a shift-and-add sequence for the constant multiply.
      GCC knows how to do this, but it's not as clever as some.
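
      The idea can be illustrated with a canonical-signed-digit decomposition
      (this particular sequence is an example for illustration, not the one
      in the patch): multiply by GOLDEN_RATIO_32 = 0x61C88647 using only
      shifts, adds and subtracts, exact modulo 2^32 thanks to unsigned
      wraparound.

      	#include <assert.h>
      	#include <stdint.h>
      	#include <stdio.h>

      	#define GOLDEN_RATIO_32 0x61C88647u

      	static uint32_t mul_golden_shift_add(uint32_t x)
      	{
      		/* 0x61C88647 = 2^31 - 2^29 + 2^25 - 2^22 + 2^19 + 2^15
      		 *            + 2^11 - 2^9  + 2^6  + 2^3  - 1          */
      		return (x << 31) - (x << 29) + (x << 25) - (x << 22)
      		     + (x << 19) + (x << 15) + (x << 11) - (x << 9)
      		     + (x << 6) + (x << 3) - x;
      	}

      	int main(void)
      	{
      		uint32_t x;

      		for (x = 0; x < 1000000; x++)
      			assert(mul_golden_shift_add(x) ==
      			       x * GOLDEN_RATIO_32);
      		printf("shift-and-add multiply matches\n");
      		return 0;
      	}
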
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Alistair Francis <alistair.francis@xilinx.com>
      Cc: Michal Simek <michal.simek@xilinx.com>
      7b13277b
  23. 21 May 2016 (1 commit)
    • lib/GCD.c: use binary GCD algorithm instead of Euclidean · fff7fb0b
      Authored by Zhaoxiu Zeng
      The binary GCD algorithm is based on the following facts:
      	1. If a and b are both even, then gcd(a,b) = 2 * gcd(a/2, b/2)
      	2. If a is even and b is odd, then gcd(a,b) = gcd(a/2, b)
      	3. If a and b are both odd, then gcd(a,b) = gcd((a-b)/2, b) = gcd((a+b)/2, b)
      
      Even on x86 machines with reasonable division hardware, the binary
      algorithm runs about 25% faster (80% the execution time) than the
      division-based Euclidean algorithm.
      
      On platforms like Alpha and ARMv6 where division is a function call to
      emulation code, it's even more significant.
      
      There are two variants of the code here, depending on whether a fast
      __ffs (find least significant set bit) instruction is available.  This
      allows the unpredictable branches in the bit-at-a-time shifting loop to
      be eliminated.
      
      If fast __ffs is not available, the "even/odd" GCD variant is used.
      
      I use the following code to benchmark:
      
      	#include <stdio.h>
      	#include <stdlib.h>
      	#include <stdint.h>
      	#include <string.h>
      	#include <time.h>
      	#include <unistd.h>
      
      	#define swap(a, b) \
      		do { \
      			a ^= b; \
      			b ^= a; \
      			a ^= b; \
      		} while (0)
      
      	unsigned long gcd0(unsigned long a, unsigned long b)
      	{
      		unsigned long r;
      
      		if (a < b) {
      			swap(a, b);
      		}
      
      		if (b == 0)
      			return a;
      
      		while ((r = a % b) != 0) {
      			a = b;
      			b = r;
      		}
      
      		return b;
      	}
      
      	unsigned long gcd1(unsigned long a, unsigned long b)
      	{
      		unsigned long r = a | b;
      
      		if (!a || !b)
      			return r;
      
      		b >>= __builtin_ctzl(b);
      
      		for (;;) {
      			a >>= __builtin_ctzl(a);
      			if (a == b)
      				return a << __builtin_ctzl(r);
      
      			if (a < b)
      				swap(a, b);
      			a -= b;
      		}
      	}
      
      	unsigned long gcd2(unsigned long a, unsigned long b)
      	{
      		unsigned long r = a | b;
      
      		if (!a || !b)
      			return r;
      
      		r &= -r;
      
      		while (!(b & r))
      			b >>= 1;
      
      		for (;;) {
      			while (!(a & r))
      				a >>= 1;
      			if (a == b)
      				return a;
      
      			if (a < b)
      				swap(a, b);
      			a -= b;
      			a >>= 1;
      			if (a & r)
      				a += b;
      			a >>= 1;
      		}
      	}
      
      	unsigned long gcd3(unsigned long a, unsigned long b)
      	{
      		unsigned long r = a | b;
      
      		if (!a || !b)
      			return r;
      
      		b >>= __builtin_ctzl(b);
      		if (b == 1)
      			return r & -r;
      
      		for (;;) {
      			a >>= __builtin_ctzl(a);
      			if (a == 1)
      				return r & -r;
      			if (a == b)
      				return a << __builtin_ctzl(r);
      
      			if (a < b)
      				swap(a, b);
      			a -= b;
      		}
      	}
      
      	unsigned long gcd4(unsigned long a, unsigned long b)
      	{
      		unsigned long r = a | b;
      
      		if (!a || !b)
      			return r;
      
      		r &= -r;
      
      		while (!(b & r))
      			b >>= 1;
      		if (b == r)
      			return r;
      
      		for (;;) {
      			while (!(a & r))
      				a >>= 1;
      			if (a == r)
      				return r;
      			if (a == b)
      				return a;
      
      			if (a < b)
      				swap(a, b);
      			a -= b;
      			a >>= 1;
      			if (a & r)
      				a += b;
      			a >>= 1;
      		}
      	}
      
      	static unsigned long (*gcd_func[])(unsigned long a, unsigned long b) = {
      		gcd0, gcd1, gcd2, gcd3, gcd4,
      	};
      
      	#define TEST_ENTRIES (sizeof(gcd_func) / sizeof(gcd_func[0]))
      
      	#if defined(__x86_64__)
      
      	#define rdtscll(val) do { \
      		unsigned long __a,__d; \
      		__asm__ __volatile__("rdtsc" : "=a" (__a), "=d" (__d)); \
      		(val) = ((unsigned long long)__a) | (((unsigned long long)__d)<<32); \
      	} while(0)
      
      	static unsigned long long benchmark_gcd_func(unsigned long (*gcd)(unsigned long, unsigned long),
      								unsigned long a, unsigned long b, unsigned long *res)
      	{
      		unsigned long long start, end;
      		unsigned long long ret;
      		unsigned long gcd_res;
      
      		rdtscll(start);
      		gcd_res = gcd(a, b);
      		rdtscll(end);
      
      		if (end >= start)
      			ret = end - start;
      		else
      			ret = ~0ULL - start + 1 + end;
      
      		*res = gcd_res;
      		return ret;
      	}
      
      	#else
      
      	static inline struct timespec read_time(void)
      	{
      		struct timespec time;
      		clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time);
      		return time;
      	}
      
      	static inline unsigned long long diff_time(struct timespec start, struct timespec end)
      	{
      		struct timespec temp;
      
      		if ((end.tv_nsec - start.tv_nsec) < 0) {
      			temp.tv_sec = end.tv_sec - start.tv_sec - 1;
      			temp.tv_nsec = 1000000000ULL + end.tv_nsec - start.tv_nsec;
      		} else {
      			temp.tv_sec = end.tv_sec - start.tv_sec;
      			temp.tv_nsec = end.tv_nsec - start.tv_nsec;
      		}
      
      		return temp.tv_sec * 1000000000ULL + temp.tv_nsec;
      	}
      
      	static unsigned long long benchmark_gcd_func(unsigned long (*gcd)(unsigned long, unsigned long),
      								unsigned long a, unsigned long b, unsigned long *res)
      	{
      		struct timespec start, end;
      		unsigned long gcd_res;
      
      		start = read_time();
      		gcd_res = gcd(a, b);
      		end = read_time();
      
      		*res = gcd_res;
      		return diff_time(start, end);
      	}
      
      	#endif
      
      	static inline unsigned long get_rand()
      	{
      		if (sizeof(long) == 8)
      			return (unsigned long)rand() << 32 | rand();
      		else
      			return rand();
      	}
      
      	int main(int argc, char **argv)
      	{
      		unsigned int seed = time(0);
      		int loops = 100;
      		int repeats = 1000;
      		unsigned long (*res)[TEST_ENTRIES];
      		unsigned long long elapsed[TEST_ENTRIES];
      		int i, j, k;
      
      		for (;;) {
      			int opt = getopt(argc, argv, "n:r:s:");
      			/* End condition always first */
      			if (opt == -1)
      				break;
      
      			switch (opt) {
      			case 'n':
      				loops = atoi(optarg);
      				break;
      			case 'r':
      				repeats = atoi(optarg);
      				break;
      			case 's':
      				seed = strtoul(optarg, NULL, 10);
      				break;
      			default:
      				/* You won't actually get here. */
      				break;
      			}
      		}
      
      		res = malloc(sizeof(unsigned long) * TEST_ENTRIES * loops);
      		memset(elapsed, 0, sizeof(elapsed));
      
      		srand(seed);
      		for (j = 0; j < loops; j++) {
      			unsigned long a = get_rand();
      			/* Do we have args? */
      			unsigned long b = argc > optind ? strtoul(argv[optind], NULL, 10) : get_rand();
      			unsigned long long min_elapsed[TEST_ENTRIES];
      			for (k = 0; k < repeats; k++) {
      				for (i = 0; i < TEST_ENTRIES; i++) {
      					unsigned long long tmp = benchmark_gcd_func(gcd_func[i], a, b, &res[j][i]);
      					if (k == 0 || min_elapsed[i] > tmp)
      						min_elapsed[i] = tmp;
      				}
      			}
      			for (i = 0; i < TEST_ENTRIES; i++)
      				elapsed[i] += min_elapsed[i];
      		}
      
      		for (i = 0; i < TEST_ENTRIES; i++)
      			printf("gcd%d: elapsed %llu\n", i, elapsed[i]);
      
      		k = 0;
      		srand(seed);
      		for (j = 0; j < loops; j++) {
      			unsigned long a = get_rand();
      			unsigned long b = argc > optind ? strtoul(argv[optind], NULL, 10) : get_rand();
      			for (i = 1; i < TEST_ENTRIES; i++) {
      				if (res[j][i] != res[j][0])
      					break;
      			}
      			if (i < TEST_ENTRIES) {
      				if (k == 0) {
      					k = 1;
      					fprintf(stderr, "Error:\n");
      				}
      				fprintf(stderr, "gcd(%lu, %lu): ", a, b);
      				for (i = 0; i < TEST_ENTRIES; i++)
      					fprintf(stderr, "%ld%s", res[j][i], i < TEST_ENTRIES - 1 ? ", " : "\n");
      			}
      		}
      
      		if (k == 0)
      			fprintf(stderr, "PASS\n");
      
      		free(res);
      
      		return 0;
      	}
      
      Compiled with "-O2", on "VirtualBox 4.4.0-22-generic #38-Ubuntu x86_64" got:
      
        zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
        gcd0: elapsed 10174
        gcd1: elapsed 2120
        gcd2: elapsed 2902
        gcd3: elapsed 2039
        gcd4: elapsed 2812
        PASS
        zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
        gcd0: elapsed 9309
        gcd1: elapsed 2280
        gcd2: elapsed 2822
        gcd3: elapsed 2217
        gcd4: elapsed 2710
        PASS
        zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
        gcd0: elapsed 9589
        gcd1: elapsed 2098
        gcd2: elapsed 2815
        gcd3: elapsed 2030
        gcd4: elapsed 2718
        PASS
        zhaoxiuzeng@zhaoxiuzeng-VirtualBox:~/develop$ ./gcd -r 500000 -n 10
        gcd0: elapsed 9914
        gcd1: elapsed 2309
        gcd2: elapsed 2779
        gcd3: elapsed 2228
        gcd4: elapsed 2709
        PASS
      
      [akpm@linux-foundation.org: avoid #defining a CONFIG_ variable]
      Signed-off-by: Zhaoxiu Zeng <zhaoxiu.zeng@gmail.com>
      Signed-off-by: George Spelvin <linux@horizon.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fff7fb0b
  24. 09 Mar 2016 (1 commit)
  25. 21 Jan 2016 (1 commit)
    • dma-mapping: always provide the dma_map_ops based implementation · e1c7e324
      Authored by Christoph Hellwig
      Move the generic implementation to <linux/dma-mapping.h> now that all
      architectures support it and remove the HAVE_DMA_ATTR Kconfig symbol now
      that everyone supports them.
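
      In sketch form (paraphrased, hypothetical wrapper name), the generic
      implementation is just an indirection through the per-device ops, which
      every architecture now provides:

      	static inline dma_addr_t map_single_sketch(struct device *dev,
      			void *ptr, size_t size, enum dma_data_direction dir,
      			unsigned long attrs)
      	{
      		const struct dma_map_ops *ops = get_dma_ops(dev);

      		/* the ops table replaces per-arch inline implementations */
      		return ops->map_page(dev, virt_to_page(ptr),
      				     offset_in_page(ptr), size, dir, attrs);
      	}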
      
      [valentinrothberg@gmail.com: remove leftovers in Kconfig]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Helge Deller <deller@gmx.de>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Signed-off-by: Valentin Rothberg <valentinrothberg@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e1c7e324
  26. 17 Jan 2016 (1 commit)
  27. 14 Dec 2014 (1 commit)
  28. 27 Oct 2014 (1 commit)
  29. 09 Sep 2014 (2 commits)
  30. 19 Jul 2014 (1 commit)
  31. 07 Apr 2014 (1 commit)