1. 02 July 2016, 1 commit
    • MIPS: Fix possible corruption of cache mode by mprotect. · 6d037de9
      Authored by Ralf Baechle
      The following testcase may result in page table entries with an invalid
      CCA field being generated:
      
      #include <fcntl.h>
      #include <signal.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>
      #include <unistd.h>
      
      /* BINDSTACK_SIZE is not shown in the original testcase; any page-aligned
         size, e.g. 64 KiB, will do. */
      #define BINDSTACK_SIZE (64 * 1024)
      
      static void *bindstack;
      
      static int sysrqfd;
      
      static void protect_low(int protect)
      {
      	mprotect(bindstack, BINDSTACK_SIZE, protect);
      }
      
      static void sigbus_handler(int signal, siginfo_t * info, void *context)
      {
      	void *addr = info->si_addr;
      
      	write(sysrqfd, "x", 1);
      
      	printf("sigbus, fault address %p (should not happen, but might)\n",
      	       addr);
      	abort();
      }
      
      static void run_bind_test(void)
      {
      	unsigned int *p = bindstack;
      
      	p[0] = 0xf001f001;
      
      	write(sysrqfd, "x", 1);
      
      	/* Set trap on access to p[0] */
      	protect_low(PROT_NONE);
      
      	write(sysrqfd, "x", 1);
      
      	/* Clear trap on access to p[0] */
      	protect_low(PROT_READ | PROT_WRITE | PROT_EXEC);
      
      	write(sysrqfd, "x", 1);
      
      	/* Check the contents of p[0] */
      	if (p[0] != 0xf001f001) {
      		write(sysrqfd, "x", 1);
      
      		/* Reached, but shouldn't be */
      		printf("badness, shouldn't happen but does\n");
      		abort();
      	}
      }
      
      int main(void)
      {
      	struct sigaction sa;
      
      	sysrqfd = open("/proc/sysrq-trigger", O_WRONLY);
      
      	if (sigprocmask(SIG_BLOCK, NULL, &sa.sa_mask)) {
      		perror("sigprocmask");
      		return 0;
      	}
      
      	sa.sa_sigaction = sigbus_handler;
      	sa.sa_flags = SA_SIGINFO | SA_NODEFER | SA_RESTART;
      	if (sigaction(SIGBUS, &sa, NULL)) {
      		perror("sigaction");
      		return 0;
      	}
      
      	bindstack = mmap(NULL,
      			 BINDSTACK_SIZE,
      			 PROT_READ | PROT_WRITE | PROT_EXEC,
      			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      	if (bindstack == MAP_FAILED) {
      		perror("mmap bindstack");
      		return 0;
      	}
      
      	printf("bindstack: %p\n", bindstack);
      
      	run_bind_test();
      
      	printf("done\n");
      
      	return 0;
      }
      
      There are multiple ingredients for this:
      
       1) PAGE_NONE is defined to _CACHE_CACHABLE_NONCOHERENT, which is CCA 3
          on all platforms except SB1 where it's CCA 5.
       2) _page_cachable_default must have bits set which are not set in
          _CACHE_CACHABLE_NONCOHERENT.
       3) Either the defective version of pte_modify for XPA or the standard
          version must be in use.  However pte_modify for the 36 bit address
          space support is not affected.
      
      In that case, additional bits in the final cache mode may generate an
      invalid value for the CCA field.  On the R10000 system where this was
      tracked down, for example, a CCA of 7 was observed, which is Uncached
      Accelerated.
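
      As an illustration, the bad merge can be reduced to a few lines of plain
      C.  The mask and the two CCA values below are assumptions chosen to match
      the scenario above, not the kernel's real definitions:

      #include <stdio.h>

      #define CCA_MASK				0x7	/* 3-bit cache attribute field */
      #define CCA_CACHABLE_NONCOHERENT		0x3	/* used by the old PAGE_NONE */
      #define CCA_CACHABLE_COHERENT		0x5	/* e.g. a coherent default CCA */

      int main(void)
      {
      	/* The PROT_NONE pte carries the noncoherent CCA ... */
      	unsigned int old_cca = CCA_CACHABLE_NONCOHERENT;
      	/* ... and the new protection bits carry the default CCA. */
      	unsigned int new_cca = CCA_CACHABLE_COHERENT;

      	/*
      	 * A pte_modify() that does not clear the cache attribute bits
      	 * before merging in the new protection OR's the two together:
      	 */
      	unsigned int merged = (old_cca | new_cca) & CCA_MASK;

      	printf("CCA %u | CCA %u -> CCA %u (Uncached Accelerated)\n",
      	       old_cca, new_cca, merged);
      	return 0;
      }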
      
      Fixed by:
      
       1) Using the proper CCA mode for PAGE_NONE just like for all the other
          PAGE_* pte/pmd bits.
       2) Fixing the two affected variants of pte_modify.
      
      Further code inspection also shows the same issue to exist in pmd_modify
      which would affect huge page systems.
      
      The issue in pte_modify was tracked down by Alastair Bridgewater; the
      PAGE_NONE and pmd_modify issues were found by me.
      
      The history of this goes back beyond Linus' git history.  Chris Dearman's
      commit 35133692 ("[MIPS] Allow setting of
      the cache attribute at run time.") missed the opportunity to fix this
      but it was originally introduced in lmo commit
      d523832cf12007b3242e50bb77d0c9e63e0b6518 ("Missing from last commit.")
      and 32cc38229ac7538f2346918a09e75413e8861f87 ("New configuration option
      CONFIG_MIPS_UNCACHED.")
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      Reported-by: Alastair Bridgewater <alastair.bridgewater@gmail.com>
  2. 20 May 2016, 1 commit
    • arch: fix has_transparent_hugepage() · fd8cfd30
      Authored by Hugh Dickins
      I've just discovered that the useful-sounding has_transparent_hugepage()
      is actually an architecture-dependent minefield: on some arches it only
      builds if CONFIG_TRANSPARENT_HUGEPAGE=y, on others it's also there when
      not, but on some of those (arm and arm64) it then gives the wrong
      answer; and on mips alone it's marked __init, which would crash if
      called later (but so far it has not been called later).
      
      Straighten this out: make it available to all configs, with a sensible
      default in asm-generic/pgtable.h, removing its definitions from those
      arches (arc, arm, arm64, sparc, tile) which are served by the default,
      adding #define has_transparent_hugepage has_transparent_hugepage to
      those (mips, powerpc, s390, x86) which need to override the default at
      runtime, and removing the __init from mips (but maybe that kind of code
      should be avoided after init: set a static variable the first time it's
      called).
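
      A minimal sketch of the resulting pattern follows; cpu_supports_thp() is
      a hypothetical stand-in for an architecture's runtime probe, and the real
      headers differ in detail:

      /* arch header: decide at runtime, then publish the override marker */
      static inline int cpu_supports_thp(void) { return 1; }	/* hypothetical */

      static inline int has_transparent_hugepage(void)
      {
      	return cpu_supports_thp();
      }
      #define has_transparent_hugepage has_transparent_hugepage

      /* asm-generic/pgtable.h style default, used only when no override exists */
      #ifndef has_transparent_hugepage
      #ifdef CONFIG_TRANSPARENT_HUGEPAGE
      #define has_transparent_hugepage() 1
      #else
      #define has_transparent_hugepage() 0
      #endif
      #endif
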
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>		[arch/arc]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>	[arch/s390]
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 13 May 2016, 4 commits
    • MIPS: mm: Fix MIPS32 36b physical addressing (alchemy, netlogic) · 7b2cb64f
      Authored by Paul Burton
      There are 2 distinct cases in which a kernel for a MIPS32 CPU
      (CONFIG_CPU_MIPS32=y) may use 64 bit physical addresses
      (CONFIG_PHYS_ADDR_T_64BIT=y):
      
        - 36 bit physical addressing as used by RMI Alchemy & Netlogic XLP/XLR
          CPUs.
      
        - MIPS32r5 eXtended Physical Addressing (XPA).
      
      These 2 cases are distinct in that they require different behaviour from
      the kernel - the EntryLo registers have different formats. Until Linux
      v4.1 we only supported the first case, with code conditional upon the 2
      aforementioned Kconfig variables being set. Commit c5b36783 ("MIPS:
      Add support for XPA.") added support for the second case, but did so by
      modifying the code that existed for the first case rather than treating
      the 2 cases as distinct. Since the EntryLo registers have different
      formats this breaks the 36 bit Alchemy/XLP/XLR case. Fix this by
      splitting the 2 cases, with XPA cases now being conditional upon
      CONFIG_XPA and the non-XPA case matching the code as it existed prior to
      commit c5b36783 ("MIPS: Add support for XPA.").
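
      The resulting structure, sketched here with placeholder comments rather
      than the kernel's real conversion code:

      #if defined(CONFIG_XPA)
      	/* MIPS32r5 XPA: the high physical address bits are written to the
      	 * upper half of EntryLo via MTHC0, so the PTE layout differs. */
      #elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
      	/* 36 bit Alchemy / Netlogic XLP/XLR: the pre-v4.1 layout, with the
      	 * extra physical address bits folded directly into EntryLo. */
      #else
      	/* Plain 32 bit physical addressing. */
      #endif
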
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reported-by: Manuel Lauss <manuel.lauss@gmail.com>
      Tested-by: Manuel Lauss <manuel.lauss@gmail.com>
      Fixes: c5b36783 ("MIPS: Add support for XPA.")
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: stable@vger.kernel.org # v4.1+
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13119/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: mm: Standardise on _PAGE_NO_READ, drop _PAGE_READ · 780602d7
      Authored by Paul Burton
      Ever since support for RI/XI was implemented by commit 6dd9344c
      ("MIPS: Implement Read Inhibit/eXecute Inhibit") we've had a mixture of
      _PAGE_READ & _PAGE_NO_READ bits. Rather than keep both around, switch
      away from using _PAGE_READ to determine page presence & instead invert
      the use to _PAGE_NO_READ. Wherever we formerly had no definition for
      _PAGE_NO_READ, change what was _PAGE_READ to _PAGE_NO_READ. The end
      result is that we consistently use _PAGE_NO_READ to determine whether a
      page is readable, regardless of whether RI/XI is implemented.
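
      In effect the readability test is inverted.  A sketch with a made-up bit
      position (not the kernel's real layout):

      #define _PAGE_NO_READ	(1UL << 5)	/* position illustrative only */

      static inline int pte_readable(unsigned long pteval)
      {
      	/* before: return pteval & _PAGE_READ; */
      	return !(pteval & _PAGE_NO_READ);
      }
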
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Alex Smith <alex.smith@imgtec.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/13116/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Loongson: Add Loongson-3A R2 basic support · b2edcfc8
      Authored by Huacai Chen
      Loongson-3 CPU family:
      
      Code-name       Brand-name       PRId
      Loongson-3A R1  Loongson-3A1000  0x6305
      Loongson-3A R2  Loongson-3A2000  0x6308
      Loongson-3B R1  Loongson-3B1000  0x6306
      Loongson-3B R2  Loongson-3B1500  0x6307
      
      Features of R2 revision of Loongson-3A:
      
        - Primary cache includes I-Cache, D-Cache and V-Cache (Victim Cache).
        - I-Cache, D-Cache and V-Cache are 16-way set-associative, linesize is
           64 bytes.
        - 64 entries of VTLB (classic TLB), 1024 entries of FTLB (8-way
           set-associative).
        - Supports DSP/DSPv2 instructions, UserLocal register and Read-Inhibit/
           Execute-Inhibit.
      
      [ralf@linux-mips.org: Resolved merge conflicts.]
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: Aurelien Jarno <aurelien@aurel32.net>
      Cc: Steven J. Hill <sjhill@realitydiluted.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/12751/
      Patchwork: https://patchwork.linux-mips.org/patch/13136/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Sync icache & dcache in set_pte_at · 37d22a0d
      Authored by Paul Burton
      It's possible for pages to become visible prior to update_mmu_cache
      running if a thread within the same address space preempts the current
      thread or runs simultaneously on another CPU. That is, the following
      scenario is possible:
      
          CPU0                            CPU1
      
          write to page
          flush_dcache_page
          flush_icache_page
          set_pte_at
                                          map page
          update_mmu_cache
      
      If CPU1 maps the page in between CPU0's set_pte_at, which marks it valid
      & visible, and update_mmu_cache, where the dcache flush occurs, then
      CPU1's icache will fill from stale data (unless it fills from the dcache,
      in which case all is good, but most MIPS CPUs don't have this property).
      Commit 4d46a67a ("MIPS: Fix race condition in lazy cache flushing.")
      attempted to fix that by performing the dcache flush in
      flush_icache_page such that it occurs before the set_pte_at call makes
      the page visible. However it has the problem that not all code that
      writes to pages exposed to userland call flush_icache_page. There are
      many callers of set_pte_at under mm/ and only 2 of them do call
      flush_icache_page. Thus the race window between a page becoming visible
      & being coherent between the icache & dcache remains open in some cases.
      
      To illustrate some of the cases, a WARN was added to __update_cache with
      this patch applied that triggered in cases where a page about to be
      flushed from the dcache was not the last page provided to
      flush_icache_page. That is, backtraces were obtained for cases in which
      the race window is left open without this patch. The 2 standout examples
      follow.
      
      When forking a process:
      
      [   15.271842] [<80417630>] __update_cache+0xcc/0x188
      [   15.277274] [<80530394>] copy_page_range+0x56c/0x6ac
      [   15.282861] [<8042936c>] copy_process.part.54+0xd40/0x17ac
      [   15.289028] [<80429f80>] do_fork+0xe4/0x420
      [   15.293747] [<80413808>] handle_sys+0x128/0x14c
      
      When exec'ing an ELF binary:
      
      [   14.445964] [<80417630>] __update_cache+0xcc/0x188
      [   14.451369] [<80538d88>] move_page_tables+0x414/0x498
      [   14.457075] [<8055d848>] setup_arg_pages+0x220/0x318
      [   14.462685] [<805b0f38>] load_elf_binary+0x530/0x12a0
      [   14.468374] [<8055ec3c>] search_binary_handler+0xbc/0x214
      [   14.474444] [<8055f6c0>] do_execveat_common+0x43c/0x67c
      [   14.480324] [<8055f938>] do_execve+0x38/0x44
      [   14.485137] [<80413808>] handle_sys+0x128/0x14c
      
      These code paths write into a page, call flush_dcache_page then call
      set_pte_at without flush_icache_page in between. The end result is that
      the icache can become corrupted & userland processes may execute
      unexpected or invalid code, typically resulting in a reserved
      instruction exception, a trap or a segfault.
      
      Fix this race condition fully by performing any cache maintenance
      required to keep the icache & dcache in sync in set_pte_at, before the
      page is made valid. This has the added bonus of ensuring the cache
      maintenance always happens in one location, rather than being duplicated
      in flush_icache_page & update_mmu_cache. It also matches the way other
      architectures solve the same problem (see arm, ia64 & powerpc).
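
      The shape of the fix, as a sketch with stand-in types and helpers; the
      exact condition and helper names used by the kernel differ:

      typedef unsigned long pte_t;
      struct mm_struct;

      static int pte_present(pte_t pte) { return pte & 1; }		/* illustrative */
      static int pte_executable(pte_t pte) { return !(pte & 2); }	/* illustrative */
      /* hypothetical helper: writeback the dcache & invalidate the icache */
      static void sync_icache_dcache(pte_t pte) { (void)pte; }
      static void set_pte(pte_t *ptep, pte_t pte) { *ptep = pte; }

      static void set_pte_at(struct mm_struct *mm, unsigned long addr,
      		       pte_t *ptep, pte_t pte)
      {
      	(void)mm; (void)addr;
      	/* Do the cache maintenance first ... */
      	if (pte_present(pte) && pte_executable(pte))
      		sync_icache_dcache(pte);
      	/* ... and only then publish the PTE, making the page visible. */
      	set_pte(ptep, pte);
      }
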
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reported-by: Ionela Voinescu <ionela.voinescu@imgtec.com>
      Cc: Lars Persson <lars.persson@axis.com>
      Fixes: 4d46a67a ("MIPS: Fix race condition in lazy cache flushing.")
      Cc: Steven J. Hill <sjhill@realitydiluted.com>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Cc: stable <stable@vger.kernel.org> # v4.1+
      Patchwork: https://patchwork.linux-mips.org/patch/12722/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  4. 09 May 2016, 2 commits
  5. 24 January 2016, 1 commit
  6. 16 January 2016, 1 commit
    • mips, thp: remove infrastructure for handling splitting PMDs · b2787370
      Authored by Kirill A. Shutemov
      With new refcounting we don't need to mark PMDs splitting.  Let's drop
      code to handle this.
      
      pmdp_splitting_flush() is not needed either: on splitting a PMD we will do
      pmdp_clear_flush() + set_pte_at().  pmdp_clear_flush() will do IPI as
      needed for fast_gup.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 03 September 2015, 1 commit
  8. 05 August 2015, 1 commit
    • MIPS: Make set_pte() SMP safe. · 46011e6e
      Authored by David Daney
      On MIPS the GLOBAL bit of the PTE must have the same value in any
      aligned pair of PTEs.  These pairs of PTEs are referred to as
      "buddies".  In an SMP system it is possible for two CPUs to be calling
      set_pte() on adjacent PTEs at the same time.  There is a race between
      setting the PTE and a different CPU setting the GLOBAL bit in its
      buddy PTE.
      
      This race can be observed when multiple CPUs are executing
      vmap()/vfree() at the same time.
      
      Make setting the buddy PTE's GLOBAL bit an atomic operation to close
      the race condition.
      
      The case of CONFIG_64BIT_PHYS_ADDR && CONFIG_CPU_MIPS32 is *not*
      handled.
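
      The idea, sketched with C11 atomics and a made-up bit position; the
      kernel does the equivalent with an LL/SC or cmpxchg loop on the buddy
      PTE:

      #include <stdatomic.h>

      #define _PAGE_GLOBAL	(1UL << 0)	/* bit position illustrative only */

      /* OR _PAGE_GLOBAL into the buddy PTE without losing a concurrent update */
      static void make_buddy_global(_Atomic unsigned long *buddy)
      {
      	unsigned long old = atomic_load(buddy);
      	unsigned long new;

      	do {
      		new = old | _PAGE_GLOBAL;
      	} while (!atomic_compare_exchange_weak(buddy, &old, new));
      }
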
      Signed-off-by: David Daney <david.daney@cavium.com>
      Cc: <stable@vger.kernel.org>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/10835/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  9. 25 June 2015, 1 commit
  10. 25 March 2015, 1 commit
  11. 20 March 2015, 1 commit
  12. 18 March 2015, 1 commit
    • MIPS: Rearrange PTE bits into fixed positions. · be0c37c9
      Authored by Steven J. Hill
      This patch rearranges the PTE bits into fixed positions for R2
      and later cores. In the past, the TLB handling code did runtime
      checking of RI/XI and adjusted the shifts and rotates in order
      to fit the largest PFN value into the PTE. The checking now
      occurs when building the TLB handler, thus eliminating those
      checks. These new arrangements also define the largest possible
      PFN value that can fit in the PTE. HUGE page support is only
      available for 64-bit cores. Layouts of the PTE bits are now:
      
         64-bit, R1 or earlier:     CCC D V G [S H] M A W R P
         32-bit, R1 or earlier:     CCC D V G M A W R P
         64-bit, R2 or later:       CCC D V G RI/R XI [S H] M A W P
         32-bit, R2 or later:       CCC D V G RI/R XI M A W P
      
      [ralf@linux-mips.org: Fix another build error *rant* *rant*]
      Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/9353/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  13. 20 February 2015, 1 commit
  14. 16 February 2015, 3 commits
  15. 11 February 2015, 1 commit
  16. 25 November 2014, 3 commits
  17. 22 September 2014, 1 commit
  18. 26 August 2014, 1 commit
    • MIPS: Remove race window in page fault handling · 4b34cdde
      Authored by Lars Persson
      Multicore MIPSes without I/D hardware coherency suffered from a race
      condition in the page fault handler. The page table entry was
      published before any pending lazy D-cache flush was committed, hence
      it allowed execution of stale page cache data by other VPEs in the
      system.
      
      To make the cache handling safe we need to perform flushing already in
      the set_pte_at function. MIPSes without coherent I-caches can get a
      small increase in flushes due to the unavailability of the execute
      flag in set_pte_at.
      
      [ralf@linux-mips.org: outlining set_pte_at() saves a good k in a test
      build, so I moved its definition from pgtable.h to cache.c.]
      Signed-off-by: Lars Persson <larper@axis.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/7511/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  19. 19 August 2014, 1 commit
    • MIPS: Remove race window in page fault handling · 2a4a8b1e
      Authored by Lars Persson
      Multicore MIPSes without I/D hardware coherency suffered from a race
      condition in the page fault handler. The page table entry was
      published before any pending lazy D-cache flush was committed, hence
      it allowed execution of stale page cache data by other VPEs in the
      system.
      
      To make the cache handling safe we need to perform flushing already in
      the set_pte_at function. MIPSes without coherent I-caches can get a
      small increase in flushes due to the unavailability of the execute
      flag in set_pte_at.
      
      [ralf@linux-mips.org: outlining set_pte_at() saves a good k in a test
      build, so I moved its definition from pgtable.h to cache.c.]
      Signed-off-by: Lars Persson <larper@axis.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/7511/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  20. 02 August 2014, 1 commit
    • MIPS: mm: Use the Hardware Page Table Walker if the core supports it · f1014d1b
      Authored by Markos Chandras
      The Hardware Page Table Walker aims to speed up TLB refill exceptions
      by handling them at the hardware level instead of having a software
      TLB refill handler. However, a TLB refill exception can still be
      thrown in certain cases such as synchronous exceptions, or address
      translation or memory errors during the HTW operation. As a result,
      the HTW must not be considered a complete replacement for the TLB
      refill software handler, but rather a fast path for it.
      For HTW to work, the PWBase register must contain the task's page
      global directory address so the HTW will kick in on TLB refill
      exceptions.
      
      Due to HTW being a separate engine embedded deep in the CPU pipeline,
      we need to restart the HTW every time a PTE changes, to avoid the HTW
      fetching an old entry from the page tables. It's also necessary to
      restart the HTW on context switches to prevent it from fetching a
      page from the previous process. Finally, since the HTW uses the
      entryhi register to write the translations to the TLB, it's necessary
      to stop the HTW whenever entryhi changes (e.g. for TLB probe
      operations) and enable it again afterwards.
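
      The usage pattern then looks roughly like the sketch below.  The function
      itself is made up for illustration; read_c0_entryhi(), write_c0_entryhi()
      and tlb_probe() are existing MIPS register accessors, and htw_stop() /
      htw_start() are the helpers introduced for this purpose:

      static void probe_tlb_entry(unsigned long address)
      {
      	unsigned long old_entryhi = read_c0_entryhi();

      	htw_stop();			/* pause the walker */
      	write_c0_entryhi(address);	/* entryhi now holds the probe key */
      	tlb_probe();
      	/* ... inspect Index / EntryLo here ... */
      	write_c0_entryhi(old_entryhi);	/* restore the live ASID/VA */
      	htw_start();			/* the walker may run again */
      }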
      
      == Performance ==
      
      The following trivial test was used to measure the performance of the
      HTW. Using the same root filesystem, the following command was used
      to measure the number of tlb refill handler executions with and
      without (using 'nohtw' kernel parameter) HTW support.  The kernel was
      modified to use a scratch register as a counter for the TLB refill
      exceptions.
      
      find /usr -type f -exec ls -lh {} \;
      
      HTW Enabled:
      TLB refill exceptions: 12306
      
      HTW Disabled:
      TLB refill exceptions: 17805
      Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Patchwork: https://patchwork.linux-mips.org/patch/7336/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  21. 28 May 2014, 1 commit
  22. 29 June 2013, 1 commit
  23. 08 April 2013, 1 commit
  24. 01 February 2013, 1 commit
  25. 13 December 2012, 1 commit
  26. 12 December 2012, 1 commit
  27. 26 November 2012, 1 commit
  28. 14 September 2012, 1 commit
  29. 26 July 2011, 1 commit
    • MIPS: topdown mmap support · d0be89f6
      Authored by Jian Peng
      This patch introduced topdown mmap support in user process address
      space allocation policy.
      
      Recently, we ran some large applications that use mmap heavily and
      led to OOM due to the inflexible mmap allocation policy on MIPS32.
      
      Since most other major archs supported it for years, it is reasonable
      to follow the trend and reduce the pain of porting applications.
      
      Due to cache aliasing concern, arch_get_unmapped_area_topdown() and
      other helper functions are implemented in arch/mips/kernel/syscall.c.
      Signed-off-by: Jian Peng <jipeng2005@gmail.com>
      Cc: David Daney <ddaney@caviumnetworks.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/2389/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  30. 27 February 2010, 2 commits
  31. 21 February 2010, 1 commit
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Authored by Russell King
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass in the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit.
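
      In sketch form, the prototype change is (declarations only):

      /* old: the PTE value is passed by value, so a highpte page table may
       * have to be re-walked (or cannot be reached at all) to find the entry */
      void update_mmu_cache(struct vm_area_struct *vma,
      		      unsigned long address, pte_t pte);

      /* new: the already-mapped PTE pointer is passed in directly */
      void update_mmu_cache(struct vm_area_struct *vma,
      		      unsigned long address, pte_t *ptep);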
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>