1. 29 Aug, 2013 (1 commit)
  2. 26 Aug, 2013 (1 commit)
    • ARC: Exception Handlers Code consolidation · 37f3ac49
      Vineet Gupta committed
      After the recent cleanups, all the exception handlers now have the same
      boilerplate prologue code. Move that into a common macro.
      
      This reduces readability but helps greatly with sharing / duplicating
      entry code with ARCv2 ISA where the handlers are pretty much the same,
      just the entry prologue is different (due to hardware assist).
      
      Also while at it, add the missing FAKE_RET_FROM_EXCPN calls in a couple
      of places to drop down to pure kernel mode (from exception mode) before
      jumping off into "C" code.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      37f3ac49
  3. 10 Jul, 2013 (1 commit)
    • mm: invoke oom-killer from remaining unconverted page fault handlers · 609838cf
      Johannes Weiner committed
      A few remaining architectures directly kill the page faulting task in an
      out of memory situation.  This is usually not a good idea since that
      task might not even use a significant amount of memory and so may not be
      the optimal victim to resolve the situation.
      
      Since 2.6.29's 1c0fe6e3 ("mm: invoke oom-killer from page fault") there
      is a hook that architecture page fault handlers are supposed to call to
      invoke the OOM killer and let it pick the right task to kill.  Convert
      the remaining architectures over to this hook.
      
      To have the previous behavior of simply taking out the faulting task,
      the vm.oom_kill_allocating_task sysctl can be set to 1.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>   [arch/arc bits]
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Chen Liqin <liqin.chen@sunplusct.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      609838cf
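      A minimal C sketch of the conversion described above (illustrative only:
      handle_fault_oom() and its parameters are assumptions, not any specific
      arch's code; the key call is the pagefault_out_of_memory() hook added in
      1c0fe6e3):

      #include <linux/mm.h>
      #include <linux/oom.h>
      #include <linux/sched.h>
      #include <asm/ptrace.h>

      /* Tail of a hypothetical arch page fault handler after the conversion */
      static void handle_fault_oom(struct pt_regs *regs, struct mm_struct *mm)
      {
              /* The fault path holds mmap_sem for read; release it first */
              up_read(&mm->mmap_sem);

              if (user_mode(regs)) {
                      /* Defer to the OOM killer to pick a suitable victim
                       * instead of unconditionally killing the faulting task */
                      pagefault_out_of_memory();
                      return;
              }

              /* A kernel-mode fault would still go to the no_context/fixup path */
      }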
  4. 04 Jul, 2013 (4 commits)
    • mm/ARC: prepare for removing num_physpages and simplify mem_init() · de35e1b8
      Jiang Liu committed
      Prepare for removing num_physpages and simplify mem_init().
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>   # for arch/arc
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Rob Herring <rob.herring@calxeda.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      de35e1b8
    • mm: concentrate modification of totalram_pages into the mm core · 0c988534
      Jiang Liu committed
      Concentrate the code that modifies totalram_pages into the mm core, so
      the arch memory initialization code doesn't need to take care of it.
      With these changes applied, only the following functions from the mm
      core modify the global variable totalram_pages: free_bootmem_late(),
      free_all_bootmem(), free_all_bootmem_node(), adjust_managed_page_count().
      
      With this patch applied, it will be much easier to keep totalram_pages
      and zone->managed_pages consistent.
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Acked-by: David Howells <dhowells@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0c988534
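      A minimal sketch of the resulting arch-side pattern (illustrative: the
      helpers an arch actually calls in mem_init() vary, and give_page_back()
      is a made-up name). The arch no longer touches totalram_pages directly;
      free_all_bootmem() updates it from the mm core, and one-off adjustments
      go through adjust_managed_page_count():

      #include <linux/bootmem.h>
      #include <linux/mm.h>

      void __init mem_init(void)
      {
              /* Hands all bootmem pages to the buddy allocator; the mm core
               * updates totalram_pages internally */
              free_all_bootmem();

              /* No more "totalram_pages += ..." statements in arch code */
      }

      /* e.g. a balloon/hotplug-style driver returning a reserved page */
      static void give_page_back(struct page *page)
      {
              __free_reserved_page(page);
              adjust_managed_page_count(page, 1);
      }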
    • mm: enhance free_reserved_area() to support poisoning memory with zero · dbe67df4
      Jiang Liu committed
      Address more review comments from last round of code review.
      1) Enhance free_reserved_area() to support poisoning freed memory with
         pattern '0'. This could be used to get rid of poison_init_mem()
         on ARM64.
      2) A previous patch mistakenly disabled memory poisoning for initmem on
         s390; restore the original behavior.
      3) Remove redundant PAGE_ALIGN() when calling free_reserved_area().
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dbe67df4
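      A brief sketch of the poisoning convention this enables (simplified:
      poison_range() is a made-up helper, and the real free_reserved_area()
      also frees each page and does the accounting). A poison value in 0..0xFF
      fills freed pages with that byte, so passing 0 yields zero-filled pages,
      while a negative value skips poisoning entirely:

      #include <linux/mm.h>
      #include <linux/string.h>

      static void poison_range(void *start, void *end, int poison)
      {
              void *pos;

              for (pos = start; pos < end; pos += PAGE_SIZE) {
                      if ((unsigned int)poison <= 0xFF)
                              memset(pos, poison, PAGE_SIZE);
                      /* free_reserved_page(virt_to_page(pos)) would follow here */
              }
      }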
    • mm: change signature of free_reserved_area() to fix building warnings · 11199692
      Jiang Liu committed
      Change the signature of free_reserved_area() according to Russell King's
      suggestion to fix the following build warnings:
      
        arch/arm/mm/init.c: In function 'mem_init':
        arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default]
          free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
          ^
        In file included from include/linux/mman.h:4:0,
                         from arch/arm/mm/init.c:15:
        include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *'
         extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
      
         mm/page_alloc.c: In function 'free_reserved_area':
      >> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default]
         In file included from arch/mips/include/asm/page.h:49:0,
                          from include/linux/mmzone.h:20,
                          from include/linux/gfp.h:4,
                          from include/linux/mm.h:8,
                          from mm/page_alloc.c:18:
         arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int'
         mm/page_alloc.c: In function 'free_area_init_nodes':
         mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds]
      
      Also address some minor code review comments.
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      11199692
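      A sketch of the shape of the fix, inferred from the warnings quoted
      above (the exact parameter names are assumptions): free_reserved_area()
      now takes void * boundaries, so callers that naturally hold virtual
      addresses no longer trigger pointer/integer mismatch warnings:

      #include <linux/mm.h>

      /* Post-patch style of declaration: pointer boundaries instead of
       * unsigned long (poison and name arguments unchanged) */
      extern unsigned long free_reserved_area(void *start, void *end,
                                              int poison, char *s);

      /* The ARM call from the warning log (arch/arm/mm/init.c:603) then
       * matches the prototype without casts: */
      void __init example_mem_init(void)       /* hypothetical wrapper */
      {
              free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
      }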
  5. 27 Jun, 2013 (3 commits)
    • arc: delete __cpuinit usage from all arc files · ce759956
      Paul Gortmaker committed
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  The fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/arc uses of the __cpuinit macros from
      all C files.  Currently arc does not have any __CPUINIT used in
      assembly files.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      ce759956
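      The per-call-site change is purely mechanical; a schematic example
      (arc_cpu_prepare() is a made-up name, not one of the functions actually
      touched):

      /* Previously annotated for the throwaway .cpuinit section:
       *
       *     static int __cpuinit arc_cpu_prepare(unsigned int cpu)
       *
       * after the patch the marker is simply dropped: */
      static int arc_cpu_prepare(unsigned int cpu)
      {
              /* per-cpu bring-up work would go here */
              return 0;
      }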
    • ARC: [tlb-miss] Fix bug with CONFIG_ARC_DBG_TLB_MISS_COUNT · dc81df24
      Vineet Gupta committed
      LOAD_FAULT_PTE macro is expected to set r2 with faulting vaddr.
      However, in the case of CONFIG_ARC_DBG_TLB_MISS_COUNT, it was getting
      clobbered by the statistics collection code.
      
      Fix the latter by using a different register.
      
      Note that only the I-TLB Miss handler was potentially affected.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      dc81df24
    • ARC: [tlb-miss] Extraneous PTE bit testing/setting · c3e757a7
      Vineet Gupta committed
      * No need to check for READ access in I-TLB Miss handler
      
      * Redundant PAGE_PRESENT update in PTE
      
      After TLB entry installation, when updating the PTE for the software
      accessed/dirty bits, there is no need to set PAGE_PRESENT since it will
      already be set. In fact, the entry wouldn't have been installed at all
      if !PAGE_PRESENT.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      c3e757a7
  6. 26 Jun, 2013 (1 commit)
  7. 22 Jun, 2013 (9 commits)
  8. 25 May, 2013 (1 commit)
    • ARC: lazy dcache flush broke gdb in non-aliasing configs · 7bb66f6e
      Vineet Gupta committed
      gdbserver inserting a breakpoint ends up calling copy_user_page() for a
      code page. The generic version of it (used in non-aliasing configs)
      didn't set the PG_arch_1 bit, hence update_mmu_cache() didn't sync
      dcache/icache for the corresponding dynamic loader code page - causing
      garbage to be executed.
      
      So now the aliasing versions of copy_user_highpage()/clear_page() are
      made the default. There is no significant overhead since all of the
      special alias-handling code is compiled out for a non-aliasing build.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      7bb66f6e
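      A simplified sketch of the idea (not the exact ARC implementation): the
      arch-private copy_user_highpage() marks the destination page as not
      D-cache clean, so a later update_mmu_cache() on an executable mapping
      knows it still has to sync the I/D caches:

      #include <linux/highmem.h>
      #include <linux/mm.h>
      #include <linux/page-flags.h>

      void copy_user_highpage(struct page *to, struct page *from,
                              unsigned long u_vaddr, struct vm_area_struct *vma)
      {
              void *kfrom = kmap_atomic(from);
              void *kto = kmap_atomic(to);

              copy_page(kto, kfrom);

              /* PG_arch_1 ("dcache clean") deliberately not set on the copy:
               * forces an icache/dcache sync when the page is mapped for exec */
              clear_bit(PG_arch_1, &to->flags);

              kunmap_atomic(kto);
              kunmap_atomic(kfrom);
      }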
  9. 23 May, 2013 (3 commits)
    • ARC: Brown paper bag bug in macro for checking cache color · 3e87974d
      Vineet Gupta committed
      The VM_EXEC check in update_mmu_cache() was getting optimized away
      because of a stupid error in the definition of the macro
      addr_not_cache_congruent().
      
      The intention was to have the equivalent of the following:
      
      	if (a || (1 ? b : 0))
      
      but we ended up with following:
      
      	if (a || 1 ? b : 0)
      
      And because the precedence of '||' is higher than that of '?:', gcc was
      optimizing away the evaluation of <a>.
      
      Nasty Repercussions:
      1. For non-aliasing configs it would mean some extraneous dcache flushes
         for non-code pages if U/K mappings were not congruent.
      2. For aliasing config, some needed dcache flush for code pages might
         be missed if U/K mappings were congruent.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      3e87974d
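      The precedence pitfall in isolation, as a standalone C example ('||'
      binds tighter than '?:', so the un-parenthesised form parses as
      '(a || 1) ? b : 0', which never looks at 'a'):

      #include <stdio.h>

      #define BROKEN(a, b)  ((a) || 1 ? (b) : 0)   /* parses as ((a) || 1) ? (b) : 0 */
      #define FIXED(a, b)   ((a) || (1 ? (b) : 0)) /* what was actually intended */

      int main(void)
      {
              /* a is true, b is false: the intended result is true (1) */
              printf("broken: %d\n", BROKEN(1, 0));  /* prints 0 - 'a' dropped */
              printf("fixed : %d\n", FIXED(1, 0));   /* prints 1 */
              return 0;
      }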
    • ARC: copy_(to|from)_user() to honor usermode-access permissions · a950549c
      Vineet Gupta committed
      This manifested as grep failing pseudo-randomly:
      
      -------------->8---------------------
      [ARCLinux]$ ip address show lo | grep inet
      [ARCLinux]$ ip address show lo | grep inet
      [ARCLinux]$ ip address show lo | grep inet
      [ARCLinux]$
      [ARCLinux]$ ip address show lo | grep inet
          inet 127.0.0.1/8 scope host lo
      -------------->8---------------------
      
      ARC700 MMU provides fully orthogonal permission bits per page:
      Ur, Uw, Ux, Kr, Kw, Kx
      
      The user mode page permission templates used to have all Kernel mode
      access bits enabled.
      This caused a tricky race condition observed with uClibc buffered file
      read and UNIX pipes.
      
      1. Read access to an anon mapped page in libc .bss: write-protected
         zero_page mapped: TLB Entry installed with Ur + K[rwx]
      
      2. grep calls libc:getc() -> buffered read layer calls read(2) with the
         internal read buffer in same .bss page.
         The read() call is on STDIN which has been redirected to a pipe.
         read(2) => sys_read() => pipe_read() => copy_to_user()
      
      3. Since the page has Kernel-write permission (despite being user-mode
         write-protected), copy_to_user() succeeds without taking an MMU
         TLB-Miss Exception (a page fault for ARC). Core MM is unaware that
         the kernel erroneously wrote to the reserved read-only zero-page
         (BUG #1)
      
      4. Control returns to userspace, which now does a write to the same
         .bss page. Since Linux MM is not aware that the page has been
         modified by the kernel, it simply reassigns a new writable zero-init
         page to the mapping, losing the prior write by the kernel -
         effectively zeroing out the libc read buffer under the hood - hence
         grep doesn't see the right data (BUG #2)
      
      The fix is to make all kernel-mode access permissions mirror the
      user-mode ones. Note that the kernel still has full access to pages
      when accessed directly (w/o the MMU) - this fix ensures that kernel-mode
      access in the copy_(to|from)_user() path uses the same faulting access
      model as pure user accesses, to keep the MM fully aware of page state.
      
      The issue is pseudo-random because it only shows up if the TLB entry
      installed in #1 is present at the time of #3. If it has been evicted,
      due to TLB pressure or some such, then copy_to_user() does take a TLB
      Miss Exception, and routine write-to-anon COW processing installs a
      fresh page for kernel writes, which is also usable as-is in userspace.
      
      Further, the issue was dormant for so long because it depends on where
      the libc internal read buffer (in .bss) is mapped at runtime.
      If it happens to reside in the file-backed data mapping of libc (in the
      page-aligned slack space trailing the file-backed data), the loader's
      zero-padding of the slack space does the early COW page replacement,
      setting things up correctly at the very beginning itself.
      
      With gcc 4.8 based builds, the libc buffer got pushed out to a real
      anon mapping which triggers the issue.
      Reported-by: Anton Kolesov <akolesov@synopsys.com>
      Cc: <stable@vger.kernel.org> # 3.9
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      a950549c
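      A schematic sketch of the fix's idea (the bit and macro names below are
      illustrative, not ARC's real PTE encoding): the kernel-mode bits in the
      user page-permission templates now mirror the user-mode bits, so a
      copy_to_user() onto a user write-protected page faults exactly like a
      user write would:

      /* Illustrative permission bits, not ARC's actual layout */
      #define _PG_U_READ   (1 << 0)
      #define _PG_U_WRITE  (1 << 1)
      #define _PG_K_READ   (1 << 3)
      #define _PG_K_WRITE  (1 << 4)

      /* before: a user read-only template still carried kernel write access,
       * so copy_to_user() could silently scribble on the zero page */
      #define PAGE_U_RO_OLD  (_PG_U_READ | _PG_K_READ | _PG_K_WRITE)

      /* after: kernel-mode access mirrors user-mode access, so the same
       * store faults and the MM gets to do its COW bookkeeping */
      #define PAGE_U_RO_NEW  (_PG_U_READ | _PG_K_READ)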
    • ARC: [mm] Prevent stray dcache lines after __sync_icache_dcache() · f538881c
      Vineet Gupta committed
      Flush and INVALIDATE the dcache page.
      
      This helper is only used for writeback of CODE pages to memory. So
      there's no value in keeping the dcache lines around. In fact it is
      risky, as a later writeback on natural eviction under cache pressure
      can cause an unneeded writeback, with weird issues on aliasing dcache
      configurations.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      f538881c
  10. 10 May, 2013 (4 commits)
  11. 09 May, 2013 (2 commits)
  12. 07 May, 2013 (7 commits)
    • ARC: [mm] Lazy D-cache flush (non aliasing VIPT) · eacd0e95
      Vineet Gupta committed
      flush_dcache_page() is the MM hook to ensure that a page has consistent
      views between kernel and userspace. Thus it is called when
      
      * the kernel writes to a page which at some later point could get mapped
        to userspace (so the kernel mapping needs to be flushed-n-invalidated)
      * the kernel is about to read from a page with possible userspace
        mappings (so userspace mappings need to be made coherent with the
        kernel's)
      
      However, for a non-aliasing VIPT dcache, any userspace mapping will
      always be congruent to the kernel mapping. Thus the d-cache need not be
      flushed at all (or the flush can be delayed indefinitely).
      
      The only reason it does need to be flushed is when mapping code pages.
      Since the icache doesn't snoop the dcache, those dirty dcache lines need
      to be written back to memory and the icache lines invalidated so that
      icache fetches get the right data.
      
      Decent gains on LMBench fork/exec/sh and File I/O micro-benchmarks.
      
      (1) FPGA @ 80 MHZ
      
      Processor, Processes - times in microseconds - smaller is better
      ------------------------------------------------------------------------------
      Host                 OS  Mhz null null      open slct sig  sig  fork exec sh
                                   call  I/O stat clos TCP  inst hndl proc proc proc
      --------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
      3.9-rc6-a Linux 3.9.0-r   80 4.79 8.72 66.7 116. 239. 8.39 30.4 4798 14.K 34.K
      3.9-rc6-b Linux 3.9.0-r   80 4.79 8.62 65.4 111. 239. 8.35 29.0 3995 12.K 30.K
      3.9-rc7-c Linux 3.9.0-r   80 4.79 9.00 66.1 106. 239. 8.61 30.4 2858 10.K 24.K
                                                                      ^^^^ ^^^^ ^^^
      
      File & VM system latencies in microseconds - smaller is better
      -------------------------------------------------------------------------------
      Host                 OS   0K File      10K File     Mmap    Prot   Page 100fd
                              Create Delete Create Delete Latency Fault  Fault selct
      --------- ------------- ------ ------ ------ ------ ------- ----- ------- -----
      3.9-rc6-a Linux 3.9.0-r  317.8  204.2 1122.3  375.1 3522.0 4.288     20.7 126.8
      3.9-rc6-b Linux 3.9.0-r  298.7  223.0 1141.6  367.8 3531.0 4.866     20.9 126.4
      3.9-rc7-c Linux 3.9.0-r  278.4  179.2  862.1  339.3 3705.0 3.223     20.3 126.6
                               ^^^^^  ^^^^^  ^^^^^  ^^^^
      
      (2) Customer Silicon @ 500 MHz (166 MHz mem)
      
      ------------------------------------------------------------------------------
      Host                 OS  Mhz null null      open slct sig  sig  fork exec sh
                                   call  I/O stat clos TCP  inst hndl proc proc proc
      --------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
      abilis-ba Linux 3.9.0-r  497 0.71 1.38 4.58 12.0 35.5 1.40 3.89 2070 5525 13.K
      abilis-ca Linux 3.9.0-r  497 0.71 1.40 4.61 11.8 35.6 1.37 3.92 1411 4317 10.K
                                                                      ^^^^ ^^^^ ^^^
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      eacd0e95
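      A simplified sketch of the lazy scheme described above (condensed from
      the idea, not the exact ARC code): for the non-aliasing case,
      flush_dcache_page() only records that the page's D-cache copy may be
      stale relative to the I-cache, and the real writeback plus I-cache
      invalidate is deferred until the page is actually mapped for execution:

      #include <linux/mm.h>
      #include <linux/pagemap.h>

      void flush_dcache_page(struct page *page)
      {
              struct address_space *mapping = page_mapping(page);

              /* Page not (yet) mapped into userspace: just mark it as not
               * clean and defer the flush (possibly forever for data pages) */
              if (mapping && !mapping_mapped(mapping)) {
                      clear_bit(PG_arch_1, &page->flags);
                      return;
              }

              /* Otherwise an immediate kernel-mapping flush would go here */
      }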
    • ARC: [mm] micro-optimize page size icache invalidate · 764531cc
      Vineet Gupta committed
      The start address is already page aligned and the size is a constant
      PAGE_SIZE, thus alignment fixups are not needed in the generated code.
      
      bloat-o-meter vmlinux-mm5 vmlinux
      add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-32 (-32)
      function                                     old     new   delta
      __inv_icache_page                             82      50     -32
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      764531cc
    • ARC: [mm] remove the pessimistic all-alias-invalidate icache helpers · 7f250a0f
      Vineet Gupta committed
      No users of this code anymore - so RIP!
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      7f250a0f
    • ARC: [mm] consolidate icache/dcache sync code · 94bad1af
      Vineet Gupta committed
      Now that the same helper is used for all icache invalidates (i.e. the
      vaddr+paddr based exact line invalidate), consolidate the open-coded
      calls into one place.
      
      Also rename flush_icache_range_vaddr => __sync_icache_dcache
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      94bad1af
    • ARC: [mm] optimise icache flush for kernel mappings · 7586bf72
      Vineet Gupta committed
      This change continues the theme from the previous commit - this time
      icache handling for the kernel's own code modifications (vmalloc:
      loadable modules, breakpoints for kprobes/kgdb...)
      
      flush_icache_range() calls the CDU icache helper with vaddr to enable
      exact line invalidate.
      
      For a true kernel-virtual mapping, the vaddr is actually virtual, hence
      valid as an index into the cache. For a kprobes breakpoint however, the
      vaddr arg is actually a paddr - since that's how the normal kernel is
      mapped in the ARC memory map. This implies that the CDU will use the
      same address for indexing as for the tag match - which is fine since
      kernel code would only have that "implicit" mapping and none other.
      
      This should speed up module loading significantly - especially on
      default ARC700 icache configurations (32k), which alias.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      7586bf72
    • ARC: [mm] optimise icache flush for user mappings · 24603fdd
      Vineet Gupta committed
      The ARC icache doesn't snoop the dcache, thus executable pages need to
      be made coherent before mapping into userspace in flush_icache_page().
      
      However the ARC700 CDU (hardware cache flush module) requires both the
      vaddr (index into the cache) as well as the paddr (tag match) to
      correctly identify a line in the VIPT cache. A typical ARC700 SoC has
      an aliasing icache, thus the paddr-only based flush_icache_page() API
      couldn't be implemented efficiently. It had to loop through all
      possible alias indexes and perform the invalidate operation (of course
      the cache op would only succeed at the index(es) where the tag matches
      - typically only 1, but the cost of visiting all the cache bins has to
      be paid nevertheless).
      
      It turns out, however, that the vaddr (along with the paddr) is
      available in update_mmu_cache(), which hence better suits ARC icache
      flush semantics. With both vaddr+paddr, exactly one flush operation per
      line is done.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      24603fdd
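      A condensed sketch of where the vaddr+paddr pair becomes available
      (simplified from the ARC approach; the lazy-dcache interplay and other
      details are omitted): update_mmu_cache() sees both the user vaddr and,
      via the PTE, the paddr, so executable pages can be synced with exactly
      one line-precise cache operation:

      #include <linux/mm.h>

      void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr,
                            pte_t *ptep)
      {
              unsigned long paddr = pte_val(*ptep) & PAGE_MASK;

              if (vma->vm_flags & VM_EXEC) {
                      /* vaddr indexes the VIPT cache, paddr matches the tag:
                       * one flush per line instead of walking every alias bin */
                      __sync_icache_dcache(paddr, vaddr & PAGE_MASK, PAGE_SIZE);
              }
      }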
    • e3edeb67
  13. 30 Apr, 2013 (1 commit)
  14. 09 Apr, 2013 (2 commits)