1. 18 March 2016, 1 commit
  2. 11 March 2016, 1 commit
  3. 16 January 2016, 1 commit
  4. 11 January 2016, 1 commit
  5. 17 August 2015, 1 commit
  6. 19 May 2015, 2 commits
  7. 30 January 2015, 1 commit
    • vm: add VM_FAULT_SIGSEGV handling support · 33692f27
      Committed by Linus Torvalds
      The core VM already knows about VM_FAULT_SIGBUS, but cannot return a
      "you should SIGSEGV" error, because the SIGSEGV case was generally
      handled by the caller - usually the architecture fault handler.
      
      That results in lots of duplication - all the architecture fault
      handlers end up doing very similar "look up vma, check permissions, do
      retries etc" - but it generally works.  However, there are cases where
      the VM actually wants to SIGSEGV, and applications _expect_ SIGSEGV.
      
      In particular, when accessing the stack guard page, libsigsegv expects a
      SIGSEGV.  And it usually got one, because the stack growth is handled by
      that duplicated architecture fault handler.
      
      However, when the generic VM layer started propagating the error return
      from the stack expansion in commit fee7e49d ("mm: propagate error
      from stack expansion even for guard page"), that now exposed the
      existing VM_FAULT_SIGBUS result to user space.  And user space really
      expected SIGSEGV, not SIGBUS.
      
      To fix that case, we need to add a VM_FAULT_SIGSEGV, and teach all those
      duplicate architecture fault handlers about it.  They all already have
      the code to handle SIGSEGV, so it's about just tying that new return
      value to the existing code, but it's all a bit annoying.
      
      This is the mindless minimal patch to do this.  A more extensive patch
      would be to try to gather up the mostly shared fault handling logic into
      one generic helper routine, and long-term we really should do that
      cleanup.
      
      Just from this patch, you can generally see that most architectures just
      copied (directly or indirectly) the old x86 way of doing things, but in
      the meantime that original x86 model has been improved to hold the VM
      semaphore for shorter times etc and to handle VM_FAULT_RETRY and other
      "newer" things, so it would be a good idea to bring all those
      improvements to the generic case and teach other architectures about
      them too.
      Reported-and-tested-by: Takashi Iwai <tiwai@suse.de>
      Tested-by: Jan Engelhardt <jengelh@inai.de>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # "s390 still compiles and boots"
      Cc: linux-arch@vger.kernel.org
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      33692f27
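
      A minimal sketch of the per-architecture side of this change, assuming a
      typical x86-style fault handler with bad_area/do_sigbus/out_of_memory
      labels; the exact handle_mm_fault() signature and fault-flag width vary
      by kernel version, so treat this as illustration rather than the actual
      diff:

          int fault = handle_mm_fault(mm, vma, address, flags);

          if (unlikely(fault & VM_FAULT_ERROR)) {
                  if (fault & VM_FAULT_OOM)
                          goto out_of_memory;
                  if (fault & VM_FAULT_SIGSEGV)
                          goto bad_area;          /* new: deliver SIGSEGV */
                  if (fault & VM_FAULT_SIGBUS)
                          goto do_sigbus;         /* unchanged: deliver SIGBUS */
                  BUG();
          }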
  8. 21 October 2014, 3 commits
  9. 14 August 2014, 6 commits
    • xtensa: support highmem in aliasing cache flushing code · 270eec76
      Committed by Max Filippov
      Use __flush_invalidate_dcache_page_alias with the alias set to the color
      of the page's physical address instead of __flush_invalidate_dcache_page:
      this works for high-memory pages, and mapping/unmapping through the
      TLBTEMP area is virtually free.
      
      Allow building configurations with aliasing cache and highmem enabled.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      270eec76
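
      A hedged sketch of the flush path described above, using the xtensa names
      from this series (TLBTEMP_BASE_1, DCACHE_ALIAS_MASK,
      __flush_invalidate_dcache_page_alias); the real code in
      arch/xtensa/mm/cache.c may differ in detail:

          /* Pick a TLBTEMP alias whose cache color matches the physical page,
           * then flush through that alias; this works even if the page is in
           * highmem and has no permanent kernel mapping. */
          unsigned long phys = page_to_phys(page);
          unsigned long virt = TLBTEMP_BASE_1 + (phys & DCACHE_ALIAS_MASK);

          __flush_invalidate_dcache_page_alias(virt, phys);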
    • xtensa: support aliasing cache in kmap · 8504b503
      Committed by Max Filippov
      Define ARCH_PKMAP_COLORING and provide the corresponding macro
      definitions on cores with an aliasing data cache.
      
      Instead of a single last_pkmap_nr, maintain an array last_pkmap_nr_arr of
      pkmap counters, one per page color. Make sure that kmap maps a physical
      page at a virtual address whose color matches its physical address.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      8504b503
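
      A hedged sketch of the per-color bookkeeping, using the xtensa color
      macros (DCACHE_ALIAS, DCACHE_N_COLORS); the helper names mirror the
      generic kmap hooks this series introduces but may not match the final
      code exactly:

          static unsigned int last_pkmap_nr_arr[DCACHE_N_COLORS];

          /* color of the page's physical address */
          static inline int get_pkmap_color(struct page *page)
          {
                  return DCACHE_ALIAS(page_to_phys(page));
          }

          /* advance this color's cursor, stepping over the other colors so
           * every pkmap slot handed out for this color has a matching
           * virtual-address color */
          static inline unsigned int get_next_pkmap_nr(unsigned int color)
          {
                  last_pkmap_nr_arr[color] =
                          (last_pkmap_nr_arr[color] + DCACHE_N_COLORS) & LAST_PKMAP_MASK;
                  return last_pkmap_nr_arr[color] + color;
          }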
    • xtensa: support aliasing cache in k[un]map_atomic · 32544d9c
      Committed by Max Filippov
      Map high-memory pages at virtual addresses whose color matches the color
      of their physical address. The existing cache alias management mechanisms
      can then be used with such pages.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      32544d9c
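
      A hedged sketch of how kmap_atomic can pick a color-matched fixmap slot;
      kmap_idx() is an illustrative helper, while DCACHE_ALIAS and
      DCACHE_N_COLORS are the xtensa color macros:

          /* expand each atomic kmap type into DCACHE_N_COLORS slots and pick
           * the one whose virtual color equals the page's physical color */
          static inline int kmap_idx(int type, unsigned long color)
          {
                  return (type + KM_TYPE_NR * smp_processor_id()) * DCACHE_N_COLORS +
                          color;
          }

          /* inside kmap_atomic(page): */
          idx = kmap_idx(kmap_atomic_idx_push(),
                         DCACHE_ALIAS(page_to_phys(page)));
          vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);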
    • xtensa: implement clear_user_highpage and copy_user_highpage · a91902db
      Committed by Max Filippov
      The existing clear_user_page and copy_user_page cannot be used with
      highmem because they calculate the physical page address from its
      virtual address, and do so incorrectly for a high-memory page mapped
      with kmap_atomic. kmap is also not needed, as the userspace mapping
      color is most likely different from the kmapped color.
      
      Provide clear_user_highpage and copy_user_highpage functions that
      determine whether a temporary mapping is needed for the pages. Move most
      of the logic of the former clear_user_page and copy_user_page to
      xtensa/mm/cache.c, leaving only temporary mapping setup, invalidation
      and clearing/copying in xtensa/mm/misc.S. Rename these functions to
      clear_page_alias and copy_page_alias.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      a91902db
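
      A hedged sketch of the new highmem-aware helper; coherent_kvaddr() is
      illustrative shorthand for "map the page at a temporary kernel address
      whose color matches the user vaddr", and clear_page_alias is the renamed
      assembly helper mentioned above:

          void clear_user_highpage(struct page *page, unsigned long vaddr)
          {
                  unsigned long paddr;
                  void *kvaddr = coherent_kvaddr(page, TLBTEMP_BASE_1, vaddr, &paddr);

                  preempt_disable();
                  kmap_invalidate_coherent(page, vaddr);  /* drop stale kmap aliases */
                  set_bit(PG_arch_1, &page->flags);       /* remember dcache is dirty */
                  clear_page_alias(kvaddr, paddr);        /* low-level clear in misc.S */
                  preempt_enable();
          }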
    • xtensa: allow fixmap and kmap span more than one page table · dec7305d
      Committed by Max Filippov
      To support aliasing caches, both kmap region sizes are multiplied by the
      number of data cache colors. After that expansion, the page tables that
      cover the kmap regions may become larger than one page. Correctly
      allocate and initialize the page tables in this case.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      dec7305d
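
      A hedged sketch of the page-table setup for a region that now needs more
      than one PTE page; the pgd/pmd walk is elided and the boot-time
      allocator name is illustrative:

          /* pmd points at the first page-table entry covering the region's
           * start address, obtained via the usual pgd/pmd walk (elided) */
          pte_t *pte = alloc_bootmem_low_pages(n_pages * sizeof(pte_t));
          unsigned long i;

          for (i = 0; i < n_pages; ++i)
                  pte_clear(NULL, 0, pte + i);            /* start with empty PTEs */

          /* wire one pmd entry per PTE page covering the expanded region */
          for (i = 0; i < n_pages; i += PTRS_PER_PTE, ++pmd)
                  set_pmd(pmd, __pmd((unsigned long)(pte + i)));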
    • xtensa: make fixmap region addressing grow with index · 22def768
      Committed by Max Filippov
      It's much easier to reason about alignment and coloring of regions
      located in the fixmap when the fixmap index is just a PFN within the
      fixmap region. Change fixmap addressing so that index 0 corresponds to
      FIXADDR_START instead of FIXADDR_TOP.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      22def768
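
      A hedged before/after sketch of the addressing change (the generic
      fixmap convention counts down from FIXADDR_TOP; after this patch the
      xtensa fixmap counts up from FIXADDR_START):

          /* before (top-down, generic style): */
          #define __fix_to_virt(idx)    (FIXADDR_TOP - ((idx) << PAGE_SHIFT))

          /* after (bottom-up, index is a PFN offset into the region): */
          #define __fix_to_virt(idx)    (FIXADDR_START + ((idx) << PAGE_SHIFT))
          #define __virt_to_fix(vaddr)  (((vaddr) - FIXADDR_START) >> PAGE_SHIFT)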
  10. 10 June 2014, 1 commit
  11. 07 April 2014, 2 commits
  12. 02 April 2014, 5 commits
  13. 22 February 2014, 1 commit
  14. 19 January 2014, 1 commit
  15. 15 January 2014, 3 commits
  16. 15 November 2013, 1 commit
  17. 13 September 2013, 1 commit
  18. 08 July 2013, 1 commit
  19. 04 July 2013, 4 commits
    • mm/xtensa: prepare for removing num_physpages and simplify mem_init() · 808c2c37
      Committed by Jiang Liu
      Prepare for removing num_physpages and simplify mem_init().
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      808c2c37
    • mm: concentrate modification of totalram_pages into the mm core · 0c988534
      Committed by Jiang Liu
      Concentrate the code that modifies totalram_pages into the mm core, so
      the arch memory initialization code doesn't need to take care of it.
      With these changes applied, only the following functions from the mm
      core modify the global variable totalram_pages: free_bootmem_late(),
      free_all_bootmem(), free_all_bootmem_node(), adjust_managed_page_count().
      
      With this patch applied, it will be much easier for us to keep
      totalram_pages and zone->managed_pages consistent.
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Acked-by: David Howells <dhowells@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0c988534
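
      A hedged sketch of the central helper this consolidation relies on; the
      real adjust_managed_page_count() in mm/page_alloc.c may differ in
      detail, but the idea is that both counters are updated in one place:

          void adjust_managed_page_count(struct page *page, long count)
          {
                  spin_lock(&managed_page_count_lock);
                  page_zone(page)->managed_pages += count;  /* per-zone counter */
                  totalram_pages += count;                  /* global counter */
                  spin_unlock(&managed_page_count_lock);
          }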
    • mm: enhance free_reserved_area() to support poisoning memory with zero · dbe67df4
      Committed by Jiang Liu
      Address more review comments from the last round of code review.
      1) Enhance free_reserved_area() to support poisoning freed memory with
         pattern '0'. This could be used to get rid of poison_init_mem()
         on ARM64.
      2) A previous patch disabled memory poisoning for initmem on s390
         by mistake, so restore the original behavior.
      3) Remove redundant PAGE_ALIGN() when calling free_reserved_area().
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dbe67df4
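
      A hedged sketch of the poisoning rule added to free_reserved_area(): any
      poison value in the range 0..0xFF (including 0) is written over each
      page before it is freed, while a negative value skips the memset:

          for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
                  if ((unsigned int)poison <= 0xFF)
                          memset(pos, poison, PAGE_SIZE); /* poison == 0 zero-fills */
                  free_reserved_page(virt_to_page(pos));
          }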
    • mm: change signature of free_reserved_area() to fix building warnings · 11199692
      Committed by Jiang Liu
      Change the signature of free_reserved_area() according to Russell King's
      suggestion, to fix the following build warnings:
      
        arch/arm/mm/init.c: In function 'mem_init':
        arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default]
          free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
          ^
        In file included from include/linux/mman.h:4:0,
                         from arch/arm/mm/init.c:15:
        include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *'
         extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
      
         mm/page_alloc.c: In function 'free_reserved_area':
      >> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default]
         In file included from arch/mips/include/asm/page.h:49:0,
                          from include/linux/mmzone.h:20,
                          from include/linux/gfp.h:4,
                          from include/linux/mm.h:8,
                          from mm/page_alloc.c:18:
         arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int'
         mm/page_alloc.c: In function 'free_area_init_nodes':
         mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds]
      
      Also address some minor code review comments.
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      11199692
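
      A hedged sketch of the signature change the warnings point at: the range
      is passed as void * so callers can hand in virtual addresses directly,
      without casting to unsigned long:

          /* before: */
          extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
                                                  int poison, char *s);
          /* after: */
          extern unsigned long free_reserved_area(void *start, void *end,
                                                  int poison, char *s);

          /* the arm call quoted in the warning now matches the prototype as-is: */
          free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);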
  20. 06 June 2013, 1 commit
    • xtensa: flush TLB entries for pages of non-current mm correctly · 87962c4d
      Committed by Max Filippov
      Sometimes under high memory pressure one process gets a page of another
      process, which manifests itself as an invalid instruction exception.
      
      This happens because flush_tlb_page fails to clear TLB entries when
      called with a vma that does not belong to the current mm, because it
      does not set RASID appropriately. When the page-reclaim mechanism swaps
      physical pages out, replacing their PTEs with none or swap PTEs, it
      calls flush_tlb_page. Later the physical page may be reused elsewhere,
      but the stale TLB mapping still refers to it, allowing the process that
      owned the mapping to see the new state of that physical page.
      
      Put the ASID of the mm that owns the vma into RASID to fix that issue.
      Also replace the otherwise meaningless local_save_flags with
      local_irq_save.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Chris Zankel <chris@zankel.net>
      87962c4d
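
      A hedged sketch of the fix: temporarily switch RASID to the ASID of the
      mm that owns the vma, so the invalidation hits that mm's TLB entries
      even when it is not the current mm (register helpers and context field
      names approximate the xtensa code):

          local_irq_save(flags);

          oldpid = get_rasid_register();
          set_rasid_register(ASID_INSERT(vma->vm_mm->context.asid));

          invalidate_itlb_mapping(page);
          invalidate_dtlb_mapping(page);

          set_rasid_register(oldpid);
          local_irq_restore(flags);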
  21. 09 May 2013, 1 commit
    • xtensa: add MMU v3 support · e85e335f
      Committed by Max Filippov
      MMUv3 comes out of reset with an identity vaddr -> paddr mapping in TLB
      way 6:
      
      Way 6 (512 MB)
              Vaddr       Paddr       ASID  Attr RWX Cache
              ----------  ----------  ----  ---- --- -------
              0x00000000  0x00000000  0x01  0x03 RWX Bypass
              0x20000000  0x20000000  0x01  0x03 RWX Bypass
              0x40000000  0x40000000  0x01  0x03 RWX Bypass
              0x60000000  0x60000000  0x01  0x03 RWX Bypass
              0x80000000  0x80000000  0x01  0x03 RWX Bypass
              0xa0000000  0xa0000000  0x01  0x03 RWX Bypass
              0xc0000000  0xc0000000  0x01  0x03 RWX Bypass
              0xe0000000  0xe0000000  0x01  0x03 RWX Bypass
      
      This patch adds remapping code at the reset vector or at the kernel
      _start (depending on CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX) that
      reconfigures MMUv3 as MMUv2:
      
      Way 5 (128 MB)
              Vaddr       Paddr       ASID  Attr RWX Cache
              ----------  ----------  ----  ---- --- -------
              0xd0000000  0x00000000  0x01  0x07 RWX WB
              0xd8000000  0x00000000  0x01  0x03 RWX Bypass
      Way 6 (256 MB)
              Vaddr       Paddr       ASID  Attr RWX Cache
              ----------  ----------  ----  ---- --- -------
              0xe0000000  0xf0000000  0x01  0x07 RWX WB
              0xf0000000  0xf0000000  0x01  0x03 RWX Bypass
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Chris Zankel <chris@zankel.net>
      e85e335f
  22. 30 April 2013, 1 commit