1. 11 Apr 2016 (4 commits)
  2. 05 Apr 2016 (1 commit)
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to implement
      the page cache with chunks bigger than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion whether the
      PAGE_CACHE_* or the PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in the page cache are special.  They
      are not.
      
      The changes are pretty straightforward (a small before/after sketch
      follows this list):
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
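      
      For illustration, here is a minimal before/after sketch of the kind of
      call site the conversion touches (hypothetical fs code, not taken from
      the patch; kernel context, with the old names coming from
      <linux/pagemap.h> and get_page()/put_page() from <linux/mm.h>):
      
      /* Before the conversion: */
      static pgoff_t example_offset_to_index_old(struct page *page, loff_t pos)
      {
              page_cache_get(page);                   /* pin the page */
              /* ... use the page ... */
              page_cache_release(page);
              return pos >> PAGE_CACHE_SHIFT;         /* byte offset -> page index */
      }
      
      /* After the conversion (what the semantic patch below produces): */
      static pgoff_t example_offset_to_index_new(struct page *page, loff_t pos)
      {
              get_page(page);                         /* pin the page */
              /* ... use the page ... */
              put_page(page);
              return pos >> PAGE_SHIFT;               /* byte offset -> page index */
      }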
      
      This patch contains automated changes generated with Coccinelle using
      the script below.  For some reason Coccinelle doesn't patch header
      files, so I've run spatch on them manually.
      
      The only adjustment after Coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that Coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 29 Mar 2016 (3 commits)
  4. 26 Mar 2016 (1 commit)
  5. 23 Mar 2016 (2 commits)
  6. 22 Mar 2016 (3 commits)
  7. 18 Mar 2016 (4 commits)
  8. 16 Mar 2016 (4 commits)
    • powerpc: Fix unrecoverable SLB miss during restore_math() · 6e669f08
      Committed by Cyril Bur
      Commit 70fe3d98 ("powerpc: Restore FPU/VEC/VSX if previously used")
      introduced a call to restore_math() late in the syscall return path,
      after MSR_RI has been cleared.  The MSR_RI flag indicates whether the
      kernel can take another exception; a cleared MSR_RI flag means that it
      cannot.
      
      Unfortunately, when a machine is under SLB pressure, an SLB miss can
      occur in restore_math(), which (with MSR_RI cleared) leads to an
      unrecoverable exception.
      
        Unrecoverable exception 4100 at c0000000000088d8
        cpu 0x0: Vector: 4100  at [c0000003fa473b20]
            pc: c0000000000088d8: .load_vr_state+0x70/0x110
            lr: c00000000000f710: .restore_math+0x130/0x188
            sp: c0000003fa473da0
           msr: 9000000002003030
          current = 0xc0000007f876f180
          paca    = 0xc00000000fff0000	 softe: 0	 irq_happened: 0x01
            pid   = 1944, comm = K08umountfs
        [link register   ] c00000000000f710 .restore_math+0x130/0x188
        [c0000003fa473da0] c0000003fa473e30 (unreliable)
        [c0000003fa473e30] c000000000007b6c system_call+0x84/0xfc
      
      The clearing of MSR_RI is actually an optimisation to avoid multiple MSR
      writes; what actually needs to be disabled are interrupts.  See the
      comment in entry_64.S:
      
        /*
         * For performance reasons we clear RI the same time that we
         * clear EE. We only need to clear RI just before we restore r13
         * below, but batching it with EE saves us one expensive mtmsrd call.
         * We have to be careful to restore RI if we branch anywhere from
         * here (eg syscall_exit_work).
         */
      
      At the point of calling restore_math(), r13 has not yet been restored.
      As such, the quick fix of turning MSR_RI back on for the call to
      restore_math() eliminates the occurrence of an unrecoverable exception.
      
      We'd like to do a better fix in future.
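      
      As a conceptual sketch in C (the real change lives in the assembly
      syscall-exit path in entry_64.S; the helper and wrapper below are
      illustrative only, not kernel API):
      
      /* mtmsrd with L=1 updates only MSR[EE] and MSR[RI]. */
      static inline void example_set_ri(unsigned long bits)
      {
              asm volatile("mtmsrd %0,1" : : "r" (bits) : "memory");
      }
      
      static void example_syscall_exit_math(struct pt_regs *regs)
      {
              /* Here EE and RI are both clear and r13 is not yet restored. */
              example_set_ri(MSR_RI); /* recoverable again: an SLB miss taken
                                       * inside restore_math() can be handled */
              restore_math(regs);
              example_set_ri(0);      /* back to the optimised RI=0 window for
                                       * the final r13 restore and return */
      }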
      
      Fixes: 70fe3d98 ("powerpc: Restore FPU/VEC/VSX if previously used")
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/8xx: Fix do_mtspr_cpu6() build on older compilers · 2e098dce
      Committed by Christophe Leroy
      GCC < 4.9 is unable to build this, saying:
      
        arch/powerpc/mm/8xx_mmu.c:139:2: error: memory input 1 is not directly addressable
      
      Change the one-element array into a simple variable to avoid this.
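      
      The class of problem, shown as a stand-alone sketch rather than the
      actual do_mtspr_cpu6() code:
      
      /* Old form: a one-element array used as an "m" asm operand.  The
       * older compilers in question reject this with "memory input ... is
       * not directly addressable". */
      void example_old(void)
      {
              unsigned long tmp[1] = { 0 };
      
              asm volatile("" : : "m" (tmp));
      }
      
      /* New form: a plain variable, which is directly addressable. */
      void example_new(void)
      {
              unsigned long tmp = 0;
      
              asm volatile("" : : "m" (tmp));
      }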
      
      Fixes: 1458dd95 ("powerpc/8xx: Handle CPU6 ERRATA directly in mtspr() macro")
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Scott Wood <oss@buserror.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/rcpm: Fix build break when SMP=n · b081251e
      Committed by Michael Ellerman
      Add an include of asm/smp.h to fix a build break when SMP=n:
      
        arch/powerpc/sysdev/fsl_rcpm.c:32:2: error: implicit declaration of
        function 'get_hard_smp_processor_id'
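      
      A minimal sketch of the fix (the exact spot in fsl_rcpm.c is omitted
      here):
      
      /* Declare get_hard_smp_processor_id() explicitly instead of relying
       * on it arriving via other headers, which evidently only happens
       * when SMP=y. */
      #include <asm/smp.h>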
      
      Fixes: d17799f9 ("powerpc/rcpm: add RCPM driver")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/book3e-64: Use hardcoded mttmr opcode · 7a25d912
      Committed by Scott Wood
      This preserves the ability to build with older binutils (reportedly <=
      2.22), which do not know the mttmr mnemonic.
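      
      The idiom, as a sketch built from scratch (the constants follow the
      standard encoding of mttmr, primary opcode 31 / extended opcode 494,
      and the SPR-style split of the TMR number field; treat them as
      assumptions rather than a copy of the kernel's ppc-opcode.h):
      
      #include <stdint.h>
      #include <stdio.h>
      
      /* Build the instruction word by hand so an old assembler that does
       * not know the "mttmr" mnemonic never has to parse it; asm code can
       * then emit the word with ".long" instead of using the mnemonic. */
      #define EX_INST_MTTMR   0x7c0003dcu                     /* opcode 31, xo 494 (assumed) */
      #define EX_RS(r)        (((uint32_t)(r) & 0x1f) << 21)  /* source GPR field */
      #define EX_TMRN(t)      ((((uint32_t)(t) & 0x1f) << 16) | \
                               (((uint32_t)(t) & 0x3e0) << 6)) /* split TMR number */
      #define EX_MTTMR(t, r)  (EX_INST_MTTMR | EX_TMRN(t) | EX_RS(r))
      
      int main(void)
      {
              /* The word ".long" would emit in place of "mttmr 16, r3". */
              printf("0x%08x\n", EX_MTTMR(16, 3));
              return 0;
      }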
      
      Fixes: 6becef7e ("powerpc/mpc85xx: Add CPU hotplug support for E6500")
      Signed-off-by: Scott Wood <oss@buserror.net>
      Cc: chenhui.zhao@freescale.com
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  9. 12 Mar 2016 (18 commits)