1. 12 Apr, 2013 (3 commits)
  2. 11 Apr, 2013 (2 commits)
  3. 11 Mar, 2013 (1 commit)
  4. 09 Mar, 2013 (1 commit)
  5. 08 Mar, 2013 (2 commits)
    • x86: Do not try to sync identity map for non-mapped pages · 60f583d5
      Dave Hansen committed
      kernel_map_sync_memtype() is called from a variety of contexts.  The
      pat.c code that calls it seems to ensure that it is not called for
      non-ram areas by checking via pat_pagerange_is_ram().  It is important
      that it only be called on the actual identity map because there *IS*
      no map to sync for highmem pages, or for memory holes.
      
      The callers in ioremap.c are not as careful as those in pat.c, and call
      kernel_map_sync_memtype() on PCI space, which lies in the middle of the
      kernel identity map _range_ but is not actually mapped.
      
      This patch adds a check to kernel_map_sync_memtype() which probably
      duplicates some of the checks already in pat.c.  But it is necessary
      for the ioremap.c call sites and shouldn't hurt other callers.
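      
      For orientation, here is a minimal C sketch of the kind of early-return
      check being described (a sketch only, not the literal hunk from this
      commit; it assumes the in-kernel page_is_ram() helper and the existing
      kernel_map_sync_memtype() signature in arch/x86/mm/pat.c):
      
      #include <linux/mm.h>	/* page_is_ram(); PAGE_SHIFT via its includes */
      
      int kernel_map_sync_memtype(u64 base, unsigned long size, unsigned long flags)
      {
      	/*
      	 * Highmem pages, memory holes and non-RAM ranges such as PCI
      	 * space have no kernel identity mapping, so there is nothing
      	 * to sync for them.
      	 */
      	if (!page_is_ram(base >> PAGE_SHIFT))
      		return 0;
      
      	/* ... existing logic that syncs the identity-map attributes ... */
      	return 0;
      }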
      
      I have reproduced this bug and this patch fixes it for me and the
      original bug reporter:
      
      	https://lkml.org/lkml/2013/2/5/396
      
      Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/20130307163151.D9B58C4E@kernel.stglabs.ibm.com
      Signed-off-by: Dave Hansen <dave@sr71.net>
      Tested-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • ARM: 7668/1: fix memset-related crashes caused by recent GCC (4.7.2) optimizations · 455bd4c4
      Ivan Djelic committed
      Recent GCC versions (e.g. GCC-4.7.2) perform optimizations based on
      assumptions about the implementation of memset and similar functions.
      The current ARM optimized memset code does not return the value of its
      first argument, as the C standard requires of memset implementations.
      
      For instance in the following function:
      
      void debug_mutex_lock_common(struct mutex *lock, struct mutex_waiter *waiter)
      {
      	memset(waiter, MUTEX_DEBUG_INIT, sizeof(*waiter));
      	waiter->magic = waiter;
      	INIT_LIST_HEAD(&waiter->list);
      }
      
      compiled as:
      
      800554d0 <debug_mutex_lock_common>:
      800554d0:       e92d4008        push    {r3, lr}
      800554d4:       e1a00001        mov     r0, r1
      800554d8:       e3a02010        mov     r2, #16 ; 0x10
      800554dc:       e3a01011        mov     r1, #17 ; 0x11
      800554e0:       eb04426e        bl      80165ea0 <memset>
      800554e4:       e1a03000        mov     r3, r0
      800554e8:       e583000c        str     r0, [r3, #12]
      800554ec:       e5830000        str     r0, [r3]
      800554f0:       e5830004        str     r0, [r3, #4]
      800554f4:       e8bd8008        pop     {r3, pc}
      
      GCC assumes that memset returns the value of pointer 'waiter' in register r0,
      causing register/memory corruption.
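      
      For reference, a hedged C illustration of the guarantee GCC relies on
      ('struct waiter' and 'init_waiter' are invented for this note and are not
      part of the patch): the C standard requires memset() to return its first
      argument, so the compiler may reuse that returned pointer for the stores
      that follow the call, exactly as the disassembly above does with r0/r3.
      
      #include <string.h>
      
      struct waiter { struct waiter *magic; };
      
      struct waiter *init_waiter(struct waiter *w)
      {
      	struct waiter *p = memset(w, 0x11, sizeof(*w));	/* p == w, per the C standard */
      	p->magic = p;	/* GCC may address this store through memset()'s return value */
      	return p;
      }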
      
      This patch fixes the return value of the assembly version of memset.
      It adds a 'mov' instruction and merges an additional load+store into
      existing load/store instructions.
      For ease of review, here is a breakdown of the patch into 4 simple steps:
      
      Step 1
      ======
      Perform the following substitutions:
      ip -> r8, then
      r0 -> ip,
      and insert 'mov ip, r0' as the first statement of the function.
      At this point, we have a memset() implementation returning the proper result,
      but corrupting r8 on some paths (the ones that were using ip).
      
      Step 2
      ======
      Make sure r8 is saved and restored when (! CALGN(1)+0) == 1:
      
      save r8:
      -       str     lr, [sp, #-4]!
      +       stmfd   sp!, {r8, lr}
      
      and restore r8 on both exit paths:
      -       ldmeqfd sp!, {pc}               @ Now <64 bytes to go.
      +       ldmeqfd sp!, {r8, pc}           @ Now <64 bytes to go.
      (...)
              tst     r2, #16
              stmneia ip!, {r1, r3, r8, lr}
      -       ldr     lr, [sp], #4
      +       ldmfd   sp!, {r8, lr}
      
      Step 3
      ======
      Make sure r8 is saved and restored when (! CALGN(1)+0) == 0:
      
      save r8:
      -       stmfd   sp!, {r4-r7, lr}
      +       stmfd   sp!, {r4-r8, lr}
      
      and restore r8 on both exit paths:
              bgt     3b
      -       ldmeqfd sp!, {r4-r7, pc}
      +       ldmeqfd sp!, {r4-r8, pc}
      (...)
              tst     r2, #16
              stmneia ip!, {r4-r7}
      -       ldmfd   sp!, {r4-r7, lr}
      +       ldmfd   sp!, {r4-r8, lr}
      
      Step 4
      ======
      Rewrite register list "r4-r7, r8" as "r4-r8".
      Signed-off-by: Ivan Djelic <ivan.djelic@parrot.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Dirk Behme <dirk.behme@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  6. 07 Mar, 2013 (6 commits)
  7. 06 Mar, 2013 (1 commit)
  8. 05 Mar, 2013 (10 commits)
  9. 04 Mar, 2013 (14 commits)