1. 30 Nov, 2006 1 commit
  2. 23 Nov, 2006 1 commit
  3. 08 Nov, 2006 1 commit
  4. 03 Nov, 2006 1 commit
  5. 09 Oct, 2006 1 commit
  6. 30 Sep, 2006 3 commits
    • [ARM] Fix XIP_KERNEL build error in arch/arm/mm/mmu.c · 6ae5a6ef
      Authored by Russell King
      XIP kernels need to know the start/end of text, but we were
      missing the declaration of _etext in mmu.c.  Add it.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
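      For context, _etext is a linker-script symbol marking the end of the
      kernel's text section; the fix amounts to making it visible to C code
      in mmu.c.  A minimal sketch of such a declaration (illustrative, not
      necessarily the verbatim patch):

              /* _etext is defined by the linker script, not by any C file;
               * declaring it lets mmu.c compute the end of .text. */
              extern char _etext[];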
    • [PATCH] pidspace: is_init() · f400e198
      Authored by Sukadev Bhattiprolu
      This is an updated version of Eric Biederman's is_init() patch.
      (http://lkml.org/lkml/2006/2/6/280).  It applies cleanly to 2.6.18-rc3 and
      replaces a few more instances of ->pid == 1 with is_init().
      
      Further, is_init() checks the pid directly and thus removes the
      dependency on Eric's other patches for now.
      
      Eric's original description:
      
      	There are a lot of places in the kernel where we test for init
      	because we give it special properties.  Most significantly, init
      	must not die.  This results in code all over the kernel testing
      	->pid == 1.
      
      	Introduce is_init to capture this case.
      
      	With multiple pid spaces, in all of the affected cases we are
      	looking for only the first process on the system, not some other
      	process that happens to have pid == 1.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Serge Hallyn <serue@us.ibm.com>
      Cc: Cedric Le Goater <clg@fr.ibm.com>
      Cc: <lxc-devel@lists.sourceforge.net>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
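      Per the description, the helper reduces to a pid check for now; a
      sketch consistent with the message (not necessarily the verbatim
      patch):

              static inline int is_init(struct task_struct *tsk)
              {
                      /* For now this is exactly the check it replaces;
                       * with pid namespaces it can later be made
                       * namespace-aware without touching the callers. */
                      return tsk->pid == 1;
              }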
    • [PATCH] make PROT_WRITE imply PROT_READ · df67b3da
      Authored by Jason Baron
      Make PROT_WRITE imply PROT_READ for a number of architectures which don't
      support write only in hardware.
      
      While looking at this, I noticed that some architectures which do not
      support write only mappings already take the exact same approach.  For
      example, in arch/alpha/mm/fault.c:
      
      "
              if (cause < 0) {
                      if (!(vma->vm_flags & VM_EXEC))
                              goto bad_area;
              } else if (!cause) {
                      /* Allow reads even for write-only mappings */
                      if (!(vma->vm_flags & (VM_READ | VM_WRITE)))
                              goto bad_area;
              } else {
                      if (!(vma->vm_flags & VM_WRITE))
                              goto bad_area;
              }
      "
      
      Thus, this patch brings other architectures which do not support write only
      mappings in-line and consistent with the rest.  I've verified the patch on
      ia64, x86_64 and x86.
      
      Additional discussion:
      
      Several architectures, including x86, cannot support write-only mappings.
      The x86 pte reserves a single bit for protection, and its two states are
      read-only or read/write.  Thus, write-only is not supported in h/w.
      
      Currently, if I 'mmap' a page write-only, the first read attempt on that page
      creates a page fault and will SEGV.  That check is enforced in
      arch/blah/mm/fault.c.  However, if I first write to that page it will fault in
      and the pte will be set to read/write.  Thus, any subsequent reads of the page
      will succeed.  It is this inconsistency in behavior that this patch is
      attempting to address.  Furthermore, if the page is swapped out and then
      brought back, the first read will also cause a SEGV.  Thus, any arbitrary read
      of a page can potentially result in a SEGV.
      
      According to the SUSv3 spec, "if the application requests only PROT_WRITE, the
      implementation may also allow read access."  Also, as mentioned, some
      architectures, such as alpha (shown above), already take the approach that I
      am suggesting.
      
      The counter-argument to this, raised by Arjan, is that the kernel is enforcing
      the write-only mapping as best it can given the h/w limitations.  This is
      true; however, Alan Cox and I would argue that the inconsistency in behavior,
      that is, applications sometimes working and sometimes failing, is highly
      undesirable.  If you read through the thread, I think people came to an
      agreement on the last patch I posted, as nobody has objected to it...
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Roman Zippel <zippel@linux-m68k.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Andi Kleen <ak@muc.de>
      Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
      Cc: Ian Molton <spyro@f2s.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
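      A small userspace probe (an illustrative sketch, not part of the patch)
      shows the inconsistency being fixed: on an architecture without
      write-only hardware support, whether the first read below succeeded
      used to depend on whether the page had already been faulted in by a
      write.

              #include <stdio.h>
              #include <sys/mman.h>

              int main(void)
              {
                      /* Request a write-only anonymous page. */
                      char *p = mmap(NULL, 4096, PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                      if (p == MAP_FAILED)
                              return 1;
                      /* Reading first used to SEGV; with PROT_WRITE
                       * implying PROT_READ it is allowed. */
                      printf("first read: %d\n", p[0]);
                      p[0] = 42;
                      printf("read after write: %d\n", p[0]);
                      return 0;
              }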
  7. 29 Sep, 2006 2 commits
  8. 28 Sep, 2006 8 commits
  9. 27 Sep, 2006 7 commits
  10. 26 Sep, 2006 1 commit
    • [PATCH] Standardize pxx_page macros · 46a82b2d
      Authored by Dave McCracken
      One of the changes necessary for shared page tables is to standardize the
      pxx_page macros.  pte_page and pmd_page have always returned the struct
      page associated with their entry, while pte_page_kernel and pmd_page_kernel
      have returned the kernel virtual address.  pud_page and pgd_page, on the
      other hand, return the kernel virtual address.
      
      Shared page tables need pud_page and pgd_page to return the actual page
      structures.  There are very few actual users of these functions, so it is
      simple to standardize their usage.
      
      Since this is basic cleanup, I am submitting these changes as a standalone
      patch.  Per Hugh Dickins' comments about it, I am also changing the
      pxx_page_kernel macros to pxx_page_vaddr to clarify their meaning.
      Signed-off-by: Dave McCracken <dmccr@us.ibm.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
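      The naming convention after the change, sketched for one level on a
      hypothetical architecture (illustrative, not the verbatim patch; the
      exact definitions are per-architecture):

              /* pxx_page(e)       -> struct page * backing the entry      */
              /* pxx_page_vaddr(e) -> kernel virtual address of that page, */
              /*                      formerly pxx_page_kernel(e)          */
              #define pmd_page_vaddr(pmd) \
                      ((unsigned long)__va(pmd_val(pmd) & PAGE_MASK))
              #define pmd_page(pmd) \
                      virt_to_page((void *)pmd_page_vaddr(pmd))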
  11. 25 Sep, 2006 4 commits
  12. 20 Sep, 2006 2 commits
  13. 15 Sep, 2006 1 commit
  14. 07 Sep, 2006 1 commit
  15. 03 Sep, 2006 1 commit
    • [ARM] 3762/1: Fix ptrace cache coherency bug for ARM1136 VIPT nonaliasing Harvard caches · a188ad2b
      Authored by George G. Davis
      Patch from George G. Davis
      
      Resolve ARM1136 VIPT non-aliasing cache coherency issues observed when
      using ptrace to set breakpoints, and clean up copy_{to,from}_user_page()
      while we're here, as requested by Russell King because "it's also far
      too heavy on non-v6 CPUs".
      
      NOTES:
      
      1. Only access_process_vm() calls copy_{to,from}_user_page().
      2. access_process_vm() calls get_user_pages() to pin down the "page".
      3. get_user_pages() calls flush_dcache_page(page) which ensures cache
         coherency between kernel and userspace mappings of "page".  However
         flush_dcache_page(page) may not invalidate I-Cache over this range
         for all cases, specifically, I-Cache is not invalidated for the VIPT
         non-aliasing case.  So memory is consistent between kernel and user
         space mappings of "page" but I-Cache may still be hot over this
         range.  IOW, we don't have to worry about flush_cache_page() before
         memcpy().
      4. Now, for the copy_to_user_page() case, after memcpy(), we must flush
         the caches so memory is consistent with kernel cache entries and
         invalidate the I-Cache if this mm region is executable.  We don't
         need to do anything after memcpy() for the copy_from_user_page()
         case since kernel cache entries will be invalidated via the same
         process above if we access "page" again.  The flush_ptrace_access()
         function (borrowed from SPARC64 implementation) is added to handle
         cache flushing after memcpy() for the copy_to_user_page() case.
      Signed-off-by: George G. Davis <gdavis@mvista.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
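      The shape described in note 4, sketched below (consistent with the
      message; the real ARM implementation and the exact flush_ptrace_access()
      parameters have per-cache-type details not shown here):

              void copy_to_user_page(struct vm_area_struct *vma,
                                     struct page *page, unsigned long uaddr,
                                     void *dst, const void *src,
                                     unsigned long len)
              {
                      memcpy(dst, src, len);
                      /* Make memory consistent with kernel cache entries
                       * and invalidate the I-cache if this mm region is
                       * executable. */
                      flush_ptrace_access(vma, page, uaddr, dst, len, 1);
              }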
  16. 28 Aug, 2006 1 commit
  17. 29 Jul, 2006 3 commits
  18. 03 Jul, 2006 1 commit