1. 20 Jun 2008, 1 commit
  2. 18 Jun 2008, 3 commits
    • [POWERPC] Clear sub-page HPTE present bits when demoting page size · 65ba6cdc
      Authored by Paul Mackerras
      When we demote a slice from 64k to 4k, and we are about to insert an
      HPTE for a 4k subpage and we notice that there is an existing 64k
      HPTE, we first invalidate that HPTE before inserting the new 4k
      subpage HPTE.  Since the bits that encode which hash bucket the old
      HPTE was in overlap with the bits that encode which of the 16 subpages
      have HPTEs, we need to clear out the subpage HPTE-present bits before
      starting to insert HPTEs for the 4k subpages.  If we don't do that, we
      can erroneously think that a subpage already has an HPTE when it
      doesn't.
      
      That in itself wouldn't be such a problem except that when we go to
      update the HPTE that we think is present on machines with a
      hypervisor, the hypervisor can tell us that the HPTE we think is there
      is actually there even though it isn't, which can lead to a process
      getting stuck in a loop, continually faulting.  The reason for the
      confusion is that the AVPN (abbreviated virtual page number) we are
      looking for in the HPTE for a 4k subpage can actually match the AVPN
      in a stale HPTE for another 64k page.  For example, the HPTE for
      the 4k subpage at 0x84000f000 will be in the same hash bucket and have
      the same AVPN as the HPTE for the 64k page at 0x8400f0000.
      
      This fixes the code to clear out the subpage HPTE-present bits.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
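      To make the bookkeeping described above concrete, here is a minimal, hedged C sketch of the idea: the same PTE bit positions do double duty, recording where the old 64k HPTE went while the page is mapped at 64k and acting as a 16-bit "this 4k subpage already has an HPTE" bitmap after demotion, so the demotion path must wipe them before any 4k HPTEs are inserted. Every name and bit position below (demote_to_4k, SUBPAGE_PRESENT_SHIFT, and so on) is hypothetical, not the actual arch/powerpc layout.

        #include <stdint.h>

        /* Hypothetical layout: 16 subpage-present bits somewhere in the PTE. */
        #define SUBPAGE_PRESENT_SHIFT  12
        #define SUBPAGE_PRESENT_MASK   (0xffffULL << SUBPAGE_PRESENT_SHIFT)

        /* Called once when a 64k mapping is demoted to 4k subpages. */
        static inline void demote_to_4k(uint64_t *pte)
        {
                /*
                 * These bit positions previously encoded where the old 64k
                 * HPTE sat in the hash table.  Left uncleared, they would be
                 * misread as "this subpage already has an HPTE", and a later
                 * fault could try to update an HPTE that is stale -- or one
                 * whose AVPN happens to match a different 64k page, as in the
                 * 0x84000f000 / 0x8400f0000 example above.
                 */
                *pte &= ~SUBPAGE_PRESENT_MASK;
        }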
    • [POWERPC] 4xx: Clear new TLB cache attribute bits in Data Storage vector · b17879f7
      Authored by Josh Boyer
      A recent commit added support for the new 440x6 and 464 cores that have the
      added WL1, IL1I, IL1D, IL2I, and IL2D bits for the caching attributes in the
      TLBs.  The new bits were cleared in the finish_tlb_load function; however, a
      similar bit of code was missed in the DataStorage interrupt vector.
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
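      For readers not steeped in the 44x TLB-load paths, the sketch below shows the shape of the fix in C; the real change is in the 44x assembly exception vectors, and every name and bit position here (TLB_ATTR_WL1 and friends, build_tlb_attr_word) is a hypothetical stand-in. The point is only that the DataStorage path must mask the new 440x6/464 attribute bits the same way finish_tlb_load does before the value reaches the TLB.

        #include <stdint.h>

        /* Hypothetical bit positions for the new 440x6/464 attributes. */
        #define TLB_ATTR_WL1    (1u << 19)      /* write-through L1             */
        #define TLB_ATTR_IL1I   (1u << 18)      /* inhibit L1 instruction cache */
        #define TLB_ATTR_IL1D   (1u << 17)      /* inhibit L1 data cache        */
        #define TLB_ATTR_IL2I   (1u << 16)      /* inhibit L2 instruction cache */
        #define TLB_ATTR_IL2D   (1u << 15)      /* inhibit L2 data cache        */

        #define TLB_NEW_ATTR_MASK \
                (TLB_ATTR_WL1 | TLB_ATTR_IL1I | TLB_ATTR_IL1D | \
                 TLB_ATTR_IL2I | TLB_ATTR_IL2D)

        /*
         * Both finish_tlb_load and the DataStorage vector should build the
         * attribute word with the new bits cleared, so that whatever happens
         * to sit in those PTE bit positions cannot leak into the TLB entry
         * as cache attributes.
         */
        static inline uint32_t build_tlb_attr_word(uint32_t pte_bits)
        {
                return pte_bits & ~TLB_NEW_ATTR_MASK;
        }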
    • x86-64: Fix "bytes left to copy" return value for copy_from_user() · 42a886af
      Authored by Linus Torvalds
      Most users by far do not care about the exact return value (they only
      really care about whether the copy succeeded in its entirety or not),
      but a few special core routines actually care deeply about exactly how
      many bytes were copied from user space.
      
      And the unrolled versions of the x86-64 user copy routines would
      sometimes report that they had copied more bytes than they actually had.
      
      Very few uses actually have partial copies to begin with, but to make
      this bug even harder to trigger, most x86 CPUs use the "rep string"
      instructions for normal user copies, and that version didn't have this
      issue.
      
      To make it even harder to hit, the one user of this that really cared
      about the return value (and used the uncached version of the copy that
      doesn't use the "rep string" instructions) was the generic write
      routine, which pre-populated its source, once more hiding the problem by
      avoiding the exception case that triggers the bug.
      
      In other words, very special thanks to Bron Gondwana, who not only
      triggered this but created a test program to show it, and bisected the
      behavior down to commit 08291429 ("mm: fix pagecache write deadlocks"),
      which changed the access pattern just enough that you can now trigger
      it with 'writev()' with multiple iovecs.
      
      That commit itself was not the cause of the bug, it just allowed all the
      stars to align just right that you could trigger the problem.
      
      [ Side note: this is just the minimal fix to make the copy routines
        (with __copy_from_user_inatomic_nocache as the particular version that
        was involved in showing this) have the right return values.
      
        We really should improve on the exceptional case further - to make the
        copy do a byte-accurate copy up to the exact page limit that causes it
        to fail.  As it is, the callers have to do extra work to handle the
        limit case gracefully. ]
      Reported-by: Bron Gondwana <brong@fastmail.fm>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      
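      The contract being fixed here is the one copy_from_user() already documents: the return value is the number of bytes that could not be copied, and zero means the whole copy succeeded. Below is a hedged C sketch of a caller that, like the generic write path, genuinely needs that count to be exact; the wrapper function itself is made up for illustration.

        #include <linux/types.h>
        #include <linux/uaccess.h>

        /*
         * Hypothetical helper: pull up to 'len' bytes from user space and
         * report exactly how many arrived.  copy_from_user() returns the
         * number of bytes left uncopied, so 'len - left' is the real count.
         */
        static ssize_t pull_from_user(void *dst, const void __user *src, size_t len)
        {
                size_t left = copy_from_user(dst, src, len);

                if (left == len)
                        return -EFAULT;         /* nothing was copied */

                /*
                 * If the copy routine over-reports 'left' (the bug fixed by
                 * this commit), the caller under-counts the bytes that really
                 * landed in 'dst' -- exactly the partial-copy bookkeeping the
                 * generic write path depends on.
                 */
                return len - left;
        }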
  3. 17 Jun 2008, 2 commits
  4. 16 Jun 2008, 28 commits
  5. 13 Jun 2008, 6 commits