1. 04 Jul 2008, 1 commit
  2. 22 Jun 2008, 2 commits
  3. 21 Jun 2008, 1 commit
• Reinstate ZERO_PAGE optimization in 'get_user_pages()' and fix XIP · 89f5b7da
  Authored by Linus Torvalds
      KAMEZAWA Hiroyuki and Oleg Nesterov point out that since the commit
      557ed1fa ("remove ZERO_PAGE") removed
      the ZERO_PAGE from the VM mappings, any users of get_user_pages() will
      generally now populate the VM with real empty pages needlessly.
      
      We used to get the ZERO_PAGE when we did the "handle_mm_fault()", but
      since fault handling no longer uses ZERO_PAGE for new anonymous pages,
      we now need to handle that special case in follow_page() instead.
      
      In particular, the removal of ZERO_PAGE effectively removed the core
      file writing optimization where we would skip writing pages that had not
      been populated at all, and increased memory pressure a lot by allocating
      all those useless newly zeroed pages.
      
      This reinstates the optimization by making the unmapped PTE case the
      same as for a non-existent page table, which already did this correctly.
      
      While at it, this also fixes the XIP case for follow_page(), where the
      caller could not differentiate between the case of a page that simply
      could not be used (because it had no "struct page" associated with it)
      and a page that just wasn't mapped.
      
      We do that by simply returning an error pointer for pages that could not
      be turned into a "struct page *".  The error is arbitrarily picked to be
      EFAULT, since that was what get_user_pages() already used for the
      equivalent IO-mapped page case.
      
      [ Also removed an impossible test for pte_offset_map_lock() failing:
        that's not how that function works ]
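
      A minimal sketch of the resulting calling convention, assuming the
      2.6.26-era follow_page() interface (the surrounding get_user_pages()
      bookkeeping is elided):

      	struct page *page = follow_page(vma, start, foll_flags);

      	if (IS_ERR(page))
      		return PTR_ERR(page);	/* no struct page behind it: -EFAULT */
      	if (!page) {
      		/* unmapped PTE: same as a missing page table, so fall
      		 * back to fault handling (or the ZERO_PAGE for reads) */
      	}
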
Acked-by: Oleg Nesterov <oleg@tv-sign.ru>
Acked-by: Nick Piggin <npiggin@suse.de>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Roland McGrath <roland@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
4. 13 Jun 2008, 2 commits
  5. 12 Jun 2008, 1 commit
• nommu: Correct kobjsize() page validity checks. · 5a1603be
  Authored by Paul Mundt
      This implements a few changes on top of the recent kobjsize() refactoring
      introduced by commit 6cfd53fc.
      
      As Christoph points out:
      
      	virt_to_head_page cannot return NULL. virt_to_page also
      	does not return NULL. pfn_valid() needs to be used to
      	figure out if a page is valid.  Otherwise the page struct
      	reference that was returned may have PageReserved() set
      	to indicate that it is not a valid page.
      
      As discussed further in the thread, virt_addr_valid() is the preferable
      way to validate the object pointer in this case. In addition to fixing
      up the reserved page case, it also has the benefit of encapsulating the
      hack introduced by commit 4016a139 on
      the impacted platforms, allowing us to get rid of the extra checking in
      kobjsize() for the platforms that don't perform this type of bizarre
      memory_end abuse (every nommu platform that isn't blackfin). If blackfin
      decides to get in line with every other platform and use PageReserved
      for the DMA pages in question, kobjsize() will also continue to work
      fine.
      
      It also turns out that compound_order() will give us back 0-order for
      non-head pages, so we can get rid of the PageCompound check and just
      use compound_order() directly. Clean that up while we're at it.
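
      A hedged sketch of the kobjsize() shape this describes (an
      approximation of the nommu code, not a verbatim quote):

      	size_t kobjsize(const void *objp)
      	{
      		struct page *page;

      		/* virt_addr_valid() rejects bogus pointers, including
      		 * the PageReserved case Christoph describes */
      		if (!objp || !virt_addr_valid(objp))
      			return 0;

      		page = virt_to_head_page(objp);
      		if (PageSlab(page))
      			return ksize(objp);

      		/* compound_order() is 0 for order-0 pages, so no
      		 * separate PageCompound check is needed */
      		return PAGE_SIZE << compound_order(page);
      	}
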
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Reviewed-by: Christoph Lameter <clameter@sgi.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6. 10 Jun 2008, 1 commit
  7. 07 Jun 2008, 3 commits
  8. 25 May 2008, 5 commits
• memory hotplug: fix early allocation handling · cd94b9db
  Authored by Heiko Carstens
      Trying to add memory via add_memory() from within an initcall function
      results in
      
      bootmem alloc of 163840 bytes failed!
      Kernel panic - not syncing: Out of memory
      
      This is caused by zone_wait_table_init() which uses system_state to decide
      if it should use the bootmem allocator or not.
      
      When initcalls are handled the system_state is still SYSTEM_BOOTING but
      the bootmem allocator doesn't work anymore.  So the allocation will fail.
      
To fix this, use slab_is_available() as the indicator instead, as we do
everywhere else.
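
      A sketch of the indicator change (allocation details simplified; the
      vmalloc fallback stands in for the non-bootmem path):

      	/* bootmem is only usable before slab is up, so key off
      	 * slab_is_available() rather than system_state */
      	if (!slab_is_available())
      		table = alloc_bootmem_node(pgdat, size);
      	else
      		table = vmalloc(size);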
      
      [akpm@linux-foundation.org: coding-style fix]
Reviewed-by: Andy Whitcroft <apw@shadowen.org>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• zonelists: handle a node zonelist with no applicable entries · 7eb54824
  Authored by Andy Whitcroft
      When booting 2.6.26-rc3 on a multi-node x86_32 numa system we are seeing
      panics when trying node local allocations:
      
       BUG: unable to handle kernel NULL pointer dereference at 0000034c
       IP: [<c1042507>] get_page_from_freelist+0x4a/0x18e
       *pdpt = 00000000013a7001 *pde = 0000000000000000
       Oops: 0000 [#1] SMP
       Modules linked in:
      
       Pid: 0, comm: swapper Not tainted (2.6.26-rc3-00003-g5abc28d #82)
       EIP: 0060:[<c1042507>] EFLAGS: 00010282 CPU: 0
       EIP is at get_page_from_freelist+0x4a/0x18e
       EAX: c1371ed8 EBX: 00000000 ECX: 00000000 EDX: 00000000
       ESI: f7801180 EDI: 00000000 EBP: 00000000 ESP: c1371ec0
        DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
       Process swapper (pid: 0, ti=c1370000 task=c12f5b40 task.ti=c1370000)
       Stack: 00000000 00000000 00000000 00000000 000612d0 000412d0 00000000 000412d0
              f7801180 f7c0101c f7c01018 c10426e4 f7c01018 00000001 00000044 00000000
              00000001 c12f5b40 00000001 00000010 00000000 000412d0 00000286 000412d0
       Call Trace:
        [<c10426e4>] __alloc_pages_internal+0x99/0x378
        [<c10429ca>] __alloc_pages+0x7/0x9
        [<c105e0e8>] kmem_getpages+0x66/0xef
        [<c105ec55>] cache_grow+0x8f/0x123
        [<c105f117>] ____cache_alloc_node+0xb9/0xe4
        [<c105f427>] kmem_cache_alloc_node+0x92/0xd2
        [<c122118c>] setup_cpu_cache+0xaf/0x177
        [<c105e6ca>] kmem_cache_create+0x2c8/0x353
        [<c13853af>] kmem_cache_init+0x1ce/0x3ad
        [<c13755c5>] start_kernel+0x178/0x1ee
      
      This occurs when we are scanning the zonelists looking for a ZONE_NORMAL
      page.  In this system there is only ZONE_DMA and ZONE_NORMAL memory on
      node 0, all other nodes are mapped above 4GB physical.  Here is a dump
      of the zonelists from this system:
      
          zonelists pgdat=c1400000
           0: c14006c0:2 f7c006c0:2 f7e006c0:2 c1400360:1 c1400000:0
           1: c14006c0:2 c1400360:1 c1400000:0
          zonelists pgdat=f7c00000
           0: f7c006c0:2 f7e006c0:2 c14006c0:2 c1400360:1 c1400000:0
           1: f7c006c0:2
          zonelists pgdat=f7e00000
           0: f7e006c0:2 c14006c0:2 f7c006c0:2 c1400360:1 c1400000:0
           1: f7e006c0:2
      
      When performing a node local allocation we call get_page_from_freelist()
      looking for a page.  It in turn calls first_zones_zonelist() which returns
      a preferred_zone.  Where there are no applicable zones this will be NULL.
      However we use this unconditionally, leading to this panic.
      
      Where there are no applicable zones there is no possibility of a successful
      allocation, so simply fail the allocation.
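
      A sketch of the resulting check in get_page_from_freelist(), assuming
      the 2.6.26 first_zones_zonelist() interface:

      	/* with no applicable zones, preferred_zone comes back NULL */
      	first_zones_zonelist(zonelist, high_zoneidx, nodemask,
      			     &preferred_zone);
      	if (!preferred_zone)
      		return NULL;	/* no zone can satisfy this allocation */
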
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: fix atomic_t overflow in vm · 80119ef5
  Authored by Alan Cox
The atomic_t type is 32-bit, but a 64-bit system can have more than 2^32
pages of virtual address space available.  Without this fix we overflow on
ludicrously large mappings.
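
      The kind of type change this implies, sketched with an illustrative
      counter name:

      	/* page counts can exceed 2^32 on 64-bit: use a long-sized atomic */
      	atomic_long_t vm_committed_space;	/* was: atomic_t */

      	atomic_long_add(nr_pages, &vm_committed_space);
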
Signed-off-by: Alan Cox <alan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: don't drop a partial page in a zone's memory map size · f7232154
  Authored by Johannes Weiner
In a zone's count of present pages, account for all pages occupied by the
memory map, including a trailing partial page.
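
      A sketch of the rounding this calls for when sizing the memmap during
      zone initialization:

      	/* round up so a trailing partial page of memmap is counted too */
      	memmap_pages = PAGE_ALIGN(size * sizeof(struct page)) >> PAGE_SHIFT;
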
Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: allow pfnmap ->fault()s · 42172d75
  Authored by Nick Piggin
      Take out an assertion to allow ->fault handlers to service PFNMAP regions.
      This is required to reimplement .nopfn handlers with .fault handlers and
      subsequently remove nopfn.
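
      A sketch of the kind of assertion removed (the exact site is not
      quoted in the message):

      	/* removed: ->fault handlers may now service PFNMAP regions */
      	BUG_ON(vma->vm_flags & VM_PFNMAP);
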
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Jes Sorensen <jes@sgi.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
9. 23 May 2008, 1 commit
  10. 21 May 2008, 1 commit
• mm: bdi: fix race in bdi_class device creation · 19051c50
  Authored by Greg Kroah-Hartman
There is a race between when a device is created with device_create() and
when its drvdata is set with a call to dev_set_drvdata(): in that window a
sysfs file could be opened while the drvdata is still NULL, causing all
sorts of bad things to happen.
      
      This patch fixes the problem by using the new function,
      device_create_vargs().
      
      Many thanks to Arthur Jones <ajones@riverbed.com> for reporting the bug,
      and testing patches out.
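
      A sketch of the race-free pattern, assuming the device_create_vargs()
      interface (arguments other than the drvdata pointer are illustrative):

      	/* hand drvdata to the driver core at creation time, before
      	 * any sysfs file for the device can be opened */
      	dev = device_create_vargs(bdi_class, NULL, MKDEV(0, 0),
      				  bdi, fmt, args);
      	if (IS_ERR(dev))
      		return PTR_ERR(dev);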
      
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: Arthur Jones <ajones@riverbed.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
11. 20 May 2008, 1 commit
  12. 15 May 2008, 6 commits
• memory_hotplug: always initialize pageblock bitmap · 76cdd58e
  Authored by Heiko Carstens
      Trying to online a new memory section that was added via memory hotplug
      sometimes results in crashes when the new pages are added via __free_page.
The reason is that the pageblock bitmap isn't initialized and hence
contains random stuff.  That means that get_pageblock_migratetype()
also returns random stuff, and therefore
      
      	list_add(&page->lru,
      		&zone->free_area[order].free_list[migratetype]);
      
      in __free_one_page() tries to do a list_add to something that isn't even
      necessarily a list.
      
This has happened since 86051ca5 ("mm: fix
usemap initialization"), which makes sure that the pageblock bitmap only
gets initialized for pages present in a zone.  Unfortunately, for hot-added
memory the zones "grow" after the memmap and the pageblock bitmap have
been initialized, which means that the new pages have an uninitialized
bitmap.  To solve this, the calls to grow_zone_span() and grow_pgdat_span()
are moved to __add_zone(), just before the initialization happens.
      
      The patch also moves the two functions since __add_zone() is the only
      caller and I didn't want to add a forward declaration.
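
      An ordering sketch of the fix, assuming the 2.6.26-era hotplug
      helpers:

      	/* grow the spans first, so the zone covers the new range... */
      	grow_zone_span(zone, phys_start_pfn, phys_start_pfn + nr_pages);
      	grow_pgdat_span(zone->zone_pgdat, phys_start_pfn,
      			phys_start_pfn + nr_pages);
      	/* ...before the memmap and pageblock bitmap are initialized */
      	memmap_init_zone(nr_pages, nid, zone_type, phys_start_pfn,
      			 MEMMAP_HOTPLUG);
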
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mprotect: prevent alteration of the PAT bits · 1c12c4cf
  Authored by Venki Pallipadi
There is a defect in mprotect, which lets the user change the page cache
type bits, bypassing the kernel reserve_memtype and free_memtype
wrappers.  Fix the problem by not letting mprotect change the PAT bits.
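
      A sketch close to the x86 helper this adds: derive the new protection
      from the old one so the cache-type (PAT) bits cannot be altered:

      	static inline pgprot_t pgprot_modify(pgprot_t oldprot,
      					     pgprot_t newprot)
      	{
      		/* keep the PAT/cache bits from the existing mapping */
      		pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
      		pgprotval_t addbits = pgprot_val(newprot);
      		return __pgprot(preservebits | addbits);
      	}
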
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• memory_hotplug: check for walk_memory_resource() failure in online_pages() · fd8a4221
  Authored by Geoff Levand
      Add a check to online_pages() to test for failure of
      walk_memory_resource().  This fixes a condition where a failure
      of walk_memory_resource() can lead to online_pages() returning
      success without the requested pages being onlined.
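
      A sketch of the added error propagation in online_pages():

      	ret = walk_memory_resource(pfn, nr_pages, &onlined_pages,
      				   online_pages_range);
      	if (ret) {
      		/* don't report success when nothing was onlined */
      		return ret;
      	}
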
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Cc: Keith Mannthey <kmannth@us.ibm.com>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• memory hotplug: memmap_init_zone called twice · c3723ca3
  Authored by Heiko Carstens
__add_zone calls memmap_init_zone twice if memory gets attached to an empty
zone: once via init_currently_empty_zone and once explicitly right after that
call.
      
Looks like this is currently not a bug; however, the call is superfluous and
might lead to subtle bugs if memmap_init_zone gets changed.  So make sure it
is called only once.
      
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: fix infinite loop in filemap_fault · 3ef0f720
  Authored by Miklos Szeredi
      filemap_fault will go into an infinite loop if ->readpage() fails
      asynchronously.
      
      AFAICS the bug was introduced by this commit, which removed the wait after the
      final readpage:
      
         commit d00806b1
         Author: Nick Piggin <npiggin@suse.de>
         Date:   Thu Jul 19 01:46:57 2007 -0700
      
             mm: fix fault vs invalidate race for linear mappings
      
      Fix by reintroducing the wait_on_page_locked() after ->readpage() to make sure
      the page is up-to-date before jumping back to the beginning of the function.
      
      I've noticed this while testing nfs exporting on fuse.  The patch
      fixes it.
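
      A sketch of the reintroduced wait after the final ->readpage():

      	error = mapping->a_ops->readpage(file, page);
      	if (!error) {
      		/* block until the async read completes, then check the
      		 * result instead of looping back to the top forever */
      		wait_on_page_locked(page);
      		if (!PageUptodate(page))
      			error = -EIO;
      	}
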
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• fix SMP data race in pagetable setup vs walking · 362a61ad
  Authored by Nick Piggin
      There is a possible data race in the page table walking code. After the split
      ptlock patches, it actually seems to have been introduced to the core code, but
      even before that I think it would have impacted some architectures (powerpc
and sparc64, at least, walk the page tables without taking locks, e.g. see
      find_linux_pte()).
      
      The race is as follows:
      The pte page is allocated, zeroed, and its struct page gets its spinlock
      initialized. The mm-wide ptl is then taken, and then the pte page is inserted
      into the pagetables.
      
      At this point, the spinlock is not guaranteed to have ordered the previous
      stores to initialize the pte page with the subsequent store to put it in the
      page tables. So another Linux page table walker might be walking down (without
      any locks, because we have split-leaf-ptls), and find that new pte we've
      inserted. It might try to take the spinlock before the store from the other
      CPU initializes it. And subsequently it might read a pte_t out before stores
      from the other CPU have cleared the memory.
      
      There are also similar races in higher levels of the page tables. They
      obviously don't involve the spinlock, but could see uninitialized memory.
      
      Arch code and hardware pagetable walkers that walk the pagetables without
      locks could see similar uninitialized memory problems, regardless of whether
      split ptes are enabled or not.
      
      I prefer to put the barriers in core code, because that's where the higher
level logic happens, but the page table accessors are per-arch, and I don't
think open-coding them everywhere is an option.  I'll put the read-side barriers
      in alpha arch code for now (other architectures perform data-dependent loads
      in order).
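
      A write-side sketch of the barrier placement (this is the pte-page
      case; the higher levels get the same treatment):

      	/* order the stores initializing the new pte page before the
      	 * store that publishes it in the page tables */
      	smp_wmb();
      	pmd_populate(mm, pmd, new);
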
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
13. 13 May 2008, 2 commits
  14. 09 May 2008, 1 commit
  15. 07 May 2008, 2 commits
• vfs: splice remove_suid() cleanup · 7f3d4ee1
  Authored by Miklos Szeredi
      generic_file_splice_write() duplicates remove_suid() just because it
      doesn't hold i_mutex.  But it grabs i_mutex inside splice_from_pipe()
      anyway, so this is rather pointless.
      
      Move locking to generic_file_splice_write() and call remove_suid() and
      __splice_from_pipe() instead.
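
      A sketch of the restructured caller (the exact locking helper may
      differ; this shows the shape of the change):

      	mutex_lock(&inode->i_mutex);
      	ret = remove_suid(out->f_path.dentry);	/* now under i_mutex */
      	if (likely(!ret))
      		ret = __splice_from_pipe(pipe, &sd, pipe_to_file);
      	mutex_unlock(&inode->i_mutex);
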
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
• x86: fix PAE pmd_bad bootup warning · aeed5fce
  Authored by Hugh Dickins
      Fix warning from pmd_bad() at bootup on a HIGHMEM64G HIGHPTE x86_32.
      
That came from 9fc34113 ("x86: debug pmd_bad()");
but we understand now that the typecasting was wrong for PAE in the previous
version: pagetable pages above 4GB looked bad and stopped Arjan from booting.
      
And revert cded932b ("x86: fix pmd_bad
and pud_bad to support huge pages").  It was the wrong way round: we shouldn't
      weaken every pmd_bad and pud_bad check to let huge pages slip through - in
      part they check that we _don't_ have a huge page where it's not expected.
      
      Put the x86 pmd_bad() and pud_bad() definitions back to what they have long
      been: they can be improved (x86_32 should use PTE_MASK, to stop PAE thinking
      junk in the upper word is good; and x86_64 should follow x86_32's stricter
      comparison, to stop thinking any subset of required bits is good); but that
      should be a later patch.
      
      Fix Hans' good observation that follow_page() will never find pmd_huge()
      because that would have already failed the pmd_bad test: test pmd_huge in
      between the pmd_none and pmd_bad tests.  Tighten x86's pmd_huge() check?
      No, once it's a hugepage entry, it can get quite far from a good pmd: for
      example, PROT_NONE leaves it with only ACCESSED of the KERN_PGTABLE bits.
      
      However... though follow_page() contains this and another test for huge
      pages, so it's nice to keep it working on them, where does it actually get
called on a huge page?  get_user_pages() checks is_vm_hugetlb_page(vma) to
call alternative hugetlb processing, as do unmap_vmas() and others.
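
      A sketch of the reordered checks in follow_page():

      	if (pmd_none(*pmd))
      		goto no_page_table;
      	/* test pmd_huge() here: a hugepage entry would never pass
      	 * the pmd_bad() test below */
      	if (pmd_huge(*pmd)) {
      		page = follow_huge_pmd(mm, address, pmd,
      				       flags & FOLL_WRITE);
      		goto out;
      	}
      	if (unlikely(pmd_bad(*pmd)))
      		goto no_page_table;
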
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Earlier-version-tested-by: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jeff Chua <jeff.chua.linux@gmail.com>
      Cc: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
16. 02 May 2008, 2 commits
  17. 01 May 2008, 3 commits
  18. 30 Apr 2008, 5 commits
• revert "memory hotplug: allocate usemap on the section with pgdat" · 51674644
  Authored by Andrew Morton
      This:
      
      commit 86f6dae1
      Author: Yasunori Goto <y-goto@jp.fujitsu.com>
      Date:   Mon Apr 28 02:13:33 2008 -0700
      
          memory hotplug: allocate usemap on the section with pgdat
      
          Usemaps are allocated on the section which has pgdat by this.
      
    Because the usemap size is very small, many other sections' usemaps are
    allocated on only one page.  If a section holds a usemap, it can't be
    removed until the other sections are removed.  This dependency is not
    desirable for memory removal.
      
    Pgdat has a similar property.  When a section holds the pgdat area, it
    must be the last section removed on the node.  So, if section A has the
    pgdat and section B has the usemap for section A, neither section can be
    removed because of the dependency on each other.
      
    To solve this issue, this patch places the usemap on the same section as
    the pgdat.  If the other sections don't have any dependency, this section
    can finally be removed.
    Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
          Cc: Badari Pulavarty <pbadari@us.ibm.com>
          Cc: Yinghai Lu <yhlu.kernel@gmail.com>
          Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      
      broke davem's sparc64 bootup.  Revert it while we work out what went wrong.
      
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: fix warning on memory offline · 3a902c5f
  Authored by Nick Piggin
KAMEZAWA Hiroyuki found a warning message in the buffer dirtying code that
comes from the page migration caller.
      
      WARNING: at fs/buffer.c:720 __set_page_dirty+0x330/0x360()
      Call Trace:
       [<a000000100015220>] show_stack+0x80/0xa0
       [<a000000100015270>] dump_stack+0x30/0x60
       [<a000000100089ed0>] warn_on_slowpath+0x90/0xe0
       [<a0000001001f8b10>] __set_page_dirty+0x330/0x360
       [<a0000001001ffb90>] __set_page_dirty_buffers+0xd0/0x280
       [<a00000010012fec0>] set_page_dirty+0xc0/0x260
       [<a000000100195670>] migrate_page_copy+0x5d0/0x5e0
       [<a000000100197840>] buffer_migrate_page+0x2e0/0x3c0
       [<a000000100195eb0>] migrate_pages+0x770/0xe00
      
What was happening is that migrate_page_copy wants to transfer the PG_dirty
bit from the old page to the new page, so what it would do is
set_page_dirty(newpage).  However set_page_dirty() is used to set the entire
page dirty, whereas in this case only part of the page was dirty, and it
also was not uptodate.
      
      Marking the whole page dirty with set_page_dirty would lead to corruption or
      unresolvable conditions -- a dirty && !uptodate page and dirty && !uptodate
      buffers.
      
      Possibly we could just ClearPageDirty(oldpage); SetPageDirty(newpage);
      however in the interests of keeping the change minimal...
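
      A sketch of the minimal change in migrate_page_copy():

      	if (PageDirty(page)) {
      		clear_page_dirty_for_io(page);
      		/* move PG_dirty without set_page_dirty(), which would
      		 * wrongly dirty not-uptodate buffers for the whole page */
      		__set_page_dirty_nobuffers(newpage);
      	}
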
Signed-off-by: Nick Piggin <npiggin@suse.de>
Tested-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: remove remaining __FUNCTION__ occurrences · d40cee24
  Authored by Harvey Harrison
      __FUNCTION__ is gcc-specific, use __func__
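
      For example, the portable spelling (illustrative call site):

      	/* __func__ is standard C99; __FUNCTION__ is the gcc alias */
      	printk(KERN_DEBUG "%s: called\n", __func__);
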
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• infrastructure to debug (dynamic) objects · 3ac7fe5a
  Authored by Thomas Gleixner
We can see an ever-repeating problem pattern with objects of any kind in the
      kernel:
      
      1) freeing of active objects
      2) reinitialization of active objects
      
      Both problems can be hard to debug because the crash happens at a point where
we have no chance to decode the root cause anymore.  One problem spot is
      kernel timers, where the detection of the problem often happens in interrupt
      context and usually causes the machine to panic.
      
      While working on a timer related bug report I had to hack specialized code
      into the timer subsystem to get a reasonable hint for the root cause.  This
      debug hack was fine for temporary use, but far from a mergeable solution due
      to the intrusiveness into the timer code.
      
      The code further lacked the ability to detect and report the root cause
      instantly and keep the system operational.
      
      Keeping the system operational is important to get hold of the debug
      information without special debugging aids like serial consoles and special
      knowledge of the bug reporter.
      
      The problems described above are not restricted to timers, but timers tend to
      expose it usually in a full system crash.  Other objects are less explosive,
      but the symptoms caused by such mistakes can be even harder to debug.
      
Instead of creating specialized debugging code for the timer subsystem, a
generic infrastructure is created which allows developers to verify their code
and provides an easy-to-enable debug facility for users in case of trouble.
      
The debugobjects core code keeps track of operations on static and dynamic
objects by inserting them into a hashed list and sanity-checking them on
object operations, and it provides additional checks whenever kernel memory
is freed.
      
      The tracked object operations are:
      - initializing an object
      - adding an object to a subsystem list
      - deleting an object from a subsystem list
      
Each operation is sanity-checked before it is executed, and the
subsystem-specific code can provide a fixup function which makes it possible
to prevent damage from the operation.  When the sanity check triggers, a
warning message and a stack trace are printed.
      
      The list of operations can be extended if the need arises.  For now it's
      limited to the requirements of the first user (timers).
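
      A hypothetical usage sketch of the hooks covering the operations
      listed above (descriptor setup elided):

      	debug_object_init(obj, &descr);		/* object initialized */
      	debug_object_activate(obj, &descr);	/* added to a subsystem list */
      	debug_object_deactivate(obj, &descr);	/* removed from the list */
      	debug_object_free(obj, &descr);		/* memory about to be freed */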
      
      The core code enqueues the objects into hash buckets.  The hash index is
      generated from the address of the object to simplify the lookup for the check
on kfree/vfree.  Each bucket has its own spinlock to avoid contention on a
      global lock.
      
      The debug code can be compiled in without being active.  The runtime overhead
      is minimal and could be optimized by asm alternatives.  A kernel command line
      option enables the debugging code.
      
      Thanks to Ingo Molnar for review, suggestions and cleanup patches.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Greg KH <greg@kroah.com>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• mm: Add NR_WRITEBACK_TEMP counter · fc3ba692
  Authored by Miklos Szeredi
      Fuse will use temporary buffers to write back dirty data from memory mappings
      (normal writes are done synchronously).  This is needed, because there cannot
      be any guarantee about the time in which a write will complete.
      
By using temporary buffers, from the MM's point of view the page is written
      back immediately.  If the writeout was due to memory pressure, this
      effectively migrates data from a full zone to a less full zone.
      
      This patch adds a new counter (NR_WRITEBACK_TEMP) for the number of pages used
      as temporary buffers.
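
      A sketch of the accounting with the standard zone page-state helpers
      (the page variable is illustrative):

      	inc_zone_page_state(tmp_page, NR_WRITEBACK_TEMP);
      	/* ... temporary buffer under writeback ... */
      	dec_zone_page_state(tmp_page, NR_WRITEBACK_TEMP);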
      
      [Lee.Schermerhorn@hp.com: add vmstat_text for NR_WRITEBACK_TEMP]
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>