1. 21 Feb 2014, 1 commit
  2. 17 Feb 2014, 2 commits
  3. 11 Feb 2014, 3 commits
  4. 10 Feb 2014, 1 commit
    • fix O_SYNC|O_APPEND syncing the wrong range on write() · d311d79d
      Authored by Al Viro
      It actually goes back to 2004 ([PATCH] Concurrent O_SYNC write support)
      when sync_page_range() had been introduced; generic_file_write{,v}() correctly
      synced
      	pos_after_write - written .. pos_after_write - 1
      but generic_file_aio_write() synced
      	pos_before_write .. pos_before_write + written - 1
      instead, which is obviously not the same thing with O_APPEND.
      A couple of years later the correct variant was killed off when
      everything switched to using generic_file_aio_write().
      
      All users of generic_file_aio_write() are affected, and the same bug
      has been copied into other instances of ->aio_write().
      
      The fix is trivial; the only subtle point is that generic_write_sync()
      ought to be inlined to avoid calculations useless for the majority of
      calls.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      d311d79d
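      A user-space illustration of the range arithmetic above (a minimal sketch,
      not kernel code; the file name is made up): with O_APPEND the final offset
      is only known after the write, so the range to sync has to be derived from
      the position after the write.

      	#include <fcntl.h>
      	#include <stdio.h>
      	#include <unistd.h>

      	int main(void)
      	{
      		int fd = open("demo.log", O_WRONLY | O_CREAT | O_APPEND | O_SYNC, 0644);
      		const char msg[] = "appended record\n";

      		if (fd < 0)
      			return 1;
      		ssize_t written = write(fd, msg, sizeof(msg) - 1);
      		if (written > 0) {
      			off_t pos_after_write = lseek(fd, 0, SEEK_CUR);
      			/* the range that must be synced for this write: */
      			printf("sync %lld .. %lld\n",
      			       (long long)(pos_after_write - written),
      			       (long long)(pos_after_write - 1));
      		}
      		close(fd);
      		return 0;
      	}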
  5. 07 Feb 2014, 3 commits
    • mm: __set_page_dirty_nobuffers() uses spin_lock_irqsave() instead of spin_lock_irq() · a85d9df1
      Authored by KOSAKI Motohiro
      During an aio stress test, we observed the following lockdep warning.  This
      means AIO + numa_balancing can currently deadlock.
      
      The problem is that aio_migratepage() disables interrupts, but
      __set_page_dirty_nobuffers() unintentionally enables them again.
      
      In general, helper functions should use spin_lock_irqsave() instead of
      spin_lock_irq() because they know nothing about their callers.
      
         other info that might help us debug this:
          Possible unsafe locking scenario:
      
                CPU0
                ----
           lock(&(&ctx->completion_lock)->rlock);
           <Interrupt>
             lock(&(&ctx->completion_lock)->rlock);
      
          *** DEADLOCK ***
      
            dump_stack+0x19/0x1b
            print_usage_bug+0x1f7/0x208
            mark_lock+0x21d/0x2a0
            mark_held_locks+0xb9/0x140
            trace_hardirqs_on_caller+0x105/0x1d0
            trace_hardirqs_on+0xd/0x10
            _raw_spin_unlock_irq+0x2c/0x50
            __set_page_dirty_nobuffers+0x8c/0xf0
            migrate_page_copy+0x434/0x540
            aio_migratepage+0xb1/0x140
            move_to_new_page+0x7d/0x230
            migrate_pages+0x5e5/0x700
            migrate_misplaced_page+0xbc/0xf0
            do_numa_page+0x102/0x190
            handle_pte_fault+0x241/0x970
            handle_mm_fault+0x265/0x370
            __do_page_fault+0x172/0x5a0
            do_page_fault+0x1a/0x70
            page_fault+0x28/0x30
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a85d9df1
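      A kernel-style sketch of the locking pattern the changelog argues for
      (illustrative only; the helper below is not the actual diff):

      	#include <linux/pagemap.h>

      	/* A helper that may run with interrupts already disabled must save and
      	 * restore the caller's IRQ state rather than blindly re-enabling IRQs. */
      	static void helper_touch_mapping(struct address_space *mapping)
      	{
      		unsigned long flags;

      		spin_lock_irqsave(&mapping->tree_lock, flags);
      		/* ... radix-tree / dirty accounting updates ... */
      		spin_unlock_irqrestore(&mapping->tree_lock, flags);
      		/* spin_unlock_irq() here would unconditionally enable IRQs and
      		 * break callers like aio_migratepage(), which still holds the
      		 * irq-disabled ctx->completion_lock. */
      	}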
    • mm/swap: fix race on swap_info reuse between swapoff and swapon · f893ab41
      Authored by Weijie Yang
      swapoff clears the swap_info's SWP_USED flag prematurely and frees its
      resources after that.  A concurrent swapon can then reuse this swap_info
      while its previous resources have not yet been completely cleared.
      
      These late freed resources are:
       - p->percpu_cluster
       - swap_cgroup_ctrl[type]
       - block_device setting
       - inode->i_flags &= ~S_SWAPFILE
      
      This patch clears the SWP_USED flag after all its resources are freed,
      so that swapon can reuse this swap_info by alloc_swap_info() safely.
      
      [akpm@linux-foundation.org: tidy up code comment]
      Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f893ab41
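      The ordering the fix establishes, sketched in outline (names follow the
      swapfile.c of that era; this is not the actual hunk):

      	/* swapoff side: tear everything down first ... */
      	free_percpu(p->percpu_cluster);
      	p->percpu_cluster = NULL;
      	swap_cgroup_swapoff(p->type);
      	/* ... and only then, under swap_lock, mark the slot reusable so a
      	 * concurrent swapon cannot pick up a half-freed swap_info: */
      	spin_lock(&swap_lock);
      	p->flags = 0;	/* clears SWP_USED */
      	spin_unlock(&swap_lock);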
    • swap: add a simple detector for inappropriate swapin readahead · 579f8290
      Authored by Shaohua Li
      This is a patch to improve the swap readahead algorithm.  It's from Hugh,
      and I slightly changed it.
      
      Hugh's original changelog:
      
      swapin readahead does a blind readahead, whether or not the swapin is
      sequential.  This may be ok on a harddisk, because large reads have
      relatively small costs, and if the readahead pages are unneeded they can
      be reclaimed easily - though, what if their allocation forced reclaim of
      useful pages?  But on SSD devices large reads are more expensive than
      small ones: if the readahead pages are unneeded, reading them in causes
      significant overhead.
      
      This patch adds very simplistic random read detection.  Stealing the
      PageReadahead technique from Konstantin Khlebnikov's patch, avoiding the
      vma/anon_vma sophistications of Shaohua Li's patch, swapin_nr_pages()
      simply looks at readahead's current success rate, and narrows or widens
      its readahead window accordingly.  There is little science to its
      heuristic: it's about as stupid as can be whilst remaining effective.
      
      The table below shows elapsed times (in centiseconds) when running a
      single repetitive swapping load across a 1000MB mapping in 900MB ram
      with 1GB swap (the harddisk tests had taken painfully too long when I
      used mem=500M, but SSD shows similar results for that).
      
      Vanilla is the 3.6-rc7 kernel on which I started; Shaohua denotes his
      Sep 3 patch in mmotm and linux-next; HughOld denotes my Oct 1 patch
      which Shaohua showed to be defective; HughNew this Nov 14 patch, with
      page_cluster as usual at default of 3 (8-page reads); HughPC4 this same
      patch with page_cluster 4 (16-page reads); HughPC0 with page_cluster 0
      (1-page reads: no readahead).
      
      HDD for swapping to harddisk, SSD for swapping to VertexII SSD.  Seq for
      sequential access to the mapping, cycling five times around; Rand for
      the same number of random touches.  Anon for a MAP_PRIVATE anon mapping;
      Shmem for a MAP_SHARED anon mapping, equivalent to tmpfs.
      
      One weakness of Shaohua's vma/anon_vma approach was that it did not
      optimize Shmem: seen below.  Konstantin's approach was perhaps mistuned,
      50% slower on Seq: did not compete and is not shown below.
      
      HDD        Vanilla Shaohua HughOld HughNew HughPC4 HughPC0
      Seq Anon     73921   76210   75611   76904   78191  121542
      Seq Shmem    73601   73176   73855   72947   74543  118322
      Rand Anon   895392  831243  871569  845197  846496  841680
      Rand Shmem 1058375 1053486  827935  764955  764376  756489
      
      SSD        Vanilla Shaohua HughOld HughNew HughPC4 HughPC0
      Seq Anon     24634   24198   24673   25107   21614   70018
      Seq Shmem    24959   24932   25052   25703   22030   69678
      Rand Anon    43014   26146   28075   25989   26935   25901
      Rand Shmem   45349   45215   28249   24268   24138   24332
      
      These tests are, of course, two extremes of a very simple case: under
      heavier mixed loads I've not yet observed any consistent improvement or
      degradation, and wider testing would be welcome.
      
      Shaohua Li:
      
      Testing shows Vanilla is slightly better than Hugh's patch in the
      sequential workload.  I observed that with Hugh's patch the readahead size
      sometimes shrinks too fast (from 8 to 1 immediately) in the sequential
      workload if there is no hit.  In such a case, continuing to do readahead
      is actually beneficial.
      
      I didn't prepare a sophisticated algorithm for the sequential workload
      because so far we can't guarantee that sequentially accessed pages are
      swapped out sequentially.  So I slightly changed Hugh's heuristic: don't
      shrink the readahead size too fast.
      
      Here is my test result (unit second, 3 runs average):
      	Vanilla		Hugh		New
      Seq	356		370		360
      Random	4525		2447		2444
      
      The attached graph shows the swapin/swapout throughput I collected with
      'vmstat 2'.  The first part is a random workload (until around 1200 on
      the x-axis) and the second part is a sequential workload.  Swapin and
      swapout throughput are almost identical in steady state in both
      workloads, which is the expected behavior, while in Vanilla swapin is
      much higher than swapout, especially in the random workload (because of
      wrong readahead).
      
      Original patches by: Shaohua Li and Konstantin Khlebnikov.
      
      [fengguang.wu@intel.com: swapin_nr_pages() can be static]
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      579f8290
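      A toy version of the kind of heuristic described above: widen the window
      while recent readahead pages are being hit, and shrink it gradually (not
      straight back to 1) when they are not.  Names and thresholds are
      illustrative, not the merged swapin_nr_pages():

      	static unsigned int swapin_window = 8;	/* roughly 1 << page_cluster */

      	static unsigned int next_swapin_window(unsigned int hits, unsigned int max)
      	{
      		if (hits) {
      			if (swapin_window < max)
      				swapin_window <<= 1;	/* readahead is paying off */
      		} else if (swapin_window > 1) {
      			swapin_window >>= 1;		/* back off, but gently */
      		}
      		return swapin_window;
      	}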
  6. 31 Jan 2014, 8 commits
    • mm: Fix warning on make htmldocs caused by slab.c · cb8ee1a3
      Authored by Masanari Iida
      This patch fixes the following warnings from make htmldocs:
      Warning(/mm/slab.c:1956): No description found for parameter 'page'
      Warning(/mm/slab.c:1956): Excess function parameter 'slabp' description in 'slab_destroy'
      
      The incorrect parameter name "slabp" was documented instead of "page".
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Masanari Iida <standby24x7@gmail.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      cb8ee1a3
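      After the fix, the kernel-doc block presumably looks roughly like this
      (wording illustrative; the point is only the @page/@slabp parameter name):

      	/**
      	 * slab_destroy - destroy and release all objects in a slab
      	 * @cachep: cache pointer being destroyed
      	 * @page: page pointer being destroyed
      	 */
      	static void slab_destroy(struct kmem_cache *cachep, struct page *page)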
    • mm: slub: work around unneeded lockdep warning · 67b6c900
      Authored by Dave Hansen
      The slub code does some setup during early boot in
      early_kmem_cache_node_alloc() with some local data.  There is no
      possible way that another CPU can see this data, so the slub code
      doesn't unnecessarily lock it.  However, some new lockdep asserts
      check to make sure that add_partial() _always_ has the list_lock
      held.
      
      Just add the locking, even though it is technically unnecessary.
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Pekka Enberg <penberg@kernel.org>
      67b6c900
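      The shape of the workaround (illustrative snippet, not the exact hunk):
      take the node's list_lock around add_partial() purely to keep lockdep
      happy, even though no other CPU can see the node yet.

      	/* n is the freshly initialised kmem_cache_node; nothing else can
      	 * reach it during early boot, so the lock only satisfies lockdep. */
      	spin_lock(&n->list_lock);
      	add_partial(n, page, DEACTIVATE_TO_HEAD);
      	spin_unlock(&n->list_lock);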
    • memcg: fix mutex not unlocked on memcg_create_kmem_cache fail path · 7c094fd6
      Authored by Vladimir Davydov
      Commit 842e2873 ("memcg: get rid of kmem_cache_dup()") introduced a
      mutex for memcg_create_kmem_cache() to protect the tmp_name buffer that
      holds the memcg name.  It failed to unlock the mutex if this buffer
      could not be allocated.
      
      This patch fixes the issue by appropriately unlocking the mutex if the
      allocation fails.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Glauber Costa <glommer@parallels.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7c094fd6
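      The shape of the fix, sketched (mutex and function names here are
      illustrative rather than the exact ones in memcontrol.c):

      	static struct kmem_cache *create_cache_sketch(struct mem_cgroup *memcg,
      						      struct kmem_cache *root)
      	{
      		static char *tmp_name;
      		struct kmem_cache *s = NULL;

      		mutex_lock(&create_mutex_sketch);
      		if (!tmp_name) {
      			tmp_name = kmalloc(PATH_MAX, GFP_KERNEL);
      			if (!tmp_name)
      				goto out_unlock;  /* the fix: don't return with the mutex held */
      		}
      		/* ... build the memcg cache name and create the cache ... */
      	out_unlock:
      		mutex_unlock(&create_mutex_sketch);
      		return s;
      	}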
    • mm, oom: base root bonus on current usage · 778c14af
      Authored by David Rientjes
      A bonus of 3% of system memory is sometimes excessive in comparison to
      other processes.
      
      With commit a63d83f4 ("oom: badness heuristic rewrite"), the OOM
      killer tries to avoid killing privileged tasks by subtracting 3% of
      overall memory (system or cgroup) from their per-task consumption.  But
      as a result, all root tasks that consume less than 3% of overall memory
      are considered equal, and so it only takes 33+ privileged tasks pushing
      the system out of memory for the OOM killer to do something stupid and
      kill dhclient or other root-owned processes.  For example, on a 32G
      machine it can't tell the difference between the 1M agetty and the 10G
      fork bomb member.
      
      The changelog describes this 3% boost as the equivalent to the global
      overcommit limit being 3% higher for privileged tasks, but this is not
      the same as discounting 3% of overall memory from _every privileged task
      individually_ during OOM selection.
      
      Replace the 3% of system memory bonus with a 3% of current memory usage
      bonus.
      
      By giving root tasks a bonus that is proportional to their actual size,
      they remain comparable even when relatively small.  In the example
      above, the OOM killer will discount the 1M agetty's 256 badness points
      down to 179, and the 10G fork bomb's 262144 points down to 183500 points
      and make the right choice, instead of discounting both to 0 and killing
      agetty because it's first in the task list.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reported-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      778c14af
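      The heart of the change, as described above, reduces to a hedged sketch of
      the badness adjustment (surrounding oom_badness() code omitted):

      	if (has_capability_noaudit(p, CAP_SYS_ADMIN))
      		points -= (points * 3) / 100;	/* was: a discount of 3% of totalpages */

      With the discount proportional to the task's own points, a tiny agetty and
      a huge fork bomb remain distinguishable, as in the example above.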
    • mm/slub.c: fix page->_count corruption (again) · a0320865
      Authored by Dave Hansen
      Commit abca7c49 ("mm: fix slab->page _count corruption when using
      slub") notes that we cannot _set_ page->counters directly, except
      when using a real double-cmpxchg.  Doing so can lose updates to
      ->_count.
      
      That is an absolute rule:
      
              You may not *set* page->counters except via a cmpxchg.
      
      Commit abca7c49 fixed this for the folks who have the slub
      cmpxchg_double code turned off at compile time, but it left the bad case
      alone.  It can still be reached, and the same bug triggered in two
      cases:
      
      1. Turning on slub debugging at runtime, which is available on
         the distro kernels that I looked at.
      2. On 64-bit CPUs with no CMPXCHG16B (some early AMD x86-64
         cpus, evidently)
      
      There are at least 3 ways we could fix this:
      
      1. Take all of the existing calls to cmpxchg_double_slab() and
         __cmpxchg_double_slab() and convert them to take an old, new
         and target 'struct page'.
      2. Do (1), but with the newly-introduced 'slub_data'.
      3. Do some magic inside the two cmpxchg...slab() functions to
         pull the counters out of new_counters and only set those
         fields in page->{inuse,frozen,objects}.
      
      I've done (2) as well, but it's a bunch more code.  This patch is an
      attempt at (3).  This was the most straightforward and foolproof way
      that I could think to do this.
      
      This would also technically allow us to get rid of the ugly
      
      #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
             defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
      
      in 'struct page', but leaving it alone has the added benefit that
      'counters' stays 'unsigned' instead of 'unsigned long', so all the
      copies that the slub code does stay a bit smaller.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Pravin B Shelar <pshelar@nicira.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a0320865
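      Approach (3) boils down to a small helper of roughly this shape (a sketch;
      field names follow the slub union in struct page):

      	static inline void set_page_slub_counters(struct page *page,
      						  unsigned long counters_new)
      	{
      		struct page tmp;

      		tmp.counters = counters_new;
      		/* copy only the slub-owned fields; page->_count is never written */
      		page->inuse   = tmp.inuse;
      		page->objects = tmp.objects;
      		page->frozen  = tmp.frozen;
      	}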
    • mm/mempolicy.c: fix mempolicy printing in numa_maps · 8790c71a
      Authored by David Rientjes
      As a result of commit 5606e387 ("mm: numa: Migrate on reference
      policy"), /proc/<pid>/numa_maps prints the mempolicy for any <pid> as
      "prefer:N" for the local node, N, of the process reading the file.
      
      This should only be printed when the mempolicy of <pid> is
      MPOL_PREFERRED for node N.
      
      If the process is actually only using the default mempolicy for local
      node allocation, make sure "default" is printed as expected.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reported-by: Robert Lippert <rlippert@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: <stable@vger.kernel.org>	[3.7+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8790c71a
    • zsmalloc: add copyright · 31fc00bb
      Authored by Minchan Kim
      Add my copyright to the zsmalloc source code which I maintain.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      31fc00bb
    • zsmalloc: move it under mm · bcf1647d
      Authored by Minchan Kim
      This patch moves zsmalloc under the mm directory.
      
      Before that, the description below explains why we have needed a custom
      allocator.
      
      Zsmalloc is a new slab-based memory allocator for storing compressed
      pages.  It is designed for low fragmentation and a high allocation
      success rate on large, but <= PAGE_SIZE, allocations.
      
      zsmalloc differs from the kernel slab allocator in two primary ways to
      achieve these design goals.
      
      zsmalloc never requires high order page allocations to back slabs, or
      "size classes" in zsmalloc terms.  Instead it allows multiple
      single-order pages to be stitched together into a "zspage" which backs
      the slab.  This allows for higher allocation success rate under memory
      pressure.
      
      Also, zsmalloc allows objects to span page boundaries within the zspage.
      This allows for lower fragmentation than could be had with the kernel
      slab allocator for objects between PAGE_SIZE/2 and PAGE_SIZE.  With the
      kernel slab allocator, if a page compresses to 60% of its original size,
      the memory savings gained through compression are lost in fragmentation
      because another object of the same size can't be stored in the leftover
      space.
      
      This ability to span pages results in zsmalloc allocations not being
      directly addressable by the user.  The user is given a
      non-dereferenceable handle in response to an allocation request.  That
      handle must be mapped, using zs_map_object(), which returns a pointer to
      the mapped region that can be used.  The mapping is necessary since the
      object data may reside in two different noncontiguous pages.
      
      zsmalloc fulfills the allocation needs of zram perfectly.
      
      [sjenning@linux.vnet.ibm.com: borrow Seth's quote]
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Nitin Gupta <ngupta@vflare.org>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bcf1647d
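      A hedged usage sketch of the API described above (a zram-style caller with
      error handling trimmed; zs_malloc/zs_map_object/zs_unmap_object are the
      real entry points, the caller itself is made up):

      	#include <linux/string.h>
      	#include <linux/zsmalloc.h>

      	static int store_compressed(struct zs_pool *pool, const void *src, size_t len)
      	{
      		unsigned long handle = zs_malloc(pool, len);
      		void *dst;

      		if (!handle)
      			return -ENOMEM;
      		dst = zs_map_object(pool, handle, ZS_MM_WO);	/* object may span two pages */
      		memcpy(dst, src, len);
      		zs_unmap_object(pool, handle);
      		return 0;
      	}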
  7. 30 Jan 2014, 8 commits
  8. 28 Jan 2014, 4 commits
  9. 26 Jan 2014, 2 commits
    • Fix race when checking i_size on direct i/o read · 9fe55eea
      Authored by Steven Whitehouse
      So far I've had one ACK for this, and no other comments. So I think it
      is probably time to send this via some suitable tree. I'm guessing that
      the vfs tree would be the most appropriate route, but not sure that
      there is one at the moment (don't see anything recent at kernel.org)
      so in that case I think -mm is the "back up plan". Al, please let me
      know if you will take this?
      
      Steve.
      
      ---------------------
      
      Following on from the "Re: [PATCH v3] vfs: fix a bug when we do some dio
      reads with append dio writes" thread on linux-fsdevel, this patch is my
      current version of the fix proposed as option (b) in that thread.
      
      Removing the i_size test from the direct i/o read path at vfs level
      means that filesystems now have to deal with requests which are beyond
      i_size themselves. These I've divided into three sets:
      
       a) Those with "no op" ->direct_IO (9p, cifs, ceph)
      These are obviously not going to be an issue
      
       b) Those with "home brew" ->direct_IO (nfs, fuse)
      I've been told that NFS should not have any problem with the larger
      i_size, however I've added an extra test to FUSE to duplicate the
      original behaviour just to be on the safe side.
      
       c) Those using __blockdev_direct_IO()
      These call through to ->get_block() which should deal with the EOF
      condition correctly. I've verified that with GFS2 and I believe that
      Zheng has verified it for ext4. I've also run the test on XFS and it
      passes both before and after this change.
      
      The part of the patch in filemap.c looks a lot larger than it really is
      - there are only two lines of real change. The rest is just indentation
      of the contained code.
      
      There remains a test of i_size though, which was added for btrfs. It
      doesn't cause the other filesystems a problem as the test is performed
      after ->direct_IO has been called.  It is possible that there is a race
      that does matter to btrfs; however, this patch doesn't change that, so
      it's still an overall improvement.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      Reported-by: Zheng Liu <gnehzuil.liu@gmail.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Acked-by: Miklos Szeredi <miklos@szeredi.hu>
      Cc: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      9fe55eea
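      Conceptual shape of the vfs-level change (a sketch only, not the actual
      hunk in mm/filemap.c): stop gating the O_DIRECT read on i_size and let
      ->direct_IO()/->get_block() deal with requests beyond EOF.

      	if (count) {
      		retval = mapping->a_ops->direct_IO(READ, iocb, iov, pos, nr_segs);
      		if (retval > 0) {
      			*ppos = pos + retval;
      			count -= retval;
      		}
      	}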
    • fs: remove generic_acl · feda821e
      Authored by Christoph Hellwig
      And instead convert tmpfs to use the new generic ACL code, with two stub
      methods provided for in-memory filesystems.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      feda821e
  10. 25 Jan 2014, 1 commit
  11. 24 Jan 2014, 7 commits