1. 19 Oct 2016, 1 commit
  2. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Authored by Kirill A. Shutemov
      PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement the
      page cache with bigger chunks than PAGE_SIZE.
      
      That promise never materialized, and it is unlikely it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion about whether a
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Switching globally to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason, coccinelle doesn't patch header
      files, so I ran spatch on them manually.
      
      The only adjustment after coccinelle is reverting the change to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
  3. 16 Feb 2016, 1 commit
    • mm/gup: Switch all callers of get_user_pages() to not pass tsk/mm · d4edcf0d
      Authored by Dave Hansen
      We will soon modify the vanilla get_user_pages() so it can no
      longer be used on mm/tasks other than 'current/current->mm',
      which is by far the most common way it is called.  For now,
      we allow the old-style calls, but warn when they are used
      (implemented in the previous patch).
      
      This patch switches all callers of:
      
      	get_user_pages()
      	get_user_pages_unlocked()
      	get_user_pages_locked()
      
      to stop passing tsk/mm so they will no longer see the warnings.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: jack@suse.cz
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210156.113E9407@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d4edcf0d
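A hypothetical call site of that era illustrates the mechanical change (argument values are invented for illustration; the old signature took tsk and mm first):

```diff
- get_user_pages(current, current->mm, addr, 1, 1, 0, &page, NULL);
+ get_user_pages(addr, 1, 1, 0, &page, NULL);
```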
  4. 27 Jan 2015, 1 commit
  5. 22 Aug 2014, 1 commit
  6. 26 Jul 2014, 1 commit
  7. 04 Feb 2014, 1 commit
    • [media] Revert "[media] videobuf_vm_{open,close} race fixes" · cca36e2e
      Authored by Hans Verkuil
      This reverts commit a242f426.
      
      That commit actually caused deadlocks rather than fixing them.
      
      If ext_lock is set to NULL (otherwise videobuf_queue_lock doesn't do
      anything), then you get this deadlock:
      
      The driver's mmap function calls videobuf_mmap_mapper, which calls
      videobuf_queue_lock on q.  videobuf_mmap_mapper calls
      __videobuf_mmap_mapper, __videobuf_mmap_mapper calls videobuf_vm_open,
      and videobuf_vm_open calls videobuf_queue_lock on q again (introduced
      by the above patch): deadlocked.
      
      This affects drivers using dma-contig and dma-vmalloc. Only dma-sg is
      not affected since it doesn't call videobuf_vm_open from __videobuf_mmap_mapper.
      
      Most drivers these days have a non-NULL ext_lock. Those that still use
      NULL there are all fairly obscure drivers, which is why this hasn't been
      seen earlier.
      
      Since everything worked perfectly fine for many years, I prefer to just
      revert this patch rather than trying to fix it.  videobuf is quite
      fragile and I would rather not touch it too much.  Work is (slowly)
      progressing to move everything over to vb2, or at the very least to use
      a non-NULL ext_lock in videobuf.
      Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
      Cc: <stable@vger.kernel.org>      # for v3.11 and up
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Reported-by: Pete Eberlein <pete@sensoray.com>
      Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
      cca36e2e
  8. 21 May 2013, 1 commit
  9. 09 Oct 2012, 1 commit
    • mm: kill vma flag VM_RESERVED and mm->reserved_vm counter · 314e51b9
      Authored by Konstantin Khlebnikov
      A long time ago, in v2.4, VM_RESERVED kept the swapout process off a
      VMA.  It has since lost its original meaning but still has some effects:
      
       | effect                 | alternative flags
      -+------------------------+---------------------------------------------
      1| account as reserved_vm | VM_IO
      2| skip in core dump      | VM_IO, VM_DONTDUMP
      3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
      4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
      
      This patch removes the reserved_vm counter from mm_struct.  Nobody
      seems to care about it: it is not exported to userspace directly, and
      it only reduces the total_vm shown in proc.
      
      Thus VM_RESERVED can be replaced with VM_IO or with the pair VM_DONTEXPAND | VM_DONTDUMP.
      
      remap_pfn_range() and io_remap_pfn_range() set VM_IO | VM_DONTEXPAND | VM_DONTDUMP.
      remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP.
      
      [akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Carsten Otte <cotte@de.ibm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Morris <james.l.morris@oracle.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
      Cc: Matt Helsley <matthltc@us.ibm.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      314e51b9
  10. 14 Aug 2012, 1 commit
  11. 28 Jul 2011, 1 commit
  12. 29 Dec 2010, 3 commits
  13. 21 Oct 2010, 4 commits
  14. 28 Sep 2010, 1 commit
    • V4L/DVB: videobuf-dma-sg: set correct size in last sg element · 2fc11536
      Authored by Hans Verkuil
      This fixes a nasty memory corruption bug when using userptr I/O.
      The function videobuf_pages_to_sg() sets up the scatter-gather list for
      the DMA transfer to the userspace pages.  The first transfer is set up
      correctly (the size is set to PAGE_SIZE - offset), but all other
      transfers have size PAGE_SIZE.  This is wrong for the last transfer,
      which may be less than PAGE_SIZE.
      
      Most, if not all, drivers will program the board's DMA engine correctly:
      even though the size in the last sg element is wrong, they do their own
      size calculations and make sure the right amount is DMA-ed, which
      seemingly prevents memory corruption.
      
      However, behind the scenes the dynamic DMA mapping support (in
      lib/swiotlb.c) may create bounce buffers if the memory pages are not in
      DMA-able memory.  This happens, for example, on 64-bit Linux with a
      board that only supports 32-bit DMA.
      
      These bounce buffers DO use the information in the sg list to determine
      the size.  So while the DMA engine transfers the correct amount of data,
      too much is copied when the data is 'bounced' back, causing buffer
      overwrites.
      
      The fix is simple: calculate and set the correct size for the last sg list
      element.
      Signed-off-by: Hans Verkuil <hans.verkuil@tandberg.com>
      Cc: stable@kernel.org
      Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
      2fc11536
  15. 03 Aug 2010, 5 commits
  16. 19 May 2010, 8 commits
  17. 18 May 2010, 1 commit
  18. 27 Feb 2010, 1 commit
  19. 06 Dec 2009, 2 commits
  20. 12 Oct 2009, 1 commit
  21. 28 Sep 2009, 1 commit
  22. 17 Jun 2009, 2 commits