1. 18 December 2012, 1 commit
  2. 12 December 2012, 1 commit
  3. 09 December 2012, 1 commit
    • cifs: simplify id_to_sid and sid_to_id mapping code · faa65f07
      Authored by Jeff Layton
      The cifs.idmap handling code currently causes the kernel to cache the
      data from userspace twice. It first looks in a rbtree to see if there is
      a matching entry for the given id. If there isn't then it calls
      request_key which then checks its cache and then calls out to userland
      if it doesn't have one. If the userland program establishes a mapping
      and downcalls with that info, it then gets cached in the keyring and in
      this rbtree.
      
      Aside from the double memory usage and the performance penalty in doing
      all of these extra copies, there are some nasty bugs in here too. The
      code declares four rbtrees and spinlocks to protect them, but only seems
      to use two of them. The upshot is that the same tree is used to hold
      (e.g.) uid:sid and sid:uid mappings. The comparators aren't equipped to
      deal with that.
      
      I think we'd be best off to remove a layer of caching in this code. If
      this was originally done for performance reasons, then that really seems
      like a premature optimization.
      
      This patch does that -- it removes the rbtrees and the locks that
      protect them and simply has the code do a request_key call on each call
      into sid_to_id and id_to_sid. This greatly simplifies this code and
      should roughly halve the memory utilization from using the idmapping
      code.
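      
      A minimal sketch of what the lookup path looks like with the rbtree
      layer gone: each call simply asks the keyring via request_key(), which
      caches the result and upcalls to the cifs.idmap helper on a miss. The
      key description format ("oi:%u") and the payload handling below are
      illustrative rather than the exact cifsacl.c code, and cifs_idmap_key_type
      and struct cifs_sid are assumed to come from the cifs-internal headers.
      
          #include <linux/kernel.h>
          #include <linux/errno.h>
          #include <linux/key.h>
          #include <linux/err.h>
          #include <linux/string.h>
          #include "cifsacl.h"   /* struct cifs_sid; assumed to also declare
                                    cifs_idmap_key_type for this sketch */
      
          /* Map a uid straight to a SID with one request_key() call; no
           * private rbtree cache is kept, the keyring layer does the
           * caching and the upcall to the cifs.idmap helper on a miss. */
          static int idmap_sketch_id_to_sid(unsigned int cid, struct cifs_sid *ssid)
          {
                  char desc[32];
                  struct key *sidkey;
                  int rc = 0;
      
                  snprintf(desc, sizeof(desc), "oi:%u", cid);  /* illustrative format */
      
                  sidkey = request_key(&cifs_idmap_key_type, desc, "");
                  if (IS_ERR(sidkey))
                          return PTR_ERR(sidkey);
      
                  /* The downcalled payload holds the mapped SID; copy it out. */
                  if (sidkey->datalen < sizeof(struct cifs_sid))
                          rc = -EIO;
                  else
                          memcpy(ssid, sidkey->payload.data, sizeof(struct cifs_sid));
      
                  key_put(sidkey);  /* drop our reference; the keyring keeps the cache */
                  return rc;
          }
      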
      Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <smfrench@gmail.com>
  4. 06 December 2012, 2 commits
  5. 03 October 2012, 1 commit
  6. 25 September 2012, 7 commits
  7. 25 July 2012, 2 commits
  8. 24 July 2012, 1 commit
  9. 14 July 2012, 3 commits
  10. 23 May 2012, 1 commit
  11. 17 May 2012, 4 commits
  12. 10 May 2012, 1 commit
  13. 06 May 2012, 1 commit
  14. 02 May 2012, 1 commit
  15. 25 April 2012, 2 commits
  16. 24 March 2012, 1 commit
  17. 22 March 2012, 3 commits
  18. 21 March 2012, 2 commits
  19. 20 March 2012, 1 commit
  20. 07 January 2012, 2 commits
  21. 04 January 2012, 1 commit
  22. 28 October 2011, 1 commit
    • vfs: do (nearly) lockless generic_file_llseek · ef3d0fd2
      Authored by Andi Kleen
      The i_mutex lock use of generic_file_llseek hurts.  Independent processes
      accessing the same file synchronize over a single lock, even though
      they have no need for synchronization at all.
      
      Under high utilization this can cause llseek to scale very poorly on larger
      systems.
      
      This patch does some rethinking of the llseek locking model:
      
      First the 64bit f_pos is not necessarily atomic without locks
      on 32bit systems. This can already cause races with read() today.
      This was discussed on linux-kernel in the past and deemed acceptable.
      The patch does not change that.
      
      Let's look at the different seek variants:
      
      SEEK_SET: Doesn't really need any locking.
      If there's a race one writer wins, the other loses.
      
      For 32bit, the non-atomic update races against read()
      stay the same. Without a lock they can also happen
      against write() now.  The read() race was deemed
      acceptable in past discussions, and I think if it's
      ok for read it's ok for write too.
      
      => Don't need a lock.
      
      SEEK_END: This behaves like SEEK_SET plus it reads
      the maximum size too. Reading the maximum size would have the
      32bit atomic problem. But luckily we already have a way to read
      the maximum size without locking (i_size_read), so we
      can just use that instead.
      
      Without i_mutex there is no synchronization with write() anymore,
      however since the write() update is atomic on 64bit it just behaves
      like another racy SEEK_SET.  On non-atomic 32bit it's the same
      as SEEK_SET.
      
      => Don't need a lock, but need to use i_size_read()
      
      SEEK_CUR: This has a read-modify-write race window
      on the same file. One could argue that any application
      doing unsynchronized seeks on the same file is already broken.
      But for the sake of not adding a regression here I'm
      using the file->f_lock to synchronize this. Using this
      lock is much better than the inode mutex because it doesn't
      synchronize between processes.
      
      => So still need a lock, but can use the f_lock.
      
      This patch implements this new scheme in generic_file_llseek.
      I dropped generic_file_llseek_unlocked and changed all callers.
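      
      A condensed sketch of the scheme above, assuming the standard VFS
      helpers (i_size_read(), the per-file f_lock spinlock, s_maxbytes);
      the function names are illustrative and error handling is reduced
      to a simple bounds check, so this is not the exact fs/read_write.c code:
      
          #include <linux/fs.h>
          #include <linux/errno.h>
          #include <linux/spinlock.h>
      
          /* Validate and store the new position. The store itself is
           * deliberately unlocked: with racing seekers the last writer wins. */
          static loff_t seek_store_sketch(struct file *file, loff_t offset,
                                          loff_t maxsize)
          {
                  if (offset < 0 || offset > maxsize)
                          return -EINVAL;
                  file->f_pos = offset;
                  return offset;
          }
      
          /* Per-whence locking rules from the description above. */
          static loff_t llseek_sketch(struct file *file, loff_t offset, int whence)
          {
                  struct inode *inode = file->f_mapping->host;
                  loff_t maxsize = inode->i_sb->s_maxbytes;
      
                  switch (whence) {
                  case SEEK_END:
                          /* i_size_read() reads i_size tear-free even on 32bit,
                           * so no i_mutex is needed. */
                          offset += i_size_read(inode);
                          break;
                  case SEEK_CUR:
                          /* f_pos read-modify-write: serialize with the
                           * per-open-file f_lock rather than the per-inode
                           * i_mutex, so other openers are unaffected. */
                          spin_lock(&file->f_lock);
                          offset = seek_store_sketch(file, file->f_pos + offset,
                                                     maxsize);
                          spin_unlock(&file->f_lock);
                          return offset;
                  }
      
                  /* SEEK_SET, and the SEEK_END target computed above,
                   * need no lock at all. */
                  return seek_store_sketch(file, offset, maxsize);
          }
      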
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>