1. 27 Jul 2008, 1 commit
  2. 26 Jul 2008, 3 commits
    • locks: allow ->lock() to return FILE_LOCK_DEFERRED · 764c76b3
      Committed by Miklos Szeredi
      Allow a filesystem's ->lock() method to call posix_lock_file() instead of
      posix_lock_file_wait(), and return FILE_LOCK_DEFERRED.  This makes it
      possible to implement such a ->lock() function that works with the lock
      manager, which needs the call to be asynchronous (a hedged sketch follows
      this entry).
      
      Now the vfs_lock_file() helper can be used, so this is a cleanup as well.
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: Matthew Wilcox <matthew@wil.cx>
      Cc: David Teigland <teigland@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      764c76b3
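      A minimal sketch of the kind of ->lock() method this enables, for a
      hypothetical filesystem "examplefs".  The posix_lock_file() signature
      (including the conflock argument) is assumed from the fs.h of this era,
      and a real implementation would also hand the request to its lock
      manager before returning; this is illustrative, not the patch itself.

      static int examplefs_lock(struct file *filp, int cmd, struct file_lock *fl)
      {
              /*
               * Queue the request with the filesystem's lock manager here
               * (omitted); the final result is reported later through
               * fl->fl_lmops->fl_grant().
               */

              /*
               * Record the request locally without sleeping.  For a sleeping
               * lock that hits a conflict this now returns FILE_LOCK_DEFERRED
               * rather than blocking, and we can pass that straight up; the
               * caller retries when fl_lmops->fl_notify() fires or fl_wait is
               * woken.
               */
              return posix_lock_file(filp, fl, NULL);
      }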
    • locks: cleanup code duplication · b648a6de
      Committed by Miklos Szeredi
      Extract common code into a function.
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: Matthew Wilcox <matthew@wil.cx>
      Cc: David Teigland <teigland@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b648a6de
    • locks: add special return value for asynchronous locks · bde74e4b
      Committed by Miklos Szeredi
      Use a special error value FILE_LOCK_DEFERRED to mean that a locking
      operation returned asynchronously.  This is returned by
      
        posix_lock_file() for sleeping locks to mean that the lock has been
        queued on the block list, and will be woken up when it might become
        available and needs to be retried (either fl_lmops->fl_notify() is
        called or fl_wait is woken up).
      
        f_op->lock() to mean either the above, or that the filesystem will
        call back with fl_lmops->fl_grant() when the result of the locking
        operation is known.  The filesystem can do this for sleeping as well
        as non-sleeping locks.
      
      This is to make sure that -EAGAIN and -EINPROGRESS returned by filesystems
      are not mistaken for asynchronous locking (see the caller-side sketch
      after this entry).
      
      This also makes error handling in fs/locks.c and lockd/svclock.c slightly
      cleaner.
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Matthew Wilcox <matthew@wil.cx>
      Cc: David Teigland <teigland@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bde74e4b
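      The caller-side retry pattern inside fs/locks.c then looks roughly like
      the sketch below.  The field and helper names (fl_wait, fl_next,
      locks_delete_block()) are what I would expect in this era's fs/locks.c;
      treat the details as assumptions rather than the literal patch.

      static int example_do_setlkw(struct file *filp, struct file_lock *fl)
      {
              int error;

              for (;;) {
                      error = vfs_lock_file(filp, F_SETLKW, fl, NULL);
                      if (error != FILE_LOCK_DEFERRED)
                              return error;   /* granted, or a real error such as -EAGAIN */

                      /* Deferred: sleep until fl_notify()/fl_grant() wakes fl_wait. */
                      error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
                      if (error) {
                              /* Interrupted: take ourselves off the block list. */
                              locks_delete_block(fl);
                              return error;
                      }
              }
      }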
  3. 23 Jun 2008, 1 commit
  4. 12 May 2008, 1 commit
    • Add new 'cond_resched_bkl()' helper function · c3921ab7
      Committed by Linus Torvalds
      It acts exactly like a regular 'cond_resched()', but will not get
      optimized away when CONFIG_PREEMPT is set.
      
      Normal kernel code is already preemptible in the presence of
      CONFIG_PREEMPT, so cond_resched() is optimized away (see commit
      02b67cc3 "sched: do not do
      cond_resched() when CONFIG_PREEMPT").
      
      But when you want to conditionally reschedule while holding a lock, you
      need to use "cond_resched_lock(lock)", and the new function is the BKL
      equivalent of that (a usage sketch follows this entry).
      
      Also make fs/locks.c use it.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c3921ab7
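      A usage sketch, with hypothetical example_more_work()/example_do_chunk()
      helpers standing in for whatever loop actually runs under the BKL:

      static void example_scan_under_bkl(void)
      {
              lock_kernel();
              while (example_more_work()) {
                      example_do_chunk();
                      /*
                       * A plain cond_resched() compiles away when CONFIG_PREEMPT
                       * is set; this variant always offers to reschedule, and the
                       * scheduler drops and retakes the BKL across the switch.
                       */
                      cond_resched_bkl();
              }
              unlock_kernel();
      }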
  5. 07 May 2008, 1 commit
    • [PATCH] fix SMP ordering hole in fcntl_setlk() · 0b2bac2f
      Committed by Al Viro
      fcntl_setlk()/close() race prevention has a subtle hole - we need to
      make sure that if we *do* have an fcntl/close race on SMP box, the
      access to descriptor table and inode->i_flock won't get reordered.
      
      As it is, we get STORE inode->i_flock, LOAD descriptor table entry vs.
      STORE descriptor table entry, LOAD inode->i_flock with not a single
      lock in common on both sides.  We do have the BKL around the first STORE,
      but the check in locks_remove_posix() is outside the BKL, and for a good
      reason: we don't want the BKL on the common path of close(2).
      
      The solution is to hold ->file_lock around fcheck() there; that orders us
      with respect to the removal from the descriptor table that precedes
      locks_remove_posix() on the close path, so we either come first (in which
      case eviction is handled by the close side) or we see the effect of close
      and do the eviction ourselves.  Note that even though it is a read-only
      access, we do need ->file_lock here - rcu_read_lock() is not enough to
      order things (a minimal sketch follows this entry).
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      0b2bac2f
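      A minimal sketch of the ordering pattern described above (fcheck() and
      ->file_lock as in this era's kernel; the surrounding fcntl_setlk()
      context and the unlock-on-race path are omitted, so this is an
      assumption-laden illustration, not the patch):

      static int example_fd_still_open(unsigned int fd, struct file *filp)
      {
              struct file *f;

              /*
               * Taking ->file_lock here is what orders our earlier store to
               * inode->i_flock against close()'s removal of the descriptor
               * table entry; rcu_read_lock() alone would not.
               */
              spin_lock(&current->files->file_lock);
              f = fcheck(fd);
              spin_unlock(&current->files->file_lock);

              /* false => close() won the race and the caller must undo its lock */
              return f == filp;
      }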
  6. 02 May 2008, 1 commit
  7. 26 Apr 2008, 6 commits
  8. 19 Apr 2008, 1 commit
  9. 15 Apr 2008, 1 commit
    • locks: fix possible infinite loop in fcntl(F_SETLKW) over nfs · 19e729a9
      Committed by J. Bruce Fields
      Miklos Szeredi found the bug:
      
      	"Basically what happens is that on the server nlm_fopen() calls
      	nfsd_open() which returns -EACCES, to which nlm_fopen() returns
      	NLM_LCK_DENIED.

      	"On the client this will turn into a -EAGAIN (nlm_stat_to_errno()),
      	which in turn will cause fcntl_setlk() to retry forever."
      
      So, for example, opening a file on an nfs filesystem, changing
      permissions to forbid further access, then trying to lock the file,
      could result in an infinite loop.
      
      And Trond Myklebust identified the culprit, from Marc Eshel and me:
      
      	7723ec97 "locks: factor out
      	generic/filesystem switch from setlock code"
      
      That commit claimed to just be reshuffling code, but actually introduced
      a behavioral change by calling the lock method repeatedly as long as it
      returned -EAGAIN.
      
      We assumed this would be safe, since we assumed a lock of type SETLKW
      would only return with either success or an error other than -EAGAIN.
      However, nfs can in fact return -EAGAIN in this situation, and
      independently of whether that behavior is correct or not, we don't
      actually need this change, and it seems far safer not to depend on such
      assumptions about the filesystem's ->lock method.
      
      Therefore, revert the problematic part of the original commit.  This
      leaves vfs_lock_file() and its other callers unchanged, while returning
      fcntl_setlk and fcntl_setlk64 to their former behavior.
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      Tested-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: Marc Eshel <eshel@almaden.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      19e729a9
  10. 20 Mar 2008, 1 commit
    • fs: fix kernel-doc notation warnings · a6b91919
      Committed by Randy Dunlap
      Fix kernel-doc notation warnings in fs/ (an illustrative example of the
      expected comment shape follows this entry).
      
      Warning(mmotm-2008-0314-1449//fs/super.c:560): missing initial short description on line:
       *	mark_files_ro
      Warning(mmotm-2008-0314-1449//fs/locks.c:1277): missing initial short description on line:
       *	lease_get_mtime
      Warning(mmotm-2008-0314-1449//fs/locks.c:1277): missing initial short description on line:
       *	lease_get_mtime
      Warning(mmotm-2008-0314-1449//fs/namei.c:1368): missing initial short description on line:
       * lookup_one_len:  filesystem helper to lookup single pathname component
      Warning(mmotm-2008-0314-1449//fs/buffer.c:3221): missing initial short description on line:
       * bh_uptodate_or_lock: Test whether the buffer is uptodate
      Warning(mmotm-2008-0314-1449//fs/buffer.c:3240): missing initial short description on line:
       * bh_submit_read: Submit a locked buffer for reading
      Warning(mmotm-2008-0314-1449//fs/fs-writeback.c:30): missing initial short description on line:
       * writeback_acquire: attempt to get exclusive writeback access to a device
      Warning(mmotm-2008-0314-1449//fs/fs-writeback.c:47): missing initial short description on line:
       * writeback_in_progress: determine whether there is writeback in progress
      Warning(mmotm-2008-0314-1449//fs/fs-writeback.c:58): missing initial short description on line:
       * writeback_release: relinquish exclusive writeback access against a device.
      Warning(mmotm-2008-0314-1449//include/linux/jbd.h:351): contents before sections
      Warning(mmotm-2008-0314-1449//include/linux/jbd.h:561): contents before sections
      Warning(mmotm-2008-0314-1449//fs/jbd/transaction.c:1935): missing initial short description on line:
       * void journal_invalidatepage()
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a6b91919
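      For reference, a hypothetical comment showing the shape kernel-doc
      expects, with the initial short description the warnings above complain
      about (the function and its documentation are illustrative only):

      /**
       * example_helper - short one-line description that kernel-doc wants first
       * @arg: meaning of the argument
       *
       * The longer description goes here, after the parameter list.  The
       * warnings above come from comments whose first line jumps straight
       * into this longer text and omits the "name - short description" line.
       */
      static int example_helper(int arg)
      {
              return arg;
      }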
  11. 09 Feb 2008, 1 commit
  12. 04 Feb 2008, 3 commits
  13. 31 Oct 2007, 1 commit
  14. 17 Oct 2007, 1 commit
  15. 10 Oct 2007, 9 commits
    • Rework /proc/locks via seq_files and seq_list helpers · 7f8ada98
      Committed by Pavel Emelyanov
      Currently /proc/locks is shown with a proc_read function, but its behavior
      is rather complex as it has to manually handle current offset and buffer
      length.  On the other hand, files that show objects from lists can be
      easily reimplemented using the sequential files and the seq_list_XXX()
      helpers.
      
      This saves (as usual) 16 lines of code and more than 200 from
      the .text section (a sketch of the resulting seq_file pattern follows
      this entry).
      
      [akpm@linux-foundation.org: no externs in C]
      [akpm@linux-foundation.org: warning fixes]
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      7f8ada98
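      A sketch of the seq_file pattern being switched to.  The seq_list_*()
      calls are the real helpers; the file_lock_list/fl_link names, the
      lock_kernel() bracketing and the output format are assumptions about
      what fs/locks.c would use here, not the actual patch.

      static void *example_locks_start(struct seq_file *m, loff_t *pos)
      {
              lock_kernel();
              return seq_list_start(&file_lock_list, *pos);
      }

      static void *example_locks_next(struct seq_file *m, void *v, loff_t *pos)
      {
              return seq_list_next(v, &file_lock_list, pos);
      }

      static void example_locks_stop(struct seq_file *m, void *v)
      {
              unlock_kernel();
      }

      static int example_locks_show(struct seq_file *m, void *v)
      {
              struct file_lock *fl = list_entry(v, struct file_lock, fl_link);

              seq_printf(m, "pid=%u type=%d\n", fl->fl_pid, fl->fl_type);
              return 0;
      }

      static const struct seq_operations example_locks_ops = {
              .start  = example_locks_start,
              .next   = example_locks_next,
              .stop   = example_locks_stop,
              .show   = example_locks_show,
      };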
    • fs/locks.c: use list_for_each_entry() instead of list_for_each() · 094f2825
      Committed by Matthias Kaehlcke
      fs/locks.c: use list_for_each_entry() instead of list_for_each() in
      posix_locks_deadlock() and get_locks_status() (a schematic before/after
      follows this entry).
      Signed-off-by: Matthias Kaehlcke <matthias.kaehlcke@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      094f2825
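      A schematic before/after of the conversion.  The fl_link member is the
      one fs/locks.c uses for its global lists; example_handle() is a
      hypothetical stand-in for the real loop body.

      /* before: manual list_entry() in every iteration */
      static void example_walk_before(struct list_head *head)
      {
              struct list_head *tmp;
              struct file_lock *fl;

              list_for_each(tmp, head) {
                      fl = list_entry(tmp, struct file_lock, fl_link);
                      example_handle(fl);
              }
      }

      /* after: the iterator does the container_of() for us */
      static void example_walk_after(struct list_head *head)
      {
              struct file_lock *fl;

              list_for_each_entry(fl, head, fl_link)
                      example_handle(fl);
      }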
    • Cleanup macros for distinguishing mandatory locks · a16877ca
      Committed by Pavel Emelyanov
      The combination of S_ISGID bit set and S_IXGRP bit unset is used to mark the
      inode as "mandatory lockable" and there's a macro for this check called
      MANDATORY_LOCK(inode).  However, fs/locks.c and some filesystems still perform
      the explicit i_mode checking.  Besides, Andrew pointed out that this macro
      is itself buggy, as it dereferences the inode argument twice.
      
      Convert this macro into a static inline function and switch its users to
      it, making the code shorter and more readable (a sketch of the resulting
      helpers follows this entry).
      
      The __mandatory_lock() helper is to be used in places where IS_MANDLOCK()
      for the superblock is already known to be true.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Eric Van Hensbergen <ericvh@gmail.com>
      Cc: Ron Minnich <rminnich@sandia.gov>
      Cc: Latchesar Ionkov <lucho@ionkov.net>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      a16877ca
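      The resulting helpers, roughly as I would expect them to land in
      include/linux/fs.h.  The mode test is the long-standing
      setgid-without-group-execute convention; the exact bodies here are an
      assumption, not a quote of the patch.

      static inline int __mandatory_lock(struct inode *ino)
      {
              /* setgid bit set, group-execute bit clear */
              return (ino->i_mode & (S_ISGID | S_IXGRP)) == S_ISGID;
      }

      static inline int mandatory_lock(struct inode *ino)
      {
              /* only meaningful when the superblock allows mandatory locking */
              return IS_MANDLOCK(ino) && __mandatory_lock(ino);
      }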
    • locks: Fix potential OOPS in generic_setlease() · 85c59580
      Committed by Pavel Emelyanov
      This code is run under lock_kernel(), which is dropped during
      sleeping operations, so the following race is possible:
      
      CPU1:                                CPU2:
        vfs_setlease();                    vfs_setlease();
        lock_kernel();
                                           lock_kernel(); /* spin */
        generic_setlease():
          ...
          for (before = ...)
          /* here we found some lease after
           * which we will insert the new one
           */
          fl = locks_alloc_lock();
          /* go to sleep in this allocation and
           * drop the BKL
           */
                                           generic_setlease():
                                             ...
                                             for (before = ...)
                                             /* here we find the "before" pointing
                                              * at the one we found on CPU1
                                              */
                                            ->fl_change(my_before, arg);
                                                    lease_modify();
                                                           locks_free_lock();
                                                           /* and we freed it */
                                           ...
                                           unlock_kernel();
         locks_insert_lock(before, fl);
         /* OOPS! We have just tried to add the lease
          * at the tail of already removed one
          */
      
      Similar races are already handled in other code: all the allocations are
      performed before any checks or updates (a sketch of the pattern follows
      this entry).
      
      Thanks to Kamalesh Babulal for testing and for a bug report on an
      earlier version.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      Cc: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      85c59580
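      A sketch of the allocate-before-scan shape: every allocation that can
      sleep happens before the i_flock walk, so nothing can drop the BKL
      between finding the insertion point and using it.  locks_alloc_lock()
      and locks_copy_lock() are fs/locks.c helpers; example_scan_and_insert()
      is a hypothetical stand-in for the list walk, and the whole function is
      an illustration of the principle rather than the actual fix.

      static int example_setlease(struct inode *inode, struct file_lock *request)
      {
              struct file_lock *new_fl;

              /* May sleep, and sleeping drops the BKL - so do it up front. */
              new_fl = locks_alloc_lock();
              if (new_fl == NULL)
                      return -ENOMEM;

              lock_kernel();
              locks_copy_lock(new_fl, request);
              /* From here on nothing blocks, so the insertion point stays valid. */
              example_scan_and_insert(inode, new_fl);
              unlock_kernel();
              return 0;
      }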
    • Use list_first_entry in locks_wake_up_blocks · f0c1cd0e
      Committed by Pavel Emelyanov
      This routine deletes all the elements from the list with a
      "while (!list_empty())" loop, and we already have the list_first_entry()
      macro to make it look nicer :) (a schematic version follows this entry).
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      f0c1cd0e
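      The resulting loop shape, schematically.  fl_block and fl_wait are the
      real file_lock members, but the real function does more per-waiter
      bookkeeping than this sketch shows.

      static void example_wake_up_blocks(struct file_lock *blocker)
      {
              while (!list_empty(&blocker->fl_block)) {
                      struct file_lock *waiter;

                      waiter = list_first_entry(&blocker->fl_block,
                                                struct file_lock, fl_block);
                      list_del_init(&waiter->fl_block);
                      wake_up(&waiter->fl_wait);
              }
      }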
    • locks: fix flock_lock_file() comment · 02888f41
      Committed by J. Bruce Fields
      This comment wasn't updated when lease support was added, and it makes
      essentially the same mistake that the code made before a recent bugfix.
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      02888f41
    • Memory shortage can result in inconsistent flocks state · 84d535ad
      Committed by Pavel Emelyanov
      When flock_lock_file() is called to change a flock from F_RDLCK to
      F_WRLCK or vice versa, the existing flock can be removed without
      appropriate warning.
      
      Look:
              for_each_lock(inode, before) {
                      struct file_lock *fl = *before;
                      if (IS_POSIX(fl))
                              break;
                      if (IS_LEASE(fl))
                              continue;
                      if (filp != fl->fl_file)
                              continue;
                      if (request->fl_type == fl->fl_type)
                              goto out;
                      found = 1;
                      locks_delete_lock(before); <<<<<< !
                      break;
              }
      
      If the subsequent locks_alloc_lock() fails after this point, the return
      code will be -ENOMEM, but the existing lock has already been removed.

      It is a known property that such "re-locking" is not atomic, and in the
      racy case the file should stay locked (although by some other process);
      in this case, however, the file ends up unlocked.

      The proposal is to prepare the new lock in advance, so that the code
      that follows has no chance to fail (a sketch of the reordering follows
      this entry).
      
      Found during making the flocks pid-namespaces aware.
      
      (Note: Thanks to Reuben Farrelly for finding a bug in an earlier version
      of this patch.)
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      Cc: Reuben Farrelly <reuben-linuxkernel@reub.net>
      84d535ad
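      Schematically, the reordering looks like this: the fallible allocation
      moves in front of the scan quoted above, so locks_delete_lock() can no
      longer be followed by an -ENOMEM failure.  This mirrors the excerpt, not
      the literal patch, and omits the surrounding flock_lock_file() context.

              new_fl = locks_alloc_lock();    /* do the step that can fail first */
              if (new_fl == NULL)
                      return -ENOMEM;

              for_each_lock(inode, before) {
                      struct file_lock *fl = *before;
                      if (IS_POSIX(fl))
                              break;
                      if (IS_LEASE(fl))
                              continue;
                      if (filp != fl->fl_file)
                              continue;
                      if (request->fl_type == fl->fl_type)
                              goto out;
                      found = 1;
                      locks_delete_lock(before);      /* safe: no later failure path */
                      break;
              }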
    • locks: kill redundant local variable · 526985b9
      Committed by J. Bruce Fields
      There's no need for another variable local to this loop; we can use the
      variable (of the same name!) already declared at the top of the function,
      and not used till later (at which point it's initialized, so this is safe).
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      526985b9
    • locks: reverse order of posix_locks_conflict() arguments · b842e240
      Committed by J. Bruce Fields
      The first argument to posix_locks_conflict() is meant to be a lock
      request, and the second a lock from an inode's lock list.  It doesn't
      really make a difference which order you call them in, since the only
      asymmetric test in posix_locks_conflict() is the check whether the
      second argument is a posix lock - and every caller already does that
      check for some reason.
      
      But may as well fix posix_test_lock() to call posix_locks_conflict()
      with the arguments in the same order as everywhere else.
      Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
      b842e240
  16. 12 Sep 2007, 1 commit
    • Leases can be hidden by flocks · 0e2f6db8
      Committed by Pavel Emelyanov
      The inode->i_flock list contains the leases, flocks and posix
      locks in the specified order.  However, the flocks are added at
      the head of this list, thus hiding the leases from the F_GETLEASE
      command, from time_out_leases() and other code that expects
      the leases to come first.
      
      The following example will demonstrate this:
      
      #define _GNU_SOURCE
      
      #include <unistd.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/file.h>
      
      static void show_lease(int fd)
      {
              int res;
      
              res = fcntl(fd, F_GETLEASE);
              switch (res) {
                      case F_RDLCK:
                              printf("Read lease\n");
                              break;
                      case F_WRLCK:
                              printf("Write lease\n");
                              break;
                      case F_UNLCK:
                              printf("No leases\n");
                              break;
                      default:
                              printf("Some shit\n");
                              break;
              }
      }
      
      int main(int argc, char **argv)
      {
              int fd, res;
      
              fd = open(argv[1], O_RDONLY);
              if (fd == -1) {
                      perror("Can't open file");
                      return 1;
              }
      
              res = fcntl(fd, F_SETLEASE, F_WRLCK);
              if (res == -1) {
                      perror("Can't set lease");
                      return 1;
              }
      
              show_lease(fd);
      
              if (flock(fd, LOCK_SH) == -1) {
                      perror("Can't flock shared");
                      return 1;
              }
      
              show_lease(fd);
      
              return 0;
      }
      
      The first call to show_lease() will show the write lease set, but
      the second will show no leases.
      
      Fix the flock insertion so that the leases always stay at the head
      of this list.
      
      Found during making the flocks pid-namespaces aware.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Acked-by: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0e2f6db8
  17. 01 Aug 2007, 1 commit
  18. 20 Jul 2007, 1 commit
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Committed by Paul Mundt
      Slab destructors were no longer supported after Christoph's
      c59def9f change. They've been
      BUGs for both slab and slub, and slob never supported them
      either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).  A schematic call-site change follows
      this entry.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      20c2df83
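      A sketch of what each call-site fixup looks like.  The cache name,
      object type and constructor are illustrative, not taken from any real
      call site; the point is only that the trailing destructor argument
      disappears.

      /* before: the final NULL was the now-removed destructor pointer */
      example_cache = kmem_cache_create("example_cache",
                      sizeof(struct example_obj), 0, SLAB_PANIC,
                      example_init_once, NULL);

      /* after */
      example_cache = kmem_cache_create("example_cache",
                      sizeof(struct example_obj), 0, SLAB_PANIC,
                      example_init_once);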
  19. 19 Jul 2007, 5 commits