1. 10 October 2007 (9 commits)
    • Rework /proc/locks via seq_files and seq_list helpers · 7f8ada98
      Committed by Pavel Emelyanov
      Currently /proc/locks is shown with a proc_read function, but its behavior
      is rather complex as it has to manually handle current offset and buffer
      length.  On the other hand, files that show objects from lists can be
      easily reimplemented using the sequential files and the seq_list_XXX()
      helpers.
      
      This saves (as usual) 16 lines of code and more than 200 from
      the .text section; a sketch of the resulting seq_file pattern
      follows this entry.
      
      [akpm@linux-foundation.org: no externs in C]
      [akpm@linux-foundation.org: warning fixes]
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      7f8ada98
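      As context for the conversion, here is a minimal sketch of the
      seq_file + seq_list_*() pattern (simplified and hypothetical, not the
      actual fs/locks.c code; it assumes the global file_lock_list and the
      fl_link member used to chain locks onto it):

      #include <linux/seq_file.h>
      #include <linux/smp_lock.h>

      static void *locks_start(struct seq_file *f, loff_t *pos)
      {
              lock_kernel();
              return seq_list_start(&file_lock_list, *pos);
      }

      static void *locks_next(struct seq_file *f, void *v, loff_t *pos)
      {
              return seq_list_next(v, &file_lock_list, pos);
      }

      static void locks_stop(struct seq_file *f, void *v)
      {
              unlock_kernel();
      }

      static int locks_show(struct seq_file *f, void *v)
      {
              struct file_lock *fl = list_entry(v, struct file_lock, fl_link);

              /* format one lock per output line into the seq_file buffer */
              seq_printf(f, "lock at %p\n", fl);
              return 0;
      }

      static const struct seq_operations locks_seq_operations = {
              .start  = locks_start,
              .next   = locks_next,
              .stop   = locks_stop,
              .show   = locks_show,
      };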
    • fs/locks.c: use list_for_each_entry() instead of list_for_each() · 094f2825
      Committed by Matthias Kaehlcke
      fs/locks.c: use list_for_each_entry() instead of list_for_each() in
      posix_locks_deadlock() and get_locks_status(); the difference between
      the two iterators is sketched after this entry.
      Signed-off-by: Matthias Kaehlcke <matthias.kaehlcke@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      094f2825
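      A hypothetical helper showing what the conversion buys (illustrative
      only, not the actual posix_locks_deadlock()/get_locks_status() code;
      it assumes a blocker whose waiters are chained on fl_block):

      #include <linux/fs.h>
      #include <linux/list.h>

      static void walk_blocked_locks(struct file_lock *blocker)
      {
              struct list_head *tmp;
              struct file_lock *fl;

              /* Before: list_for_each() yields a bare list_head, so the body
               * needs an explicit list_entry() to recover the structure. */
              list_for_each(tmp, &blocker->fl_block) {
                      fl = list_entry(tmp, struct file_lock, fl_block);
                      /* ... inspect fl ... */
              }

              /* After: list_for_each_entry() folds that step into the iterator. */
              list_for_each_entry(fl, &blocker->fl_block, fl_block) {
                      /* ... inspect fl ... */
              }
      }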
    • Cleanup macros for distinguishing mandatory locks · a16877ca
      Committed by Pavel Emelyanov
      The combination of the S_ISGID bit set and the S_IXGRP bit unset is used
      to mark an inode as "mandatory lockable", and there is a macro for this
      check called MANDATORY_LOCK(inode).  However, fs/locks.c and some
      filesystems still perform the explicit i_mode checking.  Besides, as
      Andrew pointed out, the macro itself is buggy, since it dereferences the
      inode argument twice.
      
      Convert this macro into a static inline function and switch its users to
      it, making the code shorter and more readable (a sketch of the resulting
      helpers follows this entry).
      
      The __mandatory_lock() helper is to be used in places where IS_MANDLOCK()
      for the superblock is already known to be true.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Eric Van Hensbergen <ericvh@gmail.com>
      Cc: Ron Minnich <rminnich@sandia.gov>
      Cc: Latchesar Ionkov <lucho@ionkov.net>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      a16877ca
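      A sketch of what such helpers look like (modelled on the description
      above; treat it as illustrative rather than the authoritative
      include/linux/fs.h definitions):

      static inline int __mandatory_lock(struct inode *ino)
      {
              /* S_ISGID set with S_IXGRP clear marks mandatory locking */
              return (ino->i_mode & (S_ISGID | S_IXGRP)) == S_ISGID;
      }

      static inline int mandatory_lock(struct inode *ino)
      {
              /* full check: the superblock must be mounted with "mand" too */
              return IS_MANDLOCK(ino) && __mandatory_lock(ino);
      }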
    • locks: Fix potential OOPS in generic_setlease() · 85c59580
      Committed by Pavel Emelyanov
      This code is run under lock_kernel(), which is dropped during
      sleeping operations, so the following race is possible:
      
      CPU1:                                CPU2:
        vfs_setlease();                    vfs_setlease();
        lock_kernel();
                                           lock_kernel(); /* spin */
        generic_setlease():
          ...
          for (before = ...)
          /* here we found some lease after
           * which we will insert the new one
           */
          fl = locks_alloc_lock();
          /* go to sleep in this allocation and
           * drop the BKL
           */
                                           generic_setlease():
                                             ...
                                             for (before = ...)
                                             /* here we find the "before" pointing
                                              * at the one we found on CPU1
                                              */
                                            ->fl_change(my_before, arg);
                                                    lease_modify();
                                                           locks_free_lock();
                                                           /* and we freed it */
                                           ...
                                           unlock_kernel();
         locks_insert_lock(before, fl);
         /* OOPS! We have just tried to add the lease
          * at the tail of already removed one
          */
      
      Similar races are already handled elsewhere in this code: all the
      allocations are performed before any checks or updates.  The resulting
      pattern is sketched after this entry.
      
      Thanks to Kamalesh Babulal for testing and for a bug report on an
      earlier version.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      Cc: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
      85c59580
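      A sketch of the allocate-first pattern (illustrative, not the actual
      generic_setlease() body; helper names follow fs/locks.c conventions):

      static int setlease_pattern(struct inode *inode, struct file_lock *request)
      {
              struct file_lock *new_fl;
              struct file_lock **before;

              new_fl = locks_alloc_lock();    /* may sleep; the BKL can be dropped here */
              if (new_fl == NULL)
                      return -ENOMEM;

              /* From here on nothing sleeps, so 'before' cannot go stale. */
              for (before = &inode->i_flock; *before; before = &(*before)->fl_next) {
                      /* ... locate the lease to modify or the insertion point ... */
              }

              locks_copy_lock(new_fl, request);
              locks_insert_lock(before, new_fl);
              return 0;
      }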
    • Use list_first_entry in locks_wake_up_blocks · f0c1cd0e
      Committed by Pavel Emelyanov
      This routine deletes all the elements from the list with a
      "while (!list_empty())" loop, and we already have a list_first_entry()
      macro to make that look nicer :)  The converted loop is sketched below.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      f0c1cd0e
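      An illustrative drain loop in the style of locks_wake_up_blocks()
      (simplified; the real routine also notifies lock managers via fl_notify):

      static void drain_blocked(struct file_lock *blocker)
      {
              while (!list_empty(&blocker->fl_block)) {
                      struct file_lock *waiter;

                      /* list_first_entry(head, type, member) is just
                       * list_entry(head->next, type, member) */
                      waiter = list_first_entry(&blocker->fl_block,
                                                struct file_lock, fl_block);
                      __locks_delete_block(waiter);   /* unlink from the list */
                      wake_up(&waiter->fl_wait);
              }
      }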
    • locks: fix flock_lock_file() comment · 02888f41
      Committed by J. Bruce Fields
      This comment wasn't updated when lease support was added, and it makes
      essentially the same mistake that the code made before a recent bugfix.
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      02888f41
    • Memory shortage can result in inconsistent flocks state · 84d535ad
      Committed by Pavel Emelyanov
      When flock_lock_file() is called to change a flock from F_RDLCK to
      F_WRLCK or vice versa, the existing flock can be silently removed.
      
      Look:
              for_each_lock(inode, before) {
                      struct file_lock *fl = *before;
                      if (IS_POSIX(fl))
                              break;
                      if (IS_LEASE(fl))
                              continue;
                      if (filp != fl->fl_file)
                              continue;
                      if (request->fl_type == fl->fl_type)
                              goto out;
                      found = 1;
                      locks_delete_lock(before); <<<<<< !
                      break;
              }
      
      If, after this point, the subsequent locks_alloc_lock() fails, the
      return code will be -ENOMEM, but the existing lock has already been
      removed.
      
      It is a known property that such "re-locking" is not atomic; in the racy
      case the file should stay locked (albeit by some other process), but here
      the file ends up unlocked.
      
      The proposal is to prepare the new lock in advance, so that the code that
      follows has no chance to fail (see the sketch after this entry).
      
      Found while making flocks pid-namespace aware.
      
      (Note: Thanks to Reuben Farrelly for finding a bug in an earlier version
      of this patch.)
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      Cc: Reuben Farrelly <reuben-linuxkernel@reub.net>
      84d535ad
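      A sketch of the fixed ordering (illustrative and simplified, not the
      actual flock_lock_file() diff): the replacement lock is allocated before
      the existing one is deleted, so an allocation failure can no longer leave
      the file silently unlocked.

      static int flock_relock_pattern(struct file *filp,
                                      struct file_lock *request,
                                      struct inode *inode)
      {
              struct file_lock *new_fl;
              struct file_lock **before;

              new_fl = locks_alloc_lock();            /* done up front now */
              if (new_fl == NULL)
                      return -ENOMEM;

              for_each_lock(inode, before) {
                      struct file_lock *fl = *before;
                      if (IS_POSIX(fl))
                              break;
                      if (IS_LEASE(fl))
                              continue;
                      if (filp != fl->fl_file)
                              continue;
                      locks_delete_lock(before);      /* safe: nothing below can fail */
                      break;
              }

              locks_copy_lock(new_fl, request);
              locks_insert_lock(before, new_fl);      /* simplified; the real code re-walks */
              return 0;
      }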
    • locks: kill redundant local variable · 526985b9
      Committed by J. Bruce Fields
      There's no need for another variable local to this loop; we can use the
      variable (of the same name!) already declared at the top of the function,
      and not used till later (at which point it's initialized, so this is safe).
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      526985b9
    • locks: reverse order of posix_locks_conflict() arguments · b842e240
      Committed by J. Bruce Fields
      The first argument to posix_locks_conflict() is meant to be a lock
      request, and the second a lock from an inode's lock list.  It doesn't
      really make a difference which order you call them in, since the only
      asymmetric test in posix_locks_conflict() is the check whether the second
      argument is a posix lock, and every caller already does that check for
      some reason.
      
      But we may as well fix posix_test_lock() to call posix_locks_conflict()
      with the arguments in the same order as everywhere else.
      Signed-off-by: N"J. Bruce Fields" <bfields@citi.umich.edu>
      b842e240
  2. 12 September 2007 (1 commit)
    • Leases can be hidden by flocks · 0e2f6db8
      Committed by Pavel Emelyanov
      The inode->i_flock list contains the leases, flocks and posix locks, in
      that order.  However, the flocks are added at the head of this list, thus
      hiding the leases from the F_GETLEASE command, from time_out_leases() and
      from other code that expects the leases to come first.
      
      The following example will demonstrate this:
      
      #define _GNU_SOURCE
      
      #include <unistd.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/file.h>
      
      static void show_lease(int fd)
      {
              int res;
      
              res = fcntl(fd, F_GETLEASE);
              switch (res) {
                      case F_RDLCK:
                              printf("Read lease\n");
                              break;
                      case F_WRLCK:
                              printf("Write lease\n");
                              break;
                      case F_UNLCK:
                              printf("No leases\n");
                              break;
                      default:
                              printf("Some shit\n");
                              break;
              }
      }
      
      int main(int argc, char **argv)
      {
              int fd, res;
      
              fd = open(argv[1], O_RDONLY);
              if (fd == -1) {
                      perror("Can't open file");
                      return 1;
              }
      
              res = fcntl(fd, F_SETLEASE, F_WRLCK);
              if (res == -1) {
                      perror("Can't set lease");
                      return 1;
              }
      
              show_lease(fd);
      
              if (flock(fd, LOCK_SH) == -1) {
                      perror("Can't flock shared");
                      return 1;
              }
      
              show_lease(fd);
      
              return 0;
      }
      
      The first call to show_lease() will show the write lease set, but
      the second will show no leases.
      
      Fix the flock insertion so that the leases always stay at the head
      of this list (sketched after this entry).
      
      Found while making flocks pid-namespace aware.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Acked-by: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0e2f6db8
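      The idea behind the fix, sketched as a hypothetical helper (not the
      actual flock_lock_file() change): skip past any leases before linking a
      new flock into inode->i_flock, so leases stay at the head.

      static void insert_flock_after_leases(struct inode *inode,
                                            struct file_lock *new_fl)
      {
              struct file_lock **before = &inode->i_flock;

              /* leases must remain first; step past them before inserting */
              while (*before && IS_LEASE(*before))
                      before = &(*before)->fl_next;

              locks_insert_lock(before, new_fl);
      }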
  3. 01 August 2007 (1 commit)
  4. 20 July 2007 (1 commit)
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Committed by Paul Mundt
      Slab destructors were no longer supported after Christoph's
      c59def9f change. They've been
      BUGs for both slab and slub, and slob never supported them
      either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).  The shape of the change at a typical
      callsite is sketched after this entry.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      20c2df83
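      A sketch of what the change looks like at a callsite such as the
      fs/locks.c cache setup (schematic; the init function name here is
      hypothetical, and the constructor signature is the three-argument form
      used by kernels of that era):

      static struct kmem_cache *filelock_cache;

      static void init_once(void *foo, struct kmem_cache *cache, unsigned long flags)
      {
              /* constructor: initialise the embedded lists, wait queue, etc. */
      }

      static int __init filelock_cache_init(void)
      {
              /*
               * Before: the call carried a trailing destructor pointer,
               * invariably NULL:
               *
               *   filelock_cache = kmem_cache_create("file_lock_cache",
               *                           sizeof(struct file_lock), 0,
               *                           SLAB_PANIC, init_once, NULL);
               *
               * After: the dtor parameter is gone from the prototype.
               */
              filelock_cache = kmem_cache_create("file_lock_cache",
                                                 sizeof(struct file_lock), 0,
                                                 SLAB_PANIC, init_once);
              return 0;
      }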
  5. 19 July 2007 (9 commits)
  6. 17 May 2007 (1 commit)
    • Remove SLAB_CTOR_CONSTRUCTOR · a35afb83
      Committed by Christoph Lameter
      SLAB_CTOR_CONSTRUCTOR is always specified, so there is no point in
      checking it.  The shape of the change in a typical constructor is
      sketched after this entry.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Steven French <sfrench@us.ibm.com>
      Cc: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
      Cc: Miklos Szeredi <miklos@szeredi.hu>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Roman Zippel <zippel@linux-m68k.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Dave Kleikamp <shaggy@austin.ibm.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Jan Kara <jack@ucw.cz>
      Cc: David Chinner <dgc@sgi.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a35afb83
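      Illustrative before/after of a slab constructor (hypothetical helper
      names, not a verbatim diff of any one filesystem):

      /* Before: the body was guarded by a flag that was always passed. */
      static void init_once_before(void *foo, struct kmem_cache *cache,
                                   unsigned long flags)
      {
              if (flags & SLAB_CTOR_CONSTRUCTOR)
                      locks_init_lock(foo);
      }

      /* After: the check is gone and the body runs unconditionally. */
      static void init_once_after(void *foo, struct kmem_cache *cache,
                                  unsigned long flags)
      {
              locks_init_lock(foo);
      }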
  7. 11 May 2007 (1 commit)
    • locks: fix F_GETLK regression (failure to find conflicts) · 129a84de
      Committed by J. Bruce Fields
      In 9d6a8c5c we changed posix_test_lock
      to modify its single file_lock argument instead of taking separate input
      and output arguments.  This makes it no longer safe to set the output
      lock's fl_type to F_UNLCK before looking for a conflict, since that
      means searching for a conflict against a lock with type F_UNLCK.
      
      This fixes a regression which causes F_GETLK to incorrectly report no
      conflict on most filesystems (including any filesystem that doesn't do
      its own locking); the corrected ordering is sketched after this entry.
      
      Also fix posix_lock_to_flock() to copy the lock type.  This isn't
      strictly necessary, since the caller already does this; but it seems
      less likely to cause confusion in the future.
      
      Thanks to Doug Chapman for the bug report.
      Signed-off-by: N"J. Bruce Fields" <bfields@citi.umich.edu>
      Acked-by: NDoug Chapman <doug.chapman@hp.com>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      129a84de
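      A simplified sketch of the corrected ordering in posix_test_lock()
      (illustrative; helper names as in fs/locks.c, error handling omitted):

      int posix_test_lock_sketch(struct file *filp, struct file_lock *fl)
      {
              struct file_lock *cfl;

              lock_kernel();
              for (cfl = filp->f_path.dentry->d_inode->i_flock; cfl; cfl = cfl->fl_next) {
                      if (!IS_POSIX(cfl))
                              continue;
                      if (posix_locks_conflict(fl, cfl))
                              break;
              }
              if (cfl)
                      __locks_copy_lock(fl, cfl);     /* report the conflicting lock */
              else
                      fl->fl_type = F_UNLCK;          /* only now is F_UNLCK safe to set */
              unlock_kernel();

              return cfl != NULL;
      }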
  8. 08 May 2007 (1 commit)
    • slab allocators: Remove SLAB_DEBUG_INITIAL flag · 50953fe9
      Committed by Christoph Lameter
      I have never seen a use of SLAB_DEBUG_INITIAL.  It is only supported by
      SLAB.
      
      I think its purpose was to have a callback after an object has been freed
      to verify that the state is the constructor state again?  The callback is
      performed before each freeing of an object.
      
      I would think that it is much easier to check the object state manually
      before the free.  That also places the check near the code that
      manipulates the object.
      
      Also, the SLAB_DEBUG_INITIAL callback is only performed if the kernel was
      compiled with SLAB debugging on.  If there were code in a constructor
      handling SLAB_DEBUG_INITIAL then it would have to be conditional on
      SLAB_DEBUG, otherwise it would just be dead code.  But there is no such
      code in the kernel.  I think SLAB_DEBUG_INITIAL is too problematic to make
      real use of, difficult to understand, and there are easier ways to
      accomplish the same effect (i.e. add debug code before kfree).
      
      There is a related flag SLAB_CTOR_VERIFY that is frequently checked to be
      clear in fs inode caches.  Remove the pointless checks (they would even be
      pointless without removal of SLAB_DEBUG_INITIAL) from the fs constructors.
      
      This is the last slab flag that SLUB did not support.  Remove the check for
      unimplemented flags from SLUB.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      50953fe9
  9. 07 May 2007 (7 commits)
    • locks: add fl_grant callback for asynchronous lock return · 2beb6614
      Committed by Marc Eshel
      Acquiring a lock on a cluster filesystem may require communication with
      remote hosts, and to avoid blocking lockd or nfsd threads during such
      communication, we allow the results to be returned asynchronously.
      
      When a ->lock() call needs to block, the file system will return
      -EINPROGRESS, and then later return the results with a call to the
      routine in the fl_grant field of the lock_manager_operations struct.
      
      This differs from the case when ->lock returns -EAGAIN to a blocking
      lock request; in that case, the filesystem calls fl_notify when the lock
      is granted, and the caller retries the original lock.  So while
      fl_notify is merely a hint to the caller that it should retry, fl_grant
      actually communicates the final result of the lock operation (with the
      lock already acquired in the succesful case).
      
      Therefore fl_grant takes a lock, a status and, for the test lock case, a
      conflicting lock.  We also allow fl_grant to return an error to the
      filesystem, to handle the case where the fl_grant request arrives after
      the lock manager has already given up waiting for it.  An approximate
      shape of the callback is sketched after this entry.
      Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      2beb6614
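      An approximate sketch of the two callbacks side by side (field shapes as
      described above; neighbouring members elided, so treat include/linux/fs.h
      as authoritative):

      struct lock_manager_operations_sketch {
              /* hint that a blocked request may now be retried */
              void (*fl_notify)(struct file_lock *fl);
              /*
               * asynchronous final result of a ->lock() call that returned
               * -EINPROGRESS: the lock, an optional conflicting lock (for the
               * test case) and the status; may return an error if the lock
               * manager has already given up waiting.
               */
              int (*fl_grant)(struct file_lock *fl, struct file_lock *conf,
                              int result);
      };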
    • locks: add lock cancel command · 9b9d2ab4
      Committed by Marc Eshel
      Lock managers need to be able to cancel pending lock requests.  In the case
      where the exported filesystem manages its own locks, it's not sufficient just
      to call posix_unblock_lock(); we need to let the filesystem know what's
      happening too.
      
      We do this by adding a new fcntl lock command: FL_CANCELLK.  Some day
      this might also be made available to userspace applications that could
      benefit from an asynchronous locking API.  A minimal cancel helper is
      sketched after this entry.
      Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
      Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
      9b9d2ab4
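      A minimal sketch of a cancel helper in the spirit of this change
      (illustrative; it assumes the F_CANCELLK command constant and the
      two-argument posix_unblock_lock() interface of that era):

      static void cancel_pending_lock(struct file *filp, struct file_lock *fl)
      {
              /* let a filesystem that manages its own locks drop the pending
               * request, then unblock the local waiter */
              if (filp->f_op && filp->f_op->lock)
                      filp->f_op->lock(filp, F_CANCELLK, fl);
              posix_unblock_lock(filp, fl);
      }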
    • locks: allow {vfs,posix}_lock_file to return conflicting lock · 150b3934
      Committed by Marc Eshel
      The nfsv4 protocol's lock operation, in the case of a conflict, returns
      information about the conflicting lock.
      
      It's unclear how clients can use this, so for now we're not going so far as to
      add a filesystem method that can return a conflicting lock, but we may as well
      return something in the local case when it's easy to.
      Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
      Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
      150b3934
    • locks: factor out generic/filesystem switch from setlock code · 7723ec97
      Committed by Marc Eshel
      Factor out the code that switches between generic and filesystem-specific
      lock methods; eventually we want to call this from lock managers (lockd
      and nfsd) too; currently they only call the generic methods.
      
      This patch does that for all the setlk code; the resulting switch is
      sketched after this entry.
      Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
      Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
      7723ec97
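      A sketch of the factored-out switch, close to what vfs_lock_file() ends
      up looking like once the conflicting-lock argument from the companion
      patch above is included (simplified, not the exact in-tree body):

      int vfs_lock_file_sketch(struct file *filp, unsigned int cmd,
                               struct file_lock *fl, struct file_lock *conf)
      {
              if (filp->f_op && filp->f_op->lock)
                      return filp->f_op->lock(filp, cmd, fl);  /* filesystem-specific */
              return posix_lock_file(filp, fl, conf);          /* generic VFS locking */
      }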
    • locks: factor out generic/filesystem switch from test_lock · 3ee17abd
      Committed by J. Bruce Fields
      Factor out the code that switches between generic and filesystem-specific lock
      methods; eventually we want to call this from lock managers (lockd and nfsd)
      too; currently they only call the generic methods.
      
      This patch does that for test_lock.
      
      Note that this hasn't been necessary until recently, because the few
      filesystems that define ->lock() (nfs, cifs...) aren't exportable via NFS.
      However GFS (and, in the future, other cluster filesystems) need to implement
      their own locking to get cluster-coherent locking, and also want to be able to
      export locking to NFS (lockd and NFSv4).
      
      So we accomplish this by factoring out code such as this and exporting it for
      the use of lockd and nfsd.
      Signed-off-by: N"J. Bruce Fields" <bfields@citi.umich.edu>
      3ee17abd
    • locks: give posix_test_lock same interface as ->lock · 9d6a8c5c
      Committed by Marc Eshel
      posix_test_lock() and ->lock() do the same job but have gratuitously
      different interfaces.  Modify posix_test_lock() so the two agree,
      simplifying some code in the process.
      Signed-off-by: Marc Eshel <eshel@almaden.ibm.com>
      Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
      9d6a8c5c
    • locks: make ->lock release private data before returning in GETLK case · 70cc6487
      Committed by J. Bruce Fields
      The file_lock argument to ->lock is used to return the conflicting lock
      when one is found.  There's no reason for the filesystem to return any
      private information with this conflicting lock, but nfsv4 does.
      
      Fix the nfsv4 client, and modify locks.c to stop calling
      fl_release_private for it in this case.
      Signed-off-by: N"J. Bruce Fields" <bfields@citi.umich.edu>
      Cc: "Trond Myklebust" <Trond.Myklebust@netapp.com>"
      70cc6487
  10. 17 April 2007 (2 commits)
  11. 09 December 2006 (1 commit)
  12. 08 December 2006 (2 commits)
  13. 02 October 2006 (1 commit)
  14. 01 October 2006 (1 commit)
  15. 15 August 2006 (1 commit)
  16. 06 July 2006 (1 commit)