1. 30 Sep 2010, 7 commits
    • cifs: add cifs_sb_master_tcon and convert some callers to use it · 0d424ad0
      Authored by Jeff Layton
      At mount time, we'll always need to create a tcon that will serve as a
      template for others that are associated with the mount. This tcon is
      known as the "master" tcon.
      
      In some cases, we'll need to use that tcon regardless of who's accessing
      the mount. Add an accessor function for the master tcon and go ahead and
      switch the appropriate places to use it.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
    • cifs: add function to get a tcon from cifs_sb · a6e8a845
      Authored by Jeff Layton
      When we convert cifs to do multiple sessions per mount, we'll need more
      than one tcon per superblock. At that point "cifs_sb->tcon" will make
      no sense. Add a new accessor function that gets a tcon given a cifs_sb.
      For now, it just returns cifs_sb->tcon. Later it'll do more.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
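      A minimal sketch of what these two helpers might look like at this
      stage, assuming the mount still stores a single tcon in cifs_sb->tcon
      (cifs_sb_master_tcon is named in the commit above; the second helper's
      name, cifs_sb_tcon, is assumed here):
      
        /* Sketch only: single-tcon layout of this point in time. */
        static inline struct cifsTconInfo *
        cifs_sb_master_tcon(struct cifs_sb_info *cifs_sb)
        {
                /* the tcon created at mount time is the "master" */
                return cifs_sb->tcon;
        }
        
        static inline struct cifsTconInfo *
        cifs_sb_tcon(struct cifs_sb_info *cifs_sb)
        {
                /* identical for now; later this will pick the tcon
                   matching the calling user's session */
                return cifs_sb->tcon;
        }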
    • cifs: add tcon field to cifsFileInfo struct · 5fe97cfd
      Authored by Jeff Layton
      Eventually, we'll have more than one tcon per superblock. At that point,
      we'll need to know which one is associated with a particular fid. For
      now, this is just set from the cifs_sb->tcon pointer, but eventually
      the caller of cifs_new_fileinfo will pass a tcon pointer in.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
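      A sketch of the relevant fragment of the struct with the new field
      (surrounding members elided; the initialization point is as described
      above):
      
        struct cifsFileInfo {
                /* ... existing fields ... */
                struct cifsTconInfo *tcon;  /* tcon this fid belongs to;
                                               set from cifs_sb->tcon for
                                               now, later passed in by the
                                               caller of cifs_new_fileinfo */
                /* ... */
        };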
    • cifs: Allow binding to local IP address. · 3eb9a889
      Authored by Ben Greear
      When using multi-homed machines, it's nice to be able to specify
      the local IP to use for outbound connections.  This patch gives
      cifs the ability to bind to a particular IP address.
      
         Usage:  mount -t cifs -o srcaddr=192.168.1.50,user=foo, ...
         Usage:  mount -t cifs -o srcaddr=2002::100:1,user=foo, ...
      Acked-by: Jeff Layton <jlayton@redhat.com>
      Acked-by: Dr. David Holder <david.holder@erion.co.uk>
      Signed-off-by: Ben Greear <greearb@candelatech.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
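      Conceptually, the connect path just binds the freshly created socket
      to the parsed srcaddr before dialing out. A rough sketch of that step
      (the helper name and its placement are illustrative, not necessarily
      the patch's):
      
        /* Bind to the user-supplied local address, if any, before
         * kernel_connect() is called on the same socket. */
        static int bind_local_addr(struct socket *socket,
                                   struct sockaddr *srcaddr)
        {
                switch (srcaddr->sa_family) {
                case AF_INET:
                        return kernel_bind(socket, srcaddr,
                                           sizeof(struct sockaddr_in));
                case AF_INET6:
                        return kernel_bind(socket, srcaddr,
                                           sizeof(struct sockaddr_in6));
                default:
                        return 0;   /* no srcaddr= given: don't bind */
                }
        }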
    • cifs NTLMv2/NTLMSSP ntlmv2 within ntlmssp authentication code · 2b149f11
      Authored by Shirish Pargaonkar
      Attribute Value (AV) pairs, or Target Info (TI) pairs, are part of
      ntlmv2 authentication.
      The ntlmv2_resp structure had a definition for only two AV pairs,
      so that definition was removed and AV pairs are now allocated
      dynamically.
      For servers like Windows 7/2008, the AV pairs sent by the server in
      the challenge packet (type 2 in the ntlmssp exchange/negotiation)
      can vary.
      
      The server sends them during ntlmssp negotiation. So when ntlmssp is
      used as the authentication mechanism, the type 2 challenge packet
      from the server carries this information.  Pluck it and use the
      entire blob for authentication purposes.  If the user has not
      specified one, extract the (netbios) domain name from the AV pairs;
      it is used to calculate the ntlmv2 hash.  Servers like Windows 7 are
      particular about the AV pair blob.
      
      Servers like Windows 2003 are not very strict about the contents
      of the AV pair blob used during ntlmv2 authentication.
      So when a security mechanism such as ntlmv2 is used (not ntlmv2 within
      ntlmssp), there is no negotiation; generate a minimal blob that is
      used in ntlmv2 authentication and is also sent to the server.
      
      Fields tilen and tiblob are session specific.  AV pair values are defined.
      
      To calculate ntlmv2 response we need ti/av pair blob.
      
      For a sec mech like ntlmssp, the blob is plucked from the type 2
      response from the server.  From this blob, the netbios name of the
      domain is retrieved, if the user has not already provided one, to be
      included in the Target String as part of the ntlmv2 hash calculations.
      
      For a sec mech like ntlmv2, create a minimal, two AV pair blob.
      
      The allocated blob is freed in case of error.  If there is no error,
      this blob is used in calculating the ntlmv2 response (in
      CalcNTLMv2_response), is also copied into the response to the server,
      and is then freed.
      
      The type 3 ntlmssp response is prepared in a buffer of
      5 * sizeof(struct _AUTHENTICATE_MESSAGE), an empirical value large
      enough to hold the _AUTHENTICATE_MESSAGE plus a blob with the maximum
      possible 10 values as part of the ntlmv2 response, the lmv2 keys, and
      the domain, user, and workstation names, etc.
      
      Also, kerberos is selected as the default mechanism, over the other
      security mechanisms, if the server supports it.
      Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
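      The AV (TI) pair format itself is a simple type/length/value sequence.
      A sketch of the wire layout and of scanning the type 2 blob for the
      netbios domain name (struct and constant names here follow common
      NTLMSSP usage and are not necessarily the patch's):
      
        /* One AV pair as it appears on the wire (little-endian);
         * the value bytes follow the header directly. */
        struct av_pair_hdr {
                __le16 type;    /* 0 = end-of-list, 2 = nb domain name */
                __le16 length;  /* length of the value in bytes */
        } __attribute__((packed));
        
        #define AV_EOL            0
        #define AV_NB_DOMAIN_NAME 2
        
        /* Walk the target-info blob from the type 2 packet and return
         * a pointer to the netbios domain name value, or NULL. */
        static char *find_nb_domain(char *blob, unsigned int blob_len)
        {
                unsigned int off = 0;
        
                while (off + sizeof(struct av_pair_hdr) <= blob_len) {
                        struct av_pair_hdr *av =
                                (struct av_pair_hdr *)(blob + off);
        
                        if (le16_to_cpu(av->type) == AV_EOL)
                                break;
                        if (le16_to_cpu(av->type) == AV_NB_DOMAIN_NAME)
                                return blob + off + sizeof(*av);
                        off += sizeof(*av) + le16_to_cpu(av->length);
                }
                return NULL;
        }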
    • cifs NTLMv2/NTLMSSP Change variable name mac_key to session key to reflect the key it holds · 5f98ca9a
      Authored by Shirish Pargaonkar
      Change the name of the variable mac_key to session key.
      The reason for the change is that this structure does not hold a
      message authentication code; it holds the session key (for ntlmv2,
      ntlmv1, etc.).  The mac is generated as a signature in the cifs_calc*
      functions.
      Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
  2. 09 Sep 2010, 2 commits
  3. 25 Aug 2010, 1 commit
  4. 21 Aug 2010, 1 commit
  5. 02 Aug 2010, 9 commits
  6. 23 Jul 2010, 1 commit
    • cifs: use workqueue instead of slow-work · 9b646972
      Authored by Tejun Heo
      Workqueue can now handle high concurrency.  Use system_nrt_wq
      instead of slow-work.
      
      * Updated is_valid_oplock_break() to not call cifs_oplock_break_put()
        as advised by Steve French.  It might cause deadlock.  Instead,
        reference is increased after queueing succeeded and
        cifs_oplock_break() briefly grabs GlobalSMBSeslock before putting
        the cfile to make sure it doesn't put before the matching get is
        finished.
      
      * Anton Blanchard reported that the cifs conversion was using the
        now-gone system_single_wq.  Use system_nrt_wq, which provides a
        non-reentrancy guarantee; that is enough and much better.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Steve French <sfrench@samba.org>
      Cc: Anton Blanchard <anton@samba.org>
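      The shape of the conversion is the standard workqueue pattern; a
      sketch of the two sides described above (cifs_oplock_break_get() is
      assumed here as the counterpart of the cifs_oplock_break_put()
      mentioned in the message):
      
        /* at cifsFileInfo setup: the work item is embedded, so nothing
         * needs to be allocated when a break actually arrives */
        INIT_WORK(&cfile->oplock_break, cifs_oplock_break);
        
        /* in is_valid_oplock_break(): take the reference only after
         * queueing succeeded, per the note above */
        if (queue_work(system_nrt_wq, &cfile->oplock_break))
                cifs_oplock_break_get(cfile);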
  7. 12 May 2010, 1 commit
    • cifs: guard against hardlinking directories · 3d694380
      Authored by Jeff Layton
      When we made serverino the default, we trusted that the field sent by the
      server in the "uniqueid" field was actually unique. It turns out that it
      isn't reliably so.
      
      Samba, in particular, will just put the st_ino in the uniqueid field when
      unix extensions are enabled. When a share spans multiple filesystems, it's
      quite possible that there will be collisions. This is a server bug, but
      when the inodes in question are a directory (as is often the case) and
      there is a collision with the root inode of the mount, the result is a
      kernel panic on umount.
      
      Fix this by checking explicitly for directory inodes with the same
      uniqueid. If that is the case, then we can assume that using server inode
      numbers will be a problem and that they should be disabled.
      
      Fixes Samba bugzilla 7407
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      CC: Stable <stable@kernel.org>
      Reviewed-and-Tested-by: Suresh Jayaraman <sjayaraman@suse.de>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
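      The check fits naturally into the inode comparison callback used with
      iget5_locked(); a sketch of the idea (flag and helper names are
      illustrative):
      
        /* Two distinct directories can never legitimately share an inode
         * number, so a uniqueid match against an existing directory inode
         * means the server's numbers can't be trusted. */
        static int cifs_find_inode(struct inode *inode, void *opaque)
        {
                struct cifs_fattr *fattr = opaque;
        
                if (CIFS_I(inode)->uniqueid != fattr->cf_uniqueid)
                        return 0;
        
                /* flag the collision; the caller then disables server
                 * inode numbers for this mount and retries the lookup */
                if (S_ISDIR(inode->i_mode))
                        fattr->cf_flags |= CIFS_FATTR_INO_COLLISION;
        
                return 1;
        }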
  8. 06 May 2010, 1 commit
  9. 27 Apr 2010, 2 commits
  10. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming their availability.  As this
      conversion needs to touch a large number of source files, the following
      script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  I.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
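      The per-file change itself is mechanical. A typical before/after for a
      .c file that only uses the slab API might look like this (illustrative
      file, not taken from the patch):
      
        /* Before: relied on slab.h being pulled in indirectly, e.g. via
         * module.h -> percpu.h -> slab.h. */
        #include <linux/module.h>
        
        /* After: the dependency is stated explicitly. */
        #include <linux/module.h>
        #include <linux/slab.h>         /* kmalloc, kfree */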
  11. 06 Mar 2010, 1 commit
    • cifs: overhaul cifs_revalidate and rename to cifs_revalidate_dentry · df2cf170
      Authored by Jeff Layton
      cifs_revalidate is renamed to cifs_revalidate_dentry as a later patch
      will add a by-filehandle variant.
      
      Add a new "invalid_mapping" flag to the cifsInodeInfo that indicates
      that the pagecache is considered invalid. Add a new routine to check
      inode attributes whenever they're updated and set that flag if the inode
      has changed on the server.
      
      cifs_revalidate_dentry is then changed to just update the attrcache if
      needed and then to zap the pagecache if it's not valid.
      
      There are some other behavior changes in here as well. Open files are
      now allowed to have their caches invalidated. I see no reason why we'd
      want to keep stale data around just because a file is open. Also,
      cifs_revalidate_cache uses the server_eof for revalidating the file
      size since that should more closely match the size of the file on the
      server.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
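      A sketch of the zap step, assuming a small helper along these lines
      (the helper name is an assumption; invalidate_remote_inode() is the
      stock VFS routine for dropping a remote-changed inode's pagecache):
      
        static void cifs_invalidate_mapping(struct inode *inode)
        {
                struct cifsInodeInfo *cifs_i = CIFS_I(inode);
        
                /* the attribute-update path set this when the inode
                 * appeared to have changed on the server */
                cifs_i->invalid_mapping = false;
        
                if (inode->i_mapping && inode->i_mapping->nrpages != 0)
                        invalidate_remote_inode(inode);
        }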
  12. 25 Feb 2010, 1 commit
  13. 01 Jan 2010, 1 commit
  14. 04 Dec 2009, 1 commit
  15. 25 Sep 2009, 1 commit
    • cifs: convert oplock breaks to use slow_work facility (try #4) · 3bc303c2
      Authored by Jeff Layton
      This is the fourth respin of the patch to convert oplock breaks to
      use the slow_work facility.
      
      A customer of ours was testing a backport of one of the earlier
      patchsets, and hit a "Busy inodes after umount..." problem. An oplock
      break job had raced with a umount, and the superblock got torn down and
      its memory reused. When the oplock break job tried to dereference the
      inode->i_sb, the kernel oopsed.
      
      This patchset has the oplock break job hold an inode and vfsmount
      reference until the oplock break completes.  With this, there should be
      no need to take a tcon reference (the vfsmount implicitly holds one
      already).
      
      Currently, when an oplock break comes in there's a chance that the
      oplock break job won't occur if the allocation of the oplock_q_entry
      fails. There are also some rather nasty races in the allocation and
      handling of these structs.
      
      Rather than allocating oplock queue entries when an oplock break comes
      in, add a few extra fields to the cifsFileInfo struct. Get rid of the
      dedicated cifs_oplock_thread as well and queue the oplock break job to
      the slow_work thread pool.
      
      This approach also has the advantage that the oplock break jobs can
      potentially run in parallel rather than be serialized like they are
      today.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
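      A sketch of the slow_work wiring described above (the ops names mirror
      the description; the exact identifiers in the patch may differ):
      
        #include <linux/slow-work.h>
        
        static const struct slow_work_ops cifs_oplock_break_ops = {
                .get_ref = cifs_oplock_break_get, /* pin inode + vfsmount */
                .put_ref = cifs_oplock_break_put, /* drop them when done */
                .execute = cifs_oplock_break,     /* the break job itself */
        };
        
        /* at cifsFileInfo creation: the work item is embedded in the
         * struct, so nothing is allocated when a break arrives */
        slow_work_init(&cfile->oplock_break, &cifs_oplock_break_ops);
        
        /* when the server sends an oplock break */
        slow_work_enqueue(&cfile->oplock_break);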
  16. 16 Sep 2009, 3 commits
  17. 02 Sep 2009, 2 commits
  18. 10 Jul 2009, 3 commits
  19. 02 Jul 2009, 1 commit
    • cifs: add new cifs_iget function and convert unix codepath to use it · cc0bad75
      Authored by Jeff Layton
      
      In order to unify some codepaths, introduce a common cifs_fattr struct
      for storing inode attributes. The different codepaths (unix, legacy,
      normal, etc...) can fill out this struct with inode info. It can then be
      passed as an arg to a common set of routines to get and update inodes.
      
      Add a new cifs_iget function that uses iget5_locked to identify inodes.
      This will compare inodes based on the uniqueid value in a cifs_fattr
      struct.
      
      Rather than filling out an already-created inode, have
      cifs_get_inode_info_unix instead fill out cifs_fattr and hand that off
      to cifs_iget. cifs_iget can then properly look for hardlinked inodes.
      
      On the readdir side, add a new cifs_readdir_lookup function that spawns
      populated dentries. Redefine FILE_UNIX_INFO so that it's basically a
      FILE_UNIX_BASIC_INFO that has a few fields wrapped around it. This
      allows us to more easily use the same function for filling out the fattr
      as the non-readdir codepath.
      
      With this, we should then have proper hardlink detection and can
      eventually get rid of some nasty CIFS-specific hacks for handling them.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
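      A sketch of the lookup-by-uniqueid core (callback and helper names
      follow the description; the details are assumptions):
      
        struct inode *
        cifs_iget(struct super_block *sb, struct cifs_fattr *fattr)
        {
                unsigned long hash = (unsigned long)fattr->cf_uniqueid;
                struct inode *inode;
        
                /* find or allocate an inode keyed on the uniqueid;
                 * cifs_find_inode compares uniqueids, cifs_init_inode
                 * stamps a new inode with it */
                inode = iget5_locked(sb, hash, cifs_find_inode,
                                     cifs_init_inode, fattr);
                if (!inode)
                        return NULL;
        
                /* fill in or refresh attributes from the fattr */
                cifs_fattr_to_inode(inode, fattr);
                if (inode->i_state & I_NEW)
                        unlock_new_inode(inode);
        
                return inode;
        }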