1. 07 Dec 2010, 1 commit
  2. 29 Oct 2010, 1 commit
    • NTLM auth and sign - Use appropriate server challenge · d3ba50b1
      Committed by Shirish Pargaonkar
      The cryptkey, or server challenge, needs to live in the smb connection
      (struct TCP_Server_Info) for the ntlm and ntlmv2 auth types, because for
      those the cryptkey (Encryption Key) is supplied just once, in the
      Negotiate Protocol response during smb connection setup, and is shared by
      all the smb sessions over that smb connection.
      
      For ntlmssp, the cryptkey or server challenge is provided for every smb
      session in the Type 2 packet of the ntlmssp negotiation; the cryptkey
      supplied in the Negotiate Protocol response during connection setup does
      not apply.
      
      Also rename cryptKey to cryptkey and make the related changes.
      Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
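      A minimal sketch, in C, of where the challenge ends up living under this
      scheme; the struct layouts are simplified and the per-session field name
      is hypothetical, not copied from the cifs headers:

      #define CIFS_CRYPTO_KEY_SIZE 8          /* an NTLM challenge is 8 bytes */

      /* one per smb (TCP) connection: for ntlm/ntlmv2 the challenge arrives
       * once, in the Negotiate Protocol response, and is shared by all the
       * sessions on that connection */
      struct TCP_Server_Info {
              char cryptkey[CIFS_CRYPTO_KEY_SIZE];
              /* ... */
      };

      /* one per smb session: with ntlmssp each session gets its own challenge
       * from the Type 2 packet, so a connection-wide copy would be wrong */
      struct cifs_ses {
              char ntlmssp_challenge[CIFS_CRYPTO_KEY_SIZE];   /* hypothetical */
              /* ... */
      };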
  3. 21 Oct 2010, 1 commit
    • cifs: convert cifs_tcp_ses_lock from a rwlock to a spinlock · 3f9bcca7
      Committed by Suresh Jayaraman
      cifs_tcp_ses_lock is a rwlock which protects the cifs_tcp_ses_list,
      server->smb_ses_list and the ses->tcon_list. It also protects a few
      ref counters in server, ses and tcon. In most cases the critical section
      is small, and in the few cases where it is slightly larger there seems to
      be no real benefit from concurrent access. I briefly considered the RCU
      mechanism, but it appears to me that there is no real need for it.
      
      Replace it with a spinlock and get rid of the last rwlock in the cifs code.
      Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
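      An illustrative sketch of the conversion pattern at the call sites, not
      the actual patch (the list-head parameter exists only to make the example
      self-contained):

      #include <linux/list.h>
      #include <linux/spinlock.h>

      /* before: rwlock_t cifs_tcp_ses_lock; callers used read_lock()/write_lock() */
      static DEFINE_SPINLOCK(cifs_tcp_ses_lock);

      static void example_walk_sessions(struct list_head *smb_ses_list)
      {
              struct list_head *tmp;

              /* every former read_lock()/write_lock() pair becomes spin_lock() */
              spin_lock(&cifs_tcp_ses_lock);
              list_for_each(tmp, smb_ses_list) {
                      /* short critical section: bump a ref counter, inspect a
                       * session - nothing here benefits from reader concurrency */
              }
              spin_unlock(&cifs_tcp_ses_lock);
      }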
  4. 18 Oct 2010, 1 commit
    • cifs: convert GlobalSMBSeslock from a rwlock to regular spinlock · 4477288a
      Committed by Jeff Layton
      Convert this lock to a regular spinlock
      
      A rwlock_t offers little value here. It's more expensive than a regular
      spinlock unless you have a fairly large section of code that runs under
      the read lock and can benefit from the concurrency.
      
      Additionally, we need to ensure that the refcounting for files isn't
      racy and to do that we need to lock areas that can increment it for
      write. That means that the areas that can actually use a read_lock are
      very few and relatively infrequently used.
      
      While we're at it, change the name to something easier to type, and fix
      a bug in find_writable_file. cifsFileInfo_put can sleep and shouldn't be
      called while holding the lock.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
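      A sketch of the find_writable_file fix described above, assuming the
      lock's new, easier-to-type name is cifs_file_list_lock; the helper
      open_file_is_stale() and the function name are hypothetical. The point is
      that cifsFileInfo_put() may sleep, so it is called only after the
      spinlock has been released:

      struct cifsFileInfo *pick_writable_file(struct cifsInodeInfo *cinode)
      {
              struct cifsFileInfo *open_file = NULL;

              spin_lock(&cifs_file_list_lock);
              /* scan the inode's open file list, take a reference on a
               * candidate that is open for write, store it in open_file */
              spin_unlock(&cifs_file_list_lock);

              if (open_file && open_file_is_stale(open_file)) {
                      /* cifsFileInfo_put() can sleep - legal here because the
                       * spinlock above has already been dropped */
                      cifsFileInfo_put(open_file);
                      open_file = NULL;
              }
              return open_file;
      }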
  5. 15 Oct 2010, 1 commit
  6. 02 Oct 2010, 1 commit
    • cifs: prevent infinite recursion in cifs_reconnect_tcon · f569599a
      Committed by Jeff Layton
      cifs_reconnect_tcon is called from smb_init. After a successful
      reconnect, cifs_reconnect_tcon will call reset_cifs_unix_caps. That
      function will, in turn, call CIFSSMBQFSUnixInfo and CIFSSMBSetFSUnixInfo.
      Those functions also call smb_init.
      
      It's possible for the session and tcon reconnect to succeed, and then
      for another cifs_reconnect to occur before CIFSSMBQFSUnixInfo or
      CIFSSMBSetFSUnixInfo can be called. That will cause those functions to
      call smb_init and cifs_reconnect_tcon again, ad infinitum...
      
      Break the infinite recursion by having those functions use a new
      smb_init variant that doesn't attempt to perform a reconnect.
      Reported-and-Tested-by: Michal Suchanek <hramrach@centrum.cz>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
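      A simplified sketch of the call cycle and of the fix; the _sketch names
      are placeholders, and the real cifs functions take several more
      arguments:

      struct tcon_sketch { int need_reconnect; };

      static int smb_init_sketch(struct tcon_sketch *tcon);

      static int smb_init_no_reconnect_sketch(struct tcon_sketch *tcon)
      {
              /* set up the SMB request buffer, but never try to reconnect */
              return 0;
      }

      static int qfs_unix_info_sketch(struct tcon_sketch *tcon)
      {
              /* previously this called smb_init_sketch(); if the server had
               * dropped again, that re-entered the reconnect path below and
               * recursed without bound */
              return smb_init_no_reconnect_sketch(tcon);
      }

      static int reconnect_tcon_sketch(struct tcon_sketch *tcon)
      {
              tcon->need_reconnect = 0;
              return qfs_unix_info_sketch(tcon);  /* reset_cifs_unix_caps step */
      }

      static int smb_init_sketch(struct tcon_sketch *tcon)
      {
              if (tcon->need_reconnect)
                      return reconnect_tcon_sketch(tcon);
              return 0;       /* then set up the SMB request buffer */
      }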
  7. 30 Sep 2010, 1 commit
    • cifs NTLMv2/NTLMSSP: ntlmv2 within ntlmssp authentication code · 2b149f11
      Committed by Shirish Pargaonkar
      Attribute Value (AV) pairs, or Target Info (TI) pairs, are part of
      ntlmv2 authentication.
      The ntlmv2_resp structure had a definition for only two AV pairs,
      so that definition is removed and the AV pairs are now allocated
      dynamically. For servers like Windows 7/2008, the AV pairs sent by the
      server in the challenge packet (Type 2 in the ntlmssp
      exchange/negotiation) can vary.
      
      The server sends them during ntlmssp negotiation, so when ntlmssp is used
      as the authentication mechanism, the Type 2 challenge packet from the
      server carries this information.  Pluck it and use the entire blob for
      authentication purposes.  If the user has not specified one, extract the
      (NetBIOS) domain name from the AV pairs, which is used to calculate the
      ntlmv2 hash.  Servers like Windows 7 are particular about the AV pair
      blob.
      
      Servers like Windows 2003 are not very strict about the contents
      of the AV pair blob used during ntlmv2 authentication.
      So when a security mechanism such as ntlmv2 is used (not ntlmv2 within
      ntlmssp), there is no negotiation, so generate a minimal blob that is
      used in the ntlmv2 authentication as well as sent to the server.
      
      Fields tilen and tiblob are session specific.  AV pair values are defined.
      
      To calculate the ntlmv2 response we need the TI/AV pair blob.
      
      For a sec mech like ntlmssp, the blob is plucked from the Type 2 response
      from the server.  From this blob, the NetBIOS name of the domain is
      retrieved, if the user has not already provided one, to be included in
      the Target String as part of the ntlmv2 hash calculations.
      
      For a sec mech like ntlmv2, create a minimal, two-AV-pair blob.
      
      The allocated blob is freed in case of error.  If there is no error,
      this blob is used in calculating the ntlmv2 response (in
      CalcNTLMv2_response), is also copied into the response sent to the
      server, and is then freed.
      
      The Type 3 ntlmssp response is prepared in a buffer of
      5 * sizeof(struct _AUTHENTICATE_MESSAGE), an empirical value large
      enough to hold the _AUTHENTICATE_MESSAGE plus a blob with the maximum
      possible 10 values as part of the ntlmv2 response, the lmv2 keys, and
      the domain, user, and workstation names, etc.
      
      Also, Kerberos is selected as the default mechanism, over the other
      security mechanisms, if the server supports it.
      Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
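      A sketch of the AV pair wire format from MS-NLMP and of building a
      minimal blob; the constant and function names follow the spec rather
      than the cifs source, and a little-endian host is assumed for brevity:

      #include <stdint.h>
      #include <stdlib.h>
      #include <string.h>

      struct av_pair_hdr {
              uint16_t av_id;         /* e.g. 2 = MsvAvNbDomainName, 0 = MsvAvEOL */
              uint16_t av_len;        /* byte length of the value that follows */
      } __attribute__((packed));

      enum { MSV_AV_EOL = 0, MSV_AV_NB_DOMAIN_NAME = 2 };

      /* build a minimal blob: the NetBIOS domain name pair plus the EOL pair */
      static unsigned char *build_minimal_avpair_blob(const void *dom_utf16,
                                                      uint16_t dom_len,
                                                      size_t *blob_len)
      {
              size_t len = 2 * sizeof(struct av_pair_hdr) + dom_len;
              unsigned char *blob = calloc(1, len);
              struct av_pair_hdr hdr = { MSV_AV_NB_DOMAIN_NAME, dom_len };

              if (!blob)
                      return NULL;
              memcpy(blob, &hdr, sizeof(hdr));
              memcpy(blob + sizeof(hdr), dom_utf16, dom_len);
              /* trailing MsvAvEOL pair (id 0, len 0) is already zeroed by calloc() */
              *blob_len = len;
              return blob;
      }

      When ntlmv2 is used without ntlmssp, a minimal blob of this shape is what
      gets fed into the ntlmv2 response calculation and copied into the
      response, as described above.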
  8. 09 Sep 2010, 1 commit
  9. 21 Aug 2010, 1 commit
  10. 06 May 2010, 2 commits
  11. 27 Apr 2010, 2 commits
  12. 21 Apr 2010, 2 commits
    • f19159dc
    • [CIFS] Neaten cERROR and cFYI macros, reduce text space · b6b38f70
      Committed by Joe Perches
      Neaten the cERROR and cFYI macros, reducing text space by ~2.5K.
      
      Convert '__FILE__ ": " fmt' to '"%s: " fmt, __FILE__' to save text space.
      Surround the macros with do {} while (0).
      Add parentheses to the macros.
      Turn the macro that contained an assignment into a statement-expression macro.
      Remove the now-unnecessary parentheses from cFYI and cERROR uses.
      
      defconfig with CIFS support old
      $ size fs/cifs/built-in.o
         text	   data	    bss	    dec	    hex	filename
       156012	   1760	    148	 157920	  268e0	fs/cifs/built-in.o
      
      defconfig with CIFS support new
      $ size fs/cifs/built-in.o
         text	   data	    bss	    dec	    hex	filename
       153508	   1760	    148	 155416	  25f18	fs/cifs/built-in.o
      
      allyesconfig old:
      $ size fs/cifs/built-in.o
         text	   data	    bss	    dec	    hex	filename
       309138	   3864	  74824	 387826	  5eaf2	fs/cifs/built-in.o
      
      allyesconfig new
      $ size fs/cifs/built-in.o
         text	   data	    bss	    dec	    hex	filename
       305655	   3864	  74824	 384343	  5dd57	fs/cifs/built-in.o
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
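      An illustration of the cleanups listed above, using simplified stand-ins
      for cFYI/cERROR (the real macros live in cifs_debug.h and also check a
      debug-enable flag):

      #include <stdio.h>

      /* before: every expansion merged __FILE__ into the format string, so the
       * file name was duplicated into each call site's string literal */
      #define cFYI_OLD(fmt, ...) \
              printf(__FILE__ ": " fmt "\n", ##__VA_ARGS__)

      /* after: one shared "%s: " format with __FILE__ passed as an argument,
       * and the body wrapped in do { } while (0) so it acts as one statement */
      #define cFYI_NEW(fmt, ...) \
      do { \
              printf("%s: " fmt "\n", __FILE__, ##__VA_ARGS__); \
      } while (0)

      int main(void)
      {
              if (1)
                      cFYI_NEW("rc = %d", 0);   /* safe in an unbraced if/else */
              else
                      cFYI_NEW("not reached");
              return 0;
      }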
  13. 07 Apr 2010, 1 commit
  14. 04 Apr 2010, 2 commits
  15. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Committed by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h, which
      in turn includes gfp.h, making everything defined by those two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming their availability.  As this
      conversion needs to touch a large number of source files, the following
      script is used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make things
         build (like ipr on powerpc/64, which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
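      A tiny illustration of the rule this patch enforces: a file that calls
      kmalloc()/kfree() must now include <linux/slab.h> itself instead of
      relying on percpu.h (via sched.h/module.h) pulling it in.  The module
      below is a made-up example, not part of the patch:

      #include <linux/errno.h>
      #include <linux/init.h>
      #include <linux/module.h>
      #include <linux/slab.h>         /* added explicitly for kmalloc()/kfree() */

      static void *example_buf;

      static int __init example_init(void)
      {
              /* GFP_KERNEL still comes in via slab.h -> gfp.h */
              example_buf = kmalloc(64, GFP_KERNEL);
              return example_buf ? 0 : -ENOMEM;
      }

      static void __exit example_exit(void)
      {
              kfree(example_buf);
      }

      module_init(example_init);
      module_exit(example_exit);
      MODULE_LICENSE("GPL");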
  16. 15 Mar 2010, 1 commit
  17. 06 Mar 2010, 2 commits
  18. 25 Feb 2010, 2 commits
  19. 24 Feb 2010, 6 commits
  20. 09 Feb 2010, 1 commit
  21. 25 Sep 2009, 1 commit
    • cifs: convert oplock breaks to use slow_work facility (try #4) · 3bc303c2
      Committed by Jeff Layton
      This is the fourth respin of the patch to convert oplock breaks to
      use the slow_work facility.
      
      A customer of ours was testing a backport of one of the earlier
      patchsets, and hit a "Busy inodes after umount..." problem. An oplock
      break job had raced with a umount, and the superblock got torn down and
      its memory reused. When the oplock break job tried to dereference the
      inode->i_sb, the kernel oopsed.
      
      This patchset has the oplock break job hold an inode and vfsmount
      reference until the oplock break completes.  With this, there should be
      no need to take a tcon reference (the vfsmount implicitly holds one
      already).
      
      Currently, when an oplock break comes in there's a chance that the
      oplock break job won't occur if the allocation of the oplock_q_entry
      fails. There are also some rather nasty races in the allocation and
      handling of these structs.
      
      Rather than allocating oplock queue entries when an oplock break comes
      in, add a few extra fields to the cifsFileInfo struct. Get rid of the
      dedicated cifs_oplock_thread as well and queue the oplock break job to
      the slow_work thread pool.
      
      This approach also has the advantage that the oplock break jobs can
      potentially run in parallel rather than be serialized like they are
      today.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
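      A rough sketch of the pattern described above.  The slow_work facility
      was later removed from the kernel; the ops layout (get_ref/put_ref/execute)
      and the slow_work_init()/slow_work_enqueue() calls below follow its
      documented interface as an assumption, and the names are not quoted from
      the cifs patch:

      #include <linux/slow-work.h>

      struct cifs_file_sketch {
              struct slow_work oplock_break;  /* embedded, so nothing has to be
                                                 allocated when a break arrives */
      };

      static int oplock_break_get_ref(struct slow_work *work)
      {
              /* pin the inode and vfsmount so a umount cannot tear down the
               * superblock while the break job is pending */
              return 0;
      }

      static void oplock_break_put_ref(struct slow_work *work)
      {
              /* drop the references taken in get_ref */
      }

      static void oplock_break_execute(struct slow_work *work)
      {
              /* flush dirty data and acknowledge the oplock break; jobs for
               * different files can run in parallel in the thread pool */
      }

      static const struct slow_work_ops oplock_break_ops_sketch = {
              .get_ref = oplock_break_get_ref,
              .put_ref = oplock_break_put_ref,
              .execute = oplock_break_execute,
      };

      /* when a break arrives: no allocation, just queue the embedded work */
      static void queue_oplock_break_sketch(struct cifs_file_sketch *cfile)
      {
              slow_work_init(&cfile->oplock_break, &oplock_break_ops_sketch);
              slow_work_enqueue(&cfile->oplock_break);
      }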
  22. 03 Sep 2009, 1 commit
  23. 31 Aug 2009, 1 commit
  24. 10 Jul 2009, 4 commits
  25. 25 Jun 2009, 2 commits