1. 23 May 2011, 2 commits
  2. 19 May 2011, 2 commits
    • cifs: keep BCC in little-endian format · 820a803f
      Authored by Jeff Layton
      This is the same patch as originally posted, just with some merge
      conflicts fixed up...
      
      Currently, the ByteCount is usually converted to host-endian on receive.
      This is confusing however, as we need to keep two sets of routines for
      accessing it, and keep track of when to use each routine. Munging
      received packets like this also limits when the signature can be
      calculated.
      
      Simplify the code by keeping the received ByteCount in little-endian
      format. This allows us to eliminate a set of routines for accessing it
      and we can now drop the *_le suffixes from the accessor functions since
      that's now implied.
      
      While we're at it, switch all of the places that read the ByteCount
      directly to use the get_bcc inline which should also clean up some
      unaligned accesses.
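      
      For illustration, a minimal sketch of such an accessor follows; the
      struct layout and names are simplified stand-ins, not the actual
      fs/cifs smb_hdr or get_bcc() definitions.
      
          /*
           * Sketch only: simplified structure, not the real fs/cifs code.
           */
          #include <linux/types.h>
          #include <asm/unaligned.h>
          
          struct example_smb_hdr {
                  __be32 smb_buf_length; /* RFC1001 length (big-endian on the wire) */
                  __u8   protocol[4];    /* 0xFF 'S' 'M' 'B' */
                  /* ... command, status, flags, tid, pid, uid, mid ... */
                  __u8   wct;            /* WordCount: number of 16-bit parameter words */
                  /* parameter words follow, then the 16-bit ByteCount (BCC) */
          } __attribute__((packed));
          
          /*
           * The received buffer is no longer munged to host-endian; the
           * ByteCount stays in wire (little-endian) order and may be
           * unaligned, so read it via get_unaligned_le16() at point of use.
           */
          static inline u16 example_get_bcc(struct example_smb_hdr *hdr)
          {
                  u8 *bcc_ptr = (u8 *)(&hdr->wct + 1) + 2 * hdr->wct;
          
                  return get_unaligned_le16(bcc_ptr);
          }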
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
    • consistently use smb_buf_length as be32 for cifs (try 3) · be8e3b00
      Authored by Steve French
      There is one big-endian field in the cifs protocol, the RFC1001
      length, which cifs code (unlike the smb2 code) had been handling as
      u32 until the last possible moment, when it was converted to be32
      (its native form) before sending on the wire. To remove the last
      sparse endian warning, and to make this consistent with the smb2
      implementation (which always treats the fields in their native size
      and endianness), convert all uses of smb_buf_length to be32.
      
      This version incorporates Christoph's comment about using
      be32_add_cpu, and fixes a typo in the second version of the patch.
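      
      A minimal sketch of the approach, using be32_add_cpu(); the struct
      and helper names below are simplified, not the actual fs/cifs code.
      
          /*
           * Sketch only: simplified struct and helper names.
           */
          #include <linux/types.h>
          #include <asm/byteorder.h>
          
          struct example_smb_hdr {
                  __be32 smb_buf_length; /* RFC1001 length: big-endian end to end */
                  __u8   protocol[4];
                  /* ... */
          };
          
          /* Set the length once, converting from CPU order at the boundary. */
          static inline void example_set_rfc1001_len(struct example_smb_hdr *hdr, u32 len)
          {
                  hdr->smb_buf_length = cpu_to_be32(len);
          }
          
          /*
           * Growing the packet no longer needs a convert-modify-convert
           * dance: be32_add_cpu() adjusts the big-endian field in place.
           */
          static inline void example_inc_rfc1001_len(struct example_smb_hdr *hdr, int count)
          {
                  be32_add_cpu(&hdr->smb_buf_length, count);
          }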
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
  3. 11 February 2011, 1 commit
    • cifs: don't always drop malformed replies on the floor (try #3) · 71823baf
      Authored by Jeff Layton
      Slight revision to this patch...use min_t() instead of conditional
      assignment. Also, remove the FIXME comment and replace it with the
      explanation that Steve gave earlier.
      
      After receiving a packet, we currently check the header. If it's no
      good, then we toss it out and continue the loop, leaving the caller
      waiting on that response.
      
      In cases where the packet has length inconsistencies, but the MID is
      valid, this leads to unneeded delays. That's especially problematic now
      that the client waits indefinitely for responses.
      
      Instead, don't immediately discard the packet if checkSMB fails. Try to
      find a matching mid_q_entry, mark it as having a malformed response and
      issue the callback.
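      
      A rough sketch of that control flow follows; the structures and
      helpers are hypothetical placeholders, not the real fs/cifs
      demultiplex thread code.
      
          /* Sketch only: hypothetical stand-ins for the real dispatch code. */
          enum example_mid_state {
                  EXAMPLE_MID_RESPONSE_RECEIVED,
                  EXAMPLE_MID_RESPONSE_MALFORMED,
          };
          
          struct example_mid_entry {
                  unsigned int mid;              /* multiplex id from the SMB header */
                  enum example_mid_state state;
                  void (*callback)(struct example_mid_entry *mid);
          };
          
          /* Look up the waiter for this mid; NULL if nothing matches. */
          struct example_mid_entry *example_find_mid(unsigned int mid);
          
          static void example_handle_reply(unsigned int mid_from_header, int checksmb_rc)
          {
                  struct example_mid_entry *mid = example_find_mid(mid_from_header);
          
                  if (!mid)
                          return;        /* truly unmatched reply: drop it */
          
                  /*
                   * A checkSMB() failure no longer strands the waiter: flag
                   * the response as malformed and let the callback decide.
                   */
                  mid->state = checksmb_rc ? EXAMPLE_MID_RESPONSE_MALFORMED
                                           : EXAMPLE_MID_RESPONSE_RECEIVED;
                  mid->callback(mid);
          }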
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
  4. 05 February 2011, 1 commit
  5. 31 January 2011, 3 commits
  6. 21 January 2011, 9 commits
  7. 20 January 2011, 4 commits
  8. 07 January 2011, 1 commit
  9. 27 October 2010, 1 commit
    • NTLM auth and sign - Allocate session key/client response dynamically · 21e73393
      Authored by Shirish Pargaonkar
      Start calculating the auth response within a session.  Move/add
      pertinent data structures like the session key, server challenge and
      ntlmv2_hash into a session structure.  We should do the calculations
      within a session before copying the session key and response over to
      server data structures because a session setup can fail.
      
      Only after the very first smb session setup succeeds is its session
      key copied to become the session key of the smb connection.  This key
      stays with the smb connection throughout its life.
      sequence_number within the server is set to 0x2.
      
      The authentication Message Authentication Key (mak), which consists
      of the session key followed by the client response within structure
      session_key, is now dynamic.  Every authentication type allocates
      key + response sized memory within its session structure and later
      either assigns it (when the session's session key becomes the
      connection's session key) or frees it once the client response is sent.
      
      ntlm/ntlmi authentication functions are rearranged.  A function
      named setup_ntlm_resp(), similar to setup_ntlmv2_resp(), replaces
      function cifs_calculate_session_key().
      
      CIFS_SESS_KEY_SIZE is changed to 16 to reflect the byte size of the
      key it holds.
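      
      A minimal sketch of the allocation pattern described above; the
      constant, struct and function names are illustrative, not the actual
      fs/cifs session structures.
      
          /* Sketch only: illustrative names. */
          #include <linux/slab.h>
          #include <linux/errno.h>
          
          #define EXAMPLE_SESS_KEY_SIZE 16        /* byte size of the session key */
          
          struct example_session_key {
                  unsigned int len;       /* session key + client response length */
                  char *response;         /* allocated per authentication type */
          };
          
          /*
           * Each authentication type allocates exactly the memory it needs
           * for the session key followed by the client response.  The buffer
           * is freed once the response is sent, unless this session's key is
           * promoted to be the connection's session key.
           */
          static int example_alloc_auth_key(struct example_session_key *key,
                                            unsigned int resp_len)
          {
                  key->len = EXAMPLE_SESS_KEY_SIZE + resp_len;
                  key->response = kmalloc(key->len, GFP_KERNEL);
                  if (!key->response)
                          return -ENOMEM;
          
                  return 0;
          }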
      Reviewed-by: Jeff Layton <jlayton@samba.org>
      Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
  10. 30 September 2010, 1 commit
  11. 09 September 2010, 1 commit
  12. 21 August 2010, 1 commit
  13. 06 May 2010, 1 commit
  14. 21 April 2010, 1 commit
    • [CIFS] Neaten cERROR and cFYI macros, reduce text space · b6b38f70
      Authored by Joe Perches
      Neaten cERROR and cFYI macros, reduce text space
      ~2.5K
      
      Convert '__FILE__ ": " fmt' to '"%s: " fmt, __FILE__' to save text space
      Surround macros with do {} while
      Add parentheses to macros
      Make statement expression macro from macro with assign
      Remove now unnecessary parentheses from cFYI and cERROR uses
      
      defconfig with CIFS support old
      $ size fs/cifs/built-in.o
         text	   data	    bss	    dec	    hex	filename
       156012	   1760	    148	 157920	  268e0	fs/cifs/built-in.o
      
      defconfig with CIFS support new
      $ size fs/cifs/built-in.o
         text	   data	    bss	    dec	    hex	filename
       153508	   1760	    148	 155416	  25f18	fs/cifs/built-in.o
      
      allyesconfig old:
      $ size fs/cifs/built-in.o
         text	   data	    bss	    dec	    hex	filename
       309138	   3864	  74824	 387826	  5eaf2	fs/cifs/built-in.o
      
      allyesconfig new
      $ size fs/cifs/built-in.o
         text	   data	    bss	    dec	    hex	filename
       305655	   3864	  74824	 384343	  5dd57	fs/cifs/built-in.o
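      
      For illustration, a before/after sketch of the macro change; the
      macro bodies are hypothetical, and the real ones in
      fs/cifs/cifs_debug.h differ in detail.
      
          #include <linux/kernel.h>
          
          /*
           * Before: __FILE__ is concatenated into every format string, so
           * each distinct message embeds the file name, and there is no
           * do/while guard.
           */
          #define cFYI_OLD(fmt, arg...) \
                  printk(KERN_DEBUG "CIFS " __FILE__ ": " fmt "\n", ##arg)
          
          /*
           * After: "%s: " with __FILE__ passed as an argument (one shared
           * copy of the file-name string), wrapped in do { } while (0) so
           * the macro behaves as a single statement inside if/else.
           */
          #define cFYI_NEW(fmt, arg...)                                        \
          do {                                                                 \
                  printk(KERN_DEBUG "CIFS %s: " fmt "\n", __FILE__, ##arg);    \
          } while (0)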
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
  15. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h, and if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
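      
      As a hypothetical illustration of the resulting per-file edit (this
      example file is invented, not taken from the actual patch):
      
          /*
           * This used to compile only because percpu.h (via module.h or
           * sched.h) dragged in slab.h; after the cleanup the file must
           * include slab.h directly for kmalloc()/kfree().
           */
          #include <linux/module.h>
          #include <linux/init.h>
          #include <linux/errno.h>
          #include <linux/slab.h>         /* added by the conversion */
          
          static void *example_buf;
          
          static int __init example_init(void)
          {
                  example_buf = kmalloc(128, GFP_KERNEL);
                  return example_buf ? 0 : -ENOMEM;
          }
          
          static void __exit example_exit(void)
          {
                  kfree(example_buf);
          }
          
          module_init(example_init);
          module_exit(example_exit);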
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build test were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as a bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  16. 25 September 2009, 1 commit
    • cifs: convert oplock breaks to use slow_work facility (try #4) · 3bc303c2
      Authored by Jeff Layton
      This is the fourth respin of the patch to convert oplock breaks to
      use the slow_work facility.
      
      A customer of ours was testing a backport of one of the earlier
      patchsets, and hit a "Busy inodes after umount..." problem. An oplock
      break job had raced with a umount, and the superblock got torn down and
      its memory reused. When the oplock break job tried to dereference the
      inode->i_sb, the kernel oopsed.
      
      This patchset has the oplock break job hold an inode and vfsmount
      reference until the oplock break completes.  With this, there should be
      no need to take a tcon reference (the vfsmount implicitly holds one
      already).
      
      Currently, when an oplock break comes in there's a chance that the
      oplock break job won't occur if the allocation of the oplock_q_entry
      fails. There are also some rather nasty races in the allocation and
      handling of these structs.
      
      Rather than allocating oplock queue entries when an oplock break comes
      in, add a few extra fields to the cifsFileInfo struct. Get rid of the
      dedicated cifs_oplock_thread as well and queue the oplock break job to
      the slow_work thread pool.
      
      This approach also has the advantage that the oplock break jobs can
      potentially run in parallel rather than be serialized like they are
      today.
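      
      A rough sketch of the reference-holding rule described above; the
      struct and function names are illustrative, and the slow_work item
      and its queueing are omitted.
      
          /* Sketch only: illustrative names, not the real cifsFileInfo. */
          #include <linux/fs.h>
          #include <linux/mount.h>
          #include <linux/errno.h>
          
          struct example_file_info {
                  struct inode *inode;
                  struct vfsmount *mnt;
                  /* the oplock break work item is embedded here instead of
                   * being allocated when the break arrives */
          };
          
          /*
           * Before queueing the oplock break job: pin the inode and the
           * mount so the superblock cannot be torn down (and its memory
           * reused) underneath the worker.
           */
          static int example_pin_for_oplock_break(struct example_file_info *cfile)
          {
                  if (!igrab(cfile->inode))
                          return -ENOENT;
                  mntget(cfile->mnt);
          
                  return 0;
          }
          
          /* When the oplock break job completes: drop the references again. */
          static void example_oplock_break_done(struct example_file_info *cfile)
          {
                  mntput(cfile->mnt);
                  iput(cfile->inode);
          }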
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
  17. 02 September 2009, 1 commit
  18. 29 January 2009, 2 commits
  19. 26 December 2008, 6 commits