  1. 02 Dec 2008 (7 commits)
  2. 28 Nov 2008 (1 commit)
    • udf: Fix BUG_ON() in destroy_inode() · 52b19ac9
      Committed by Jan Kara
      udf_clear_inode() can leave behind buffers on mapping's i_private list (when
      we truncated preallocation). Call invalidate_inode_buffers() so that the list
      is properly cleaned up before we return from udf_clear_inode(). This is ugly
      and suggests that we should clean up preallocation earlier than in clear_inode(),
      but currently there's no such call available since drop_inode() is called under
      inode lock and thus is unusable for disk operations.
      Signed-off-by: Jan Kara <jack@suse.cz>
      52b19ac9
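      A minimal sketch of the idea in the commit above, assuming a simplified clear_inode-style
      function; the surrounding body is a hypothetical stand-in, and only the call to
      invalidate_inode_buffers() and its placement come from the commit message:

        #include <linux/fs.h>
        #include <linux/buffer_head.h>  /* invalidate_inode_buffers() */

        /* Hedged sketch, not the real fs/udf/inode.c code. */
        static void udf_clear_inode_sketch(struct inode *inode)
        {
            /* ... truncate preallocated extents; this can leave buffers
             * on the mapping's private list ... */

            /* Drop any buffers still attached to the mapping so that
             * destroy_inode() does not hit its BUG_ON() later. */
            invalidate_inode_buffers(inode);

            /* ... rest of the usual clear_inode() teardown ... */
        }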
  3. 27 Nov 2008 (1 commit)
    • [CIFS] fix regression in cifs_write_begin/cifs_write_end · a98ee8c1
      Committed by Jeff Layton
      The conversion to write_begin/write_end interfaces had a bug where we
      were passing a bad parameter to cifs_readpage_worker. Rather than
      passing the page offset of the start of the write, we needed to pass the
      offset of the beginning of the page. This was reliably showing up as
      data corruption in the fsx-linux test from LTP.
      
      It also became evident that this code was occasionally doing unnecessary
      read calls. Optimize those away by using the PG_checked flag to indicate
      that the unwritten part of the page has been initialized.
      
      CC: Nick Piggin <npiggin@suse.de>
      Acked-by: Dave Kleikamp <shaggy@us.ibm.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      a98ee8c1
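      To make the offset mix-up concrete, here is a hedged sketch of the arithmetic involved.
      Only page_offset(), PageUptodate() and PageChecked() are real kernel helpers here;
      read_page_from_server() is a hypothetical stand-in for cifs_readpage_worker():

        #include <linux/fs.h>
        #include <linux/mm.h>
        #include <linux/pagemap.h>

        int read_page_from_server(struct file *file, struct page *page,
                                  loff_t *poffset);  /* hypothetical */

        /* Sketch: when a write does not cover the whole page, the rest of
         * the page must be read in first, starting at the page boundary,
         * not at the write position. */
        static int fill_partial_page_sketch(struct file *file, struct page *page,
                                            loff_t pos, unsigned int len)
        {
            loff_t page_start = page_offset(page);  /* beginning of the page */

            (void)pos;  /* the write offset itself is the wrong value to pass */

            if (len == PAGE_SIZE || PageUptodate(page) || PageChecked(page))
                return 0;  /* nothing needs to be read in */

            return read_page_from_server(file, page, &page_start);
        }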
  4. 25 Nov 2008 (3 commits)
    • NLM: client-side nlm_lookup_host() should avoid matching on srcaddr · a8d82d9b
      Committed by Chuck Lever
      Since commit c98451bd, the loop in nlm_lookup_host() unconditionally
      compares the host's h_srcaddr field to the incoming source address.
      For client-side nlm_host entries, both are always AF_UNSPEC, so this
      check is unnecessary.
      
      Since commit 781b61a6, which added support for AF_INET6 addresses to
      nlm_cmp_addr(), nlm_cmp_addr() now returns FALSE for AF_UNSPEC
      addresses, which causes nlm_lookup_host() to create a fresh nlm_host
      entry every time it is called on the client.
      
      These extra entries will eventually expire once the server is
      unmounted, so the impact of this regression, introduced with lockd
      IPv6 support in 2.6.28, should be minor.
      
      We could fix this by adding an arm in nlm_cmp_addr() for AF_UNSPEC
      addresses, but really, nlm_lookup_host() shouldn't be matching on the
      srcaddr field for client-side nlm_host lookups.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      a8d82d9b
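      As a rough illustration of the lookup change, assuming simplified names: same_address()
      stands in for nlm_cmp_addr(), and the structure below is a hypothetical subset of
      struct nlm_host, not the real one.

        #include <linux/socket.h>
        #include <linux/in.h>

        int same_address(const struct sockaddr *a, const struct sockaddr *b);  /* hypothetical */

        struct nlm_host_sketch {
            struct sockaddr_storage h_addr;
            struct sockaddr_storage h_srcaddr;
        };

        static int host_matches_sketch(const struct nlm_host_sketch *host,
                                       const struct sockaddr *sap,
                                       const struct sockaddr *src,
                                       int server_side)
        {
            if (!same_address((const struct sockaddr *)&host->h_addr, sap))
                return 0;
            /* Only server-side lookups should compare source addresses.
             * Client-side entries carry AF_UNSPEC here, and an
             * AF_INET6-aware comparison reports a mismatch, creating a
             * fresh nlm_host on every call. */
            if (server_side &&
                !same_address((const struct sockaddr *)&host->h_srcaddr, src))
                return 0;
            return 1;
        }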
    • nfsd: use of uninitialized list head on error exit in nfs4recover.c · e4625eb8
      Committed by J. Bruce Fields
      Thanks to Matthew Dodd for this bug report:
      
      A file label issue while running SELinux in MLS mode provoked the
      following bug, which is a result of use before init on a 'struct list_head'.
      
      In nfsd4_list_rec_dir(), if the call to dentry_open() fails, the 'goto
      out' skips INIT_LIST_HEAD(), which results in the normally improbable
      case where list_entry() returns NULL.
      
      Trace follows.
      
      NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
      SELinux:  Context unconfined_t:object_r:var_lib_nfs_t:s0 is not valid
      (left unmapped).
      type=1400 audit(1227298063.609:282): avc:  denied  { read } for
      pid=1890 comm="rpc.nfsd" name="v4recovery" dev=dm-0 ino=148726
      scontext=system_u:system_r:nfsd_t:s0-s15:c0.c1023
      tcontext=system_u:object_r:unlabeled_t:s15:c0.c1023 tclass=dir
      BUG: unable to handle kernel NULL pointer dereference at 00000004
      IP: [<c050894e>] list_del+0x6/0x60
      *pde = 0d9ce067 *pte = 00000000
      Oops: 0000 [#1] SMP
      Modules linked in: nfsd lockd nfs_acl auth_rpcgss exportfs autofs4
      sunrpc ipv6 dm_multipath scsi_dh ppdev parport_pc sg parport floppy
      ata_piix pata_acpi ata_generic libata pcnet32 i2c_piix4 mii pcspkr
      i2c_core dm_snapshot dm_zero dm_mirror dm_log dm_mod BusLogic sd_mod
      scsi_mod crc_t10dif ext3 jbd mbcache uhci_hcd ohci_hcd ehci_hcd [last
      unloaded: microcode]
      
      Pid: 1890, comm: rpc.nfsd Not tainted (2.6.27.5-37.fc9.i686 #1)
      EIP: 0060:[<c050894e>] EFLAGS: 00010217 CPU: 0
      EIP is at list_del+0x6/0x60
      EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: cd99e480
      ESI: cf9caed8 EDI: 00000000 EBP: cf9caebc ESP: cf9caeb8
        DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
      Process rpc.nfsd (pid: 1890, ti=cf9ca000 task=cf4de580 task.ti=cf9ca000)
      Stack: 00000000 cf9caef0 d0a9f139 c0496d04 d0a9f217 fffffff3 00000000 00000000
             00000000 00000000 cf32b220 00000000 00000008 00000801 cf9caefc d0a9f193
             00000000 cf9caf08 d0a9b6ea 00000000 cf9caf1c d0a874f2 cf9c3004 00000008
      Call Trace:
        [<d0a9f139>] ? nfsd4_list_rec_dir+0xf3/0x13a [nfsd]
        [<c0496d04>] ? do_path_lookup+0x12d/0x175
        [<d0a9f217>] ? load_recdir+0x0/0x26 [nfsd]
        [<d0a9f193>] ? nfsd4_recdir_load+0x13/0x34 [nfsd]
        [<d0a9b6ea>] ? nfs4_state_start+0x2a/0xc5 [nfsd]
        [<d0a874f2>] ? nfsd_svc+0x51/0xff [nfsd]
        [<d0a87f2d>] ? write_svc+0x0/0x1e [nfsd]
        [<d0a87f48>] ? write_svc+0x1b/0x1e [nfsd]
        [<d0a87854>] ? nfsctl_transaction_write+0x3a/0x61 [nfsd]
        [<c04b6a4e>] ? sys_nfsservctl+0x116/0x154
        [<c04975c1>] ? putname+0x24/0x2f
        [<c04975c1>] ? putname+0x24/0x2f
        [<c048d49f>] ? do_sys_open+0xad/0xb7
        [<c048d337>] ? filp_close+0x50/0x5a
        [<c048d4eb>] ? sys_open+0x1e/0x26
        [<c0403cca>] ? syscall_call+0x7/0xb
        [<c064007b>] ? init_cyrix+0x185/0x490
        =======================
      Code: 75 e1 8b 53 08 8d 4b 04 8d 46 04 e8 75 00 00 00 8b 53 10 8d 4b 0c
      8d 46 0c e8 67 00 00 00 5b 5e 5f 5d c3 90 90 55 89 e5 53 89 c3 <8b> 40
      04 8b 00 39 d8 74 16 50 53 68 3e d6 6f c0 6a 30 68 78 d6
      EIP: [<c050894e>] list_del+0x6/0x60 SS:ESP 0068:cf9caeb8
      ---[ end trace a89c4ad091c4ad53 ]---
      
      Cc: Matthew N. Dodd <Matthew.Dodd@spart.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      e4625eb8
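      The underlying pattern is generic: initialize the list head before the first operation
      whose failure path walks the list. A hedged sketch with invented names
      (open_recovery_dir() is hypothetical, standing in for the dentry_open() call):

        #include <linux/list.h>
        #include <linux/slab.h>

        int open_recovery_dir(void);  /* hypothetical; may fail */

        struct name_entry_sketch {
            struct list_head list;
        };

        static int collect_names_sketch(struct list_head *names)
        {
            struct name_entry_sketch *entry, *tmp;
            int status;

            INIT_LIST_HEAD(names);  /* must happen before any 'goto out' */

            status = open_recovery_dir();
            if (status)
                goto out;

            /* ... read the directory and add entries to 'names' ... */
        out:
            /* Safe even on the early-error path, because the head is
             * always initialized by the time we get here. */
            list_for_each_entry_safe(entry, tmp, names, list) {
                list_del(&entry->list);
                kfree(entry);
            }
            return status;
        }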
    • nfsd: clean up grace period on early exit · 2c5e7615
      Committed by J. Bruce Fields
      If nfsd was shut down before the grace period ended, we could end up
      with a freed object still on grace_list.  Thanks to Jeff Moyer for
      reporting the resulting list corruption warnings.
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      Tested-by: Jeff Moyer <jmoyer@redhat.com>
      2c5e7615
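      A short sketch of the shutdown rule this fix enforces, with hypothetical names:
      anything linked onto the global grace list must be unlinked before it is freed.

        #include <linux/list.h>

        static LIST_HEAD(grace_list_sketch);  /* stand-in for the real grace list */

        struct grace_waiter_sketch {          /* hypothetical */
            struct list_head grace_entry;
        };

        static void early_shutdown_sketch(struct grace_waiter_sketch *w)
        {
            /* If the grace period has not ended yet, the object is still on
             * the list; unlink it so the list never references freed memory
             * (the corruption reported by Jeff Moyer). */
            if (!list_empty(&w->grace_entry))
                list_del_init(&w->grace_entry);
        }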
  5. 22 Nov 2008 (3 commits)
    • UBIFS: pre-allocate bulk-read buffer · 3477d204
      Committed by Artem Bityutskiy
      To avoid memory allocation failure during bulk-read, pre-allocate
      a bulk-read buffer, so that if there is only one bulk-reader at
      a time, it would just use the pre-allocated buffer and would not
      do any memory allocation. However, if there is more than one bulk-reader
      at a time, only one of them uses the pre-allocated buffer, while the
      others allocate buffers for themselves.
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
      3477d204
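      A hedged sketch of the "one pre-allocated buffer, kmalloc only for concurrent readers"
      pattern; the structure below is a hypothetical subset of the UBIFS mount state, not the
      actual code:

        #include <linux/mutex.h>
        #include <linux/slab.h>

        struct bread_state_sketch {
            void *bulk_read_buf;          /* pre-allocated at mount time */
            size_t bulk_read_buf_size;
            struct mutex bulk_read_lock;  /* serializes use of the buffer */
        };

        /* First bulk-reader gets the pre-allocated buffer; any concurrent
         * reader falls back to allocating its own. */
        static void *get_bulk_read_buf_sketch(struct bread_state_sketch *c,
                                              size_t len, int *allocated)
        {
            if (len <= c->bulk_read_buf_size &&
                mutex_trylock(&c->bulk_read_lock)) {
                *allocated = 0;
                return c->bulk_read_buf;
            }
            *allocated = 1;
            return kmalloc(len, GFP_NOFS | __GFP_NOWARN);
        }

        static void put_bulk_read_buf_sketch(struct bread_state_sketch *c,
                                             void *buf, int allocated)
        {
            if (allocated)
                kfree(buf);
            else
                mutex_unlock(&c->bulk_read_lock);
        }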
    • UBIFS: do not allocate too much · 6c0c42cd
      Committed by Artem Bityutskiy
      Bulk-read allocates 128KiB or more using kmalloc. The allocation
      starts failing often when memory gets fragmented. UBIFS still works
      fine in this case, because it falls back to the standard
      (non-optimized) read method. This patch teaches bulk-read to allocate
      exactly the amount of memory it needs, instead of allocating 128KiB
      every time.

      This patch is also a preparation for the further fix where we'll have
      a pre-allocated bulk-read buffer as well. For example, the @bu object
      is now prepared in 'ubifs_bulk_read()', so we can later pass either
      pre-allocated or freshly allocated buffer information to
      'ubifs_do_bulk_read()', or teach 'ubifs_do_bulk_read()' not to
      allocate 'bu->buf' if it is already there.
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
      6c0c42cd
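      The sizing change amounts to computing the buffer length from what the lookup actually
      found, rather than always asking for the 128 KiB worst case. A hedged sketch with
      invented names (the cap and the structure are assumptions, not the real UBIFS code):

        #include <linux/slab.h>
        #include <linux/errno.h>

        #define BULK_READ_MAX_SKETCH (128 * 1024)  /* historical fixed size */

        struct bu_info_sketch {   /* hypothetical subset of the @bu object */
            void *buf;
            size_t buf_len;
        };

        static int alloc_bu_buf_exact_sketch(struct bu_info_sketch *bu, size_t needed)
        {
            if (needed > BULK_READ_MAX_SKETCH)
                needed = BULK_READ_MAX_SKETCH;

            bu->buf = kmalloc(needed, GFP_NOFS | __GFP_NOWARN);
            if (!bu->buf)
                return -ENOMEM;  /* caller falls back to the plain read path */
            bu->buf_len = needed;
            return 0;
        }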
    • UBIFS: do not print scary memory allocation warnings · 39ce81ce
      Committed by Artem Bityutskiy
      Bulk-read allocates a lot of memory with 'kmalloc()', and when memory
      is or becomes fragmented 'kmalloc()' fails with a scary warning. But
      because bulk-read is just an optimization, UBIFS keeps working fine.
      Suppress the warning by passing the __GFP_NOWARN option to 'kmalloc()'.

      This patch also introduces a macro for the magic 128KiB constant.
      This is just neater.

      Note, this does not really fix the problem we had, but just hides
      the warnings. Further patches fix the problem.
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
      39ce81ce
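      The warning suppression itself is just an allocation flag; a minimal sketch (the macro
      name is invented, standing in for the constant the patch introduces):

        #include <linux/slab.h>

        #define BULK_READ_BUF_SIZE_SKETCH (128 * 1024)  /* hypothetical name */

        static void *alloc_bulk_read_buf_quietly(void)
        {
            /* __GFP_NOWARN: failure is harmless because bulk-read is only
             * an optimization, so skip the allocation-failure warning.
             * GFP_NOFS: we may be called from within filesystem code. */
            return kmalloc(BULK_READ_BUF_SIZE_SKETCH, GFP_NOFS | __GFP_NOWARN);
        }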
  6. 21 Nov 2008 (1 commit)
    • [CIFS] Do not attempt to close invalidated file handles · ddb4cbfc
      Committed by Steve French
      If a connection with open file handles has gone down
      and come back up and reconnected without reopening
      the file handle yet, do not attempt to send an SMB close
      request for this handle in cifs_close.  We were
      checking for the connection being invalid in cifs_close
      but since the connection may have been reconnected
      we also need to check whether the file handle
      was marked invalid (otherwise we could close the
      wrong file handle by accident).
      Acked-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      ddb4cbfc
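      A hedged sketch of the close-path check described above; the structures and
      send_smb_close() are hypothetical stand-ins, not the actual cifs data types:

        #include <linux/types.h>

        struct open_file_sketch {
            bool handle_invalid;   /* not reopened since the reconnect */
            u16 netfid;            /* server-side file id */
        };

        struct server_sketch {
            bool need_reconnect;   /* connection went down at some point */
        };

        int send_smb_close(struct server_sketch *srv, u16 netfid);  /* hypothetical */

        /* Only send an SMB close if the connection is usable *and* this
         * particular handle is still valid; closing by a stale netfid
         * after a reconnect could hit the wrong file. */
        static int close_file_sketch(struct server_sketch *srv,
                                     struct open_file_sketch *f)
        {
            if (!srv->need_reconnect && !f->handle_invalid)
                return send_smb_close(srv, f->netfid);

            return 0;  /* nothing to tell the server; free local state only */
        }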
  7. 20 Nov 2008 (3 commits)
  8. 19 Nov 2008 (1 commit)
  9. 18 Nov 2008 (6 commits)
  10. 17 Nov 2008 (3 commits)
  11. 16 Nov 2008 (1 commit)
    • Fix inotify watch removal/umount races · 8f7b0ba1
      Committed by Al Viro
      Inotify watch removals suck violently.
      
      To kick the watch out we need (in this order) inode->inotify_mutex and
      ih->mutex.  That's fine if we have a hold on inode; however, for all
      other cases we need to make damn sure we don't race with umount.  We can
      *NOT* just grab a reference to a watch - inotify_unmount_inodes() will
      happily sail past it and we'll end with reference to inode potentially
      outliving its superblock.
      
      Ideally we just want to grab an active reference to superblock if we
      can; that will make sure we won't go into inotify_umount_inodes() until
      we are done.  Cleanup is just deactivate_super().
      
      However, that leaves a messy case - what if we *are* racing with
      umount() and active references to superblock can't be acquired anymore?
      We can bump ->s_count, grab ->s_umount, which will almost certainly wait
      until the superblock is shut down and the watch in question is pining
      for fjords.  That's fine, but there is a problem - we might have hit the
      window between ->s_active getting to 0 / ->s_count - below S_BIAS (i.e.
      the moment when superblock is past the point of no return and is heading
      for shutdown) and the moment when deactivate_super() acquires
      ->s_umount.
      
      We could just do drop_super() yield() and retry, but that's rather
      antisocial and this stuff is luser-triggerable.  OTOH, having grabbed
      ->s_umount and having found that we'd got there first (i.e.  that
      ->s_root is non-NULL) we know that we won't race with
      inotify_umount_inodes().
      
      So we could grab a reference to watch and do the rest as above, just
      with drop_super() instead of deactivate_super(), right? Wrong.  We had
      to drop ih->mutex before we could grab ->s_umount.  So the watch
      could've been gone already.
      
      That still can be dealt with - we need to save watch->wd, do idr_find()
      and compare its result with our pointer.  If they match, we either have
      the damn thing still alive or we'd lost not one but two races at once,
      the watch had been killed and a new one got created with the same ->wd
      at the same address.  That couldn't have happened in inotify_destroy(),
      but inotify_rm_wd() could run into that.  Still, "new one got created"
      is not a problem - we have every right to kill it or leave it alone,
      whatever's more convenient.
      
      So we can use idr_find(...) == watch && watch->inode->i_sb == sb as
      "grab it and kill it" check.  If it's been our original watch, we are
      fine, if it's a newcomer - nevermind, just pretend that we'd won the
      race and kill the fscker anyway; we are safe since we know that its
      superblock won't be going away.
      
      And yes, this is far beyond mere "not very pretty"; so's the entire
      concept of inotify to start with.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Acked-by: Greg KH <greg@kroah.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8f7b0ba1
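      The reasoning above is dense, so here is a heavily simplified sketch of the control
      flow it describes. The pin/unpin helpers and kill_watch() are hypothetical; the real
      code manipulates ->s_count, ->s_umount and S_BIAS directly inside the inotify core:

        #include <linux/fs.h>
        #include <linux/idr.h>

        struct watch_sketch {      /* hypothetical subset of an inotify watch */
            int wd;
            struct inode *inode;
        };

        int pin_sb_active(struct super_block *sb);   /* hypothetical: grab s_active */
        int pin_sb_passive(struct super_block *sb);  /* hypothetical: s_count + s_umount */
        void unpin_sb(struct super_block *sb);       /* hypothetical */
        void kill_watch(struct watch_sketch *w);     /* hypothetical */

        /* Removal path when we do not already hold the inode:
         * 1. try to pin the superblock so inotify_umount_inodes() cannot run;
         * 2. if that fails, fall back to the passive pin and learn whether
         *    umount has already won;
         * 3. re-validate the watch by wd via the idr, because ih->mutex was
         *    dropped and the watch may have been freed and reused meanwhile. */
        static void remove_watch_racefree_sketch(struct idr *wd_idr,
                                                 struct watch_sketch *watch,
                                                 struct super_block *sb)
        {
            int wd = watch->wd;  /* saved before dropping the watch mutex */

            if (!pin_sb_active(sb) && !pin_sb_passive(sb))
                return;  /* umount already won; the watch dies with the sb */

            /* Same wd, same pointer, same superblock: safe to kill it, whether
             * it is the original watch or an identical newcomer. */
            if (idr_find(wd_idr, wd) == (void *)watch && watch->inode->i_sb == sb)
                kill_watch(watch);

            unpin_sb(sb);
        }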
  12. 15 Nov 2008 (3 commits)
  13. 14 Nov 2008 (4 commits)
    • [CIFS] clean up server protocol handling · 3ec332ef
      Committed by Steve French
      We're currently declaring both a sockaddr_in and sockaddr6_in on the
      stack, but we really only need storage for one of them. Declare a
      sockaddr struct and cast it to the proper type. Also, eliminate the
      protocolType field in the TCP_Server_Info struct. It's redundant since
      we have a sa_family field in the sockaddr anyway.
      
      We may need to revisit this if SCTP is ever implemented, but for now
      this will simplify the code.
      
      CIFS over IPv6 also has a number of problems currently. This fixes all
      of them that I found. Eventually, it would be nice to move more of the
      code to be protocol independent, but this is a start.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      3ec332ef
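      The "one storage blob, branch on the address family" pattern described above looks
      roughly like this. The function is illustrative only, and it uses sockaddr_storage,
      which is an assumption on top of the commit text's "sockaddr struct":

        #include <linux/socket.h>
        #include <linux/in.h>
        #include <linux/in6.h>
        #include <linux/inet.h>   /* in_aton() */

        /* Fill a single sockaddr_storage instead of keeping separate
         * sockaddr_in / sockaddr_in6 variables; sa_family then also makes
         * a separate protocolType field redundant. */
        static void fill_dst_addr_sketch(struct sockaddr_storage *dst,
                                         const char *ipv4_str,
                                         const struct in6_addr *ipv6)
        {
            if (ipv6) {
                struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)dst;

                sin6->sin6_family = AF_INET6;
                sin6->sin6_addr = *ipv6;
            } else {
                struct sockaddr_in *sin = (struct sockaddr_in *)dst;

                sin->sin_family = AF_INET;
                sin->sin_addr.s_addr = in_aton(ipv4_str);
            }
        }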
    • [CIFS] remove unused list, add new cifs sock list to prepare for mount/umount fix · fb396016
      Committed by Steve French
      Also adds two lines missing from the previous patch (for the need reconnect flag in the
      /proc/fs/cifs/DebugData handling)
      
      The new global_cifs_sock_list is added and initialized in init_cifs, but not used yet.
      Jeff Layton will be adding code to use it and to remove the GlobalTcon and GlobalSMBSession
      lists.
      
      CC: Jeff Layton <jlayton@redhat.com>
      CC: Shirish Pargaonkar <shirishp@us.ibm.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      fb396016
    • [CIFS] Fix cifs reconnection flags · 3b795210
      Committed by Steve French
      In preparation for Jeff's big umount/mount fixes to remove the possibility of
      various races in cifs mount and linked list handling of sessions, sockets and
      tree connections, this patch cleans up some repetitive code in cifs_mount,
      and addresses a problem with ses->status and tcon->tidStatus in which we
      were overloading the "need_reconnect" state with other status in that
      field.  So the "need_reconnect" flag has been broken out from those
      two state fields (needing reconnect is not mutually exclusive with some of
      the other possible tid and ses states).  In addition, a few exit cases in
      cifs_mount were cleaned up, and a problem was fixed in which a tcon flag
      (for lease support) was not being set consistently on the second mount of
      the same share.
      
      CC: Jeff Layton <jlayton@redhat.com>
      CC: Shirish Pargaonkar <shirishp@us.ibm.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      3b795210
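      The state split is a simple data-structure change; a hedged sketch of the
      before/after shape, with invented type names rather than the real cifs ones:

        /* Before (sketch): reconnect state overloaded into the lifecycle enum. */
        enum ses_status_sketch {
            SES_NEW_SKETCH,
            SES_GOOD_SKETCH,
            SES_EXITING_SKETCH,
            SES_NEED_RECONNECT_SKETCH,  /* mixed in with genuine lifecycle states */
        };

        /* After (sketch): "needs reconnect" is orthogonal to the lifecycle
         * state, so it becomes its own flag on the session / tcon. */
        struct ses_sketch {
            enum ses_status_sketch status;    /* lifecycle only */
            unsigned int need_reconnect:1;    /* may be set in any state */
        };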
    • dlm: fix shutdown cleanup · 278afcbf
      Committed by David Teigland
      Fixes a regression from commit 0f8e0d9a,
      "dlm: allow multiple lockspace creates".
      
      An extraneous 'else' slipped into a code fragment being moved from
      release_lockspace() to dlm_release_lockspace().  The result of the
      unwanted 'else' is that dlm threads and structures are not stopped
      and cleaned up when the final dlm lockspace is removed.  Trying to
      create a new lockspace again afterward will fail with
      "kmem_cache_create: duplicate cache dlm_conn" because the cache
      was not previously destroyed.
      Signed-off-by: David Teigland <teigland@redhat.com>
      278afcbf
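      For clarity, the shape of the bug: the stray 'else' chained the final-teardown step
      onto the wrong condition, so it was skipped exactly when it was needed. A hedged
      sketch with invented names, not the actual dlm code:

        void stop_dlm_threads_and_caches(void);  /* hypothetical */

        static void put_lockspace_sketch(int error, int *lockspace_count)
        {
            if (!error)
                (*lockspace_count)--;
            /* Buggy version:
             *     else if (*lockspace_count == 0)
             *             stop_dlm_threads_and_caches();
             * The 'else' ties the cleanup to the error path, so after the
             * last successful release the dlm threads and the "dlm_conn"
             * kmem cache are left behind, and the next lockspace create
             * fails with a duplicate-cache error. */
            if (*lockspace_count == 0)
                stop_dlm_threads_and_caches();
        }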
  14. 13 Nov 2008 (2 commits)
  15. 11 Nov 2008 (1 commit)