1. 20 August 2010, 15 commits
  2. 19 August 2010, 12 commits
  3. 18 August 2010, 13 commits
    •
      NFS: Fix an Oops in the NFSv4 atomic open code · 0a377cff
      Committed by Trond Myklebust
      Adam Lackorzynski reports:
      
      with 2.6.35.2 I'm getting this reproducible Oops:
      
      [  110.825396] BUG: unable to handle kernel NULL pointer dereference at
      (null)
      [  110.828638] IP: [<ffffffff811247b7>] encode_attrs+0x1a/0x2a4
      [  110.828638] PGD be89f067 PUD bf18f067 PMD 0
      [  110.828638] Oops: 0000 [#1] SMP
      [  110.828638] last sysfs file: /sys/class/net/lo/operstate
      [  110.828638] CPU 2
      [  110.828638] Modules linked in: rtc_cmos rtc_core rtc_lib amd64_edac_mod
      i2c_amd756 edac_core i2c_core dm_mirror dm_region_hash dm_log dm_snapshot
      sg sr_mod usb_storage ohci_hcd mptspi tg3 mptscsih mptbase usbcore nls_base
      [last unloaded: scsi_wait_scan]
      [  110.828638]
      [  110.828638] Pid: 11264, comm: setchecksum Not tainted 2.6.35.2 #1
      [  110.828638] RIP: 0010:[<ffffffff811247b7>]  [<ffffffff811247b7>]
      encode_attrs+0x1a/0x2a4
      [  110.828638] RSP: 0000:ffff88003bf5b878  EFLAGS: 00010296
      [  110.828638] RAX: ffff8800bddb48a8 RBX: ffff88003bf5bb18 RCX:
      0000000000000000
      [  110.828638] RDX: ffff8800be258800 RSI: 0000000000000000 RDI:
      ffff88003bf5b9f8
      [  110.828638] RBP: 0000000000000000 R08: ffff8800bddb48a8 R09:
      0000000000000004
      [  110.828638] R10: 0000000000000003 R11: ffff8800be779000 R12:
      ffff8800be258800
      [  110.828638] R13: ffff88003bf5b9f8 R14: ffff88003bf5bb20 R15:
      ffff8800be258800
      [  110.828638] FS:  0000000000000000(0000) GS:ffff880041e00000(0063)
      knlGS:00000000556bd6b0
      [  110.828638] CS:  0010 DS: 002b ES: 002b CR0: 000000008005003b
      [  110.828638] CR2: 0000000000000000 CR3: 00000000be8ef000 CR4:
      00000000000006e0
      [  110.828638] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
      0000000000000000
      [  110.828638] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
      0000000000000400
      [  110.828638] Process setchecksum (pid: 11264, threadinfo
      ffff88003bf5a000, task ffff88003f232210)
      [  110.828638] Stack:
      [  110.828638]  0000000000000000 ffff8800bfbcf920 0000000000000000
      0000000000000ffe
      [  110.828638] <0> 0000000000000000 0000000000000000 0000000000000000
      0000000000000000
      [  110.828638] <0> 0000000000000000 0000000000000000 0000000000000000
      0000000000000000
      [  110.828638] Call Trace:
      [  110.828638]  [<ffffffff81124c1f>] ? nfs4_xdr_enc_setattr+0x90/0xb4
      [  110.828638]  [<ffffffff81371161>] ? call_transmit+0x1c3/0x24a
      [  110.828638]  [<ffffffff813774d9>] ? __rpc_execute+0x78/0x22a
      [  110.828638]  [<ffffffff81371a91>] ? rpc_run_task+0x21/0x2b
      [  110.828638]  [<ffffffff81371b7e>] ? rpc_call_sync+0x3d/0x5d
      [  110.828638]  [<ffffffff8111e284>] ? _nfs4_do_setattr+0x11b/0x147
      [  110.828638]  [<ffffffff81109466>] ? nfs_init_locked+0x0/0x32
      [  110.828638]  [<ffffffff810ac521>] ? ifind+0x4e/0x90
      [  110.828638]  [<ffffffff8111e2fb>] ? nfs4_do_setattr+0x4b/0x6e
      [  110.828638]  [<ffffffff8111e634>] ? nfs4_do_open+0x291/0x3a6
      [  110.828638]  [<ffffffff8111ed81>] ? nfs4_open_revalidate+0x63/0x14a
      [  110.828638]  [<ffffffff811056c4>] ? nfs_open_revalidate+0xd7/0x161
      [  110.828638]  [<ffffffff810a2de4>] ? do_lookup+0x1a4/0x201
      [  110.828638]  [<ffffffff810a4733>] ? link_path_walk+0x6a/0x9d5
      [  110.828638]  [<ffffffff810a42b6>] ? do_last+0x17b/0x58e
      [  110.828638]  [<ffffffff810a5fbe>] ? do_filp_open+0x1bd/0x56e
      [  110.828638]  [<ffffffff811cd5e0>] ? _atomic_dec_and_lock+0x30/0x48
      [  110.828638]  [<ffffffff810a9b1b>] ? dput+0x37/0x152
      [  110.828638]  [<ffffffff810ae063>] ? alloc_fd+0x69/0x10a
      [  110.828638]  [<ffffffff81099f39>] ? do_sys_open+0x56/0x100
      [  110.828638]  [<ffffffff81027a22>] ? ia32_sysret+0x0/0x5
      [  110.828638] Code: 83 f1 01 e8 f5 ca ff ff 48 83 c4 50 5b 5d 41 5c c3 41
      57 41 56 41 55 49 89 fd 41 54 49 89 d4 55 48 89 f5 53 48 81 ec 18 01 00 00
      <8b> 06 89 c2 83 e2 08 83 fa 01 19 db 83 e3 f8 83 c3 18 a8 01 8d
      [  110.828638] RIP  [<ffffffff811247b7>] encode_attrs+0x1a/0x2a4
      [  110.828638]  RSP <ffff88003bf5b878>
      [  110.828638] CR2: 0000000000000000
      [  112.840396] ---[ end trace 95282e83fd77358f ]---
      
      We need to ensure that the O_EXCL flag is turned off if the user doesn't
      set O_CREAT.
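
      As a minimal sketch of that rule (the helper name and call site are
      illustrative, not the literal patch; kernel fcntl flag definitions assumed):

          /* Hedged sketch: O_EXCL only means something together with O_CREAT,
           * so drop it before the flags reach the NFSv4 open/setattr path. */
          static int sanitize_open_flags(int flags)
          {
                  if (!(flags & O_CREAT))
                          flags &= ~O_EXCL;
                  return flags;
          }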
      
      Cc: stable@kernel.org
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      0a377cff
    •
      Merge branch 'fix/asoc' into for-linus · 2ea1ef57
      Committed by Takashi Iwai
      2ea1ef57
    •
      Merge branch 'fix/hda' into for-linus · 76165a30
      Committed by Takashi Iwai
      76165a30
    •
      ALSA: emu10k1 - delay the PCM interrupts (add pcm_irq_delay parameter) · 56385a12
      Committed by Jaroslav Kysela
      With some hardware combinations, the PCM interrupts are acknowledged by
      the emu10k1 chip before the period boundary. The mid-level PCM code
      gets confused and the playback stream is interrupted.
      
      It seems that shifting the interrupt processing by 2 samples is enough
      to fix this issue. This default value does not harm other,
      non-affected hardware.
      
      More information: Kernel bugzilla bug#16300
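
      A hedged sketch of how such a module option is typically declared (the
      parameter name follows the subject line; the exact identifier and the
      wiring inside the emu10k1 driver may differ):

          /* Sketch only: expose the delay as a module parameter and apply it
           * when programming the position at which the period interrupt fires. */
          static unsigned int pcm_irq_delay = 2;  /* samples; default from this fix */
          module_param(pcm_irq_delay, uint, 0444);
          MODULE_PARM_DESC(pcm_irq_delay, "Delay in samples before PCM period interrupts");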
      
      [A compile warning fixed by tiwai]
      Signed-off-by: Jaroslav Kysela <perex@perex.cz>
      Cc: <stable@kernel.org>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      56385a12
    •
      fs: brlock vfsmount_lock · 99b7db7b
      Committed by Nick Piggin
      fs: brlock vfsmount_lock
      
      Use a brlock for the vfsmount lock. It must be taken for write whenever
      modifying the mount hash or associated fields, and may be taken for read when
      performing mount hash lookups.
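
      A hedged sketch of that discipline, using the br_read_lock()/br_write_lock()
      helpers introduced later in this series; the sketch_ functions and their
      bodies are illustrative, not the actual fs/namespace.c hunks:

          /* Lookup side: the cheap, per-cpu read lock is enough. */
          static struct vfsmount *sketch_lookup_child(struct vfsmount *mnt,
                                                      struct dentry *dentry)
          {
                  struct vfsmount *child;

                  br_read_lock(vfsmount_lock);
                  child = __lookup_mnt(mnt, dentry, 1);
                  br_read_unlock(vfsmount_lock);
                  return child;
          }

          /* Modification side: take every CPU's lock while touching the hash. */
          static void sketch_hash_mnt(struct vfsmount *mnt, struct list_head *head)
          {
                  br_write_lock(vfsmount_lock);
                  list_add_tail(&mnt->mnt_hash, head);
                  br_write_unlock(vfsmount_lock);
          }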
      
      A new lock is added for the mnt-id allocator, so it doesn't need to take
      the heavy vfsmount write-lock.
      
      The number of atomics should remain the same for fastpath rlock cases, though
      code would be slightly slower due to per-cpu access. Scalability is not much
      improved in common cases yet, due to other locks (ie. dcache_lock) getting
      in the way. However path lookups crossing mountpoints should be one case where
      scalability is improved (currently requiring the global lock).
      
      The slowpath is slower due to use of brlock. On a 64 core, 64 socket, 32 node
      Altix system (high latency to remote nodes), a simple umount microbenchmark
      (mount --bind mnt mnt2 ; umount mnt2 loop 1000 times), before this patch it
      took 6.8s, afterwards took 7.1s, about 5% slower.
      
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      99b7db7b
    •
      fs: scale files_lock · 6416ccb7
      Committed by Nick Piggin
      fs: scale files_lock
      
      Improve scalability of files_lock by adding per-cpu, per-sb files lists,
      protected with an lglock. The lglock provides fast access to the per-cpu lists
      to add and remove files. It also provides a snapshot of all the per-cpu lists
      (although this is very slow).
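
      A hedged sketch of the add/remove helpers this describes, assuming the
      lg_local_lock()/lg_local_lock_cpu() primitives from the lglock patch and
      per-cpu s_files lists; the names and field layout are illustrative:

          /* Add: only this CPU's list and its per-cpu lock are touched. */
          static void sketch_file_sb_list_add(struct file *file, struct super_block *sb)
          {
                  lg_local_lock(files_lglock);
                  file->f_sb_list_cpu = smp_processor_id();
                  list_add(&file->f_u.fu_list,
                           per_cpu_ptr(sb->s_files, smp_processor_id()));
                  lg_local_unlock(files_lglock);
          }

          /* Remove: lock only the CPU list the file was originally added on. */
          static void sketch_file_sb_list_del(struct file *file)
          {
                  if (!list_empty(&file->f_u.fu_list)) {
                          lg_local_lock_cpu(files_lglock, file->f_sb_list_cpu);
                          list_del_init(&file->f_u.fu_list);
                          lg_local_unlock_cpu(files_lglock, file->f_sb_list_cpu);
                  }
          }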
      
      One difficulty with this approach is that a file can be removed from the list
      by another CPU. We must track which per-cpu list the file is on with a new
      variable in the file struct (packed into a hole on 64-bit archs). Scalability
      could suffer if files are frequently removed from different cpu's list.
      
      However loads with frequent removal of files imply short interval between
      adding and removing the files, and the scheduler attempts to avoid moving
      processes too far away. Also, even in the case of cross-CPU removal, the
      hardware has much more opportunity to parallelise cacheline transfers with N
      cachelines than with 1.
      
      A worst-case test of 1 CPU allocating files subsequently being freed by N CPUs
      degenerates to contending on a single lock, which is no worse than before. When
      more than one CPU are allocating files, even if they are always freed by
      different CPUs, there will be more parallelism than the single-lock case.
      
      Testing results:
      
      On a 2 socket, 8 core opteron, I measure the number of times the lock is taken
      to remove the file, the number of times it is removed by the same CPU that
      added it, and the number of times it is removed by the same node that added it.
      
      Booting:    locks=  25049 cpu-hits=  23174 (92.5%) node-hits=  23945 (95.6%)
      kbuild -j16 locks=2281913 cpu-hits=2208126 (96.8%) node-hits=2252674 (98.7%)
      dbench 64   locks=4306582 cpu-hits=4287247 (99.6%) node-hits=4299527 (99.8%)
      
      So a file is removed from the same CPU it was added by over 90% of the time.
      It remains within the same node 95% of the time.
      
      Tim Chen ran some numbers for a 64 thread Nehalem system performing a compile.
      
                      throughput
      2.6.34-rc2      24.5
      +patch          24.9
      
                      us      sys     idle    IO wait (in %)
      2.6.34-rc2      51.25   28.25   17.25   3.25
      +patch          53.75   18.5    19      8.75
      
      So significantly less CPU time spent in kernel code, higher idle time and
      slightly higher throughput.
      
      Single threaded performance difference was within the noise of microbenchmarks.
      That is not to say the penalty does not exist: the code is larger and requires
      more memory accesses, so it will be slightly slower.
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      6416ccb7
    •
      lglock: introduce special lglock and brlock spin locks · 2dc91abe
      Committed by Nick Piggin
      lglock: introduce special lglock and brlock spin locks
      
      This patch introduces "local-global" locks (lglocks). These can be used to:
      
      - Provide fast exclusive access to per-CPU data, with exclusive access to
        another CPU's data allowed but possibly subject to contention, and to provide
        very slow exclusive access to all per-CPU data.
      - Or to provide very fast and scalable read serialisation, and to provide
        very slow exclusive serialisation of data (not necessarily per-CPU data).
      
      Brlocks are also implemented as a short-hand notation for the latter use
      case.
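
      A hedged usage sketch; the DECLARE/DEFINE macros and the lg_*/br_* calls
      are the ones this patch introduces, while the surrounding functions and
      data are illustrative:

          DECLARE_LGLOCK(my_lglock);
          DEFINE_LGLOCK(my_lglock);

          /* Fast path: exclusive access to this CPU's share of the data. */
          static void sketch_local_update(void)
          {
                  lg_local_lock(my_lglock);
                  /* ... touch only this CPU's data ... */
                  lg_local_unlock(my_lglock);
          }

          /* Slow path: exclusive access to every CPU's data. */
          static void sketch_global_scan(void)
          {
                  lg_global_lock(my_lglock);
                  /* ... walk or rewrite all per-CPU data ... */
                  lg_global_unlock(my_lglock);
          }

          /* Brlock shorthand: read maps to the local lock, write to the global one. */
          DECLARE_BRLOCK(my_brlock);
          DEFINE_BRLOCK(my_brlock);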
      
      Thanks to Paul for local/global naming convention.
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      2dc91abe
    •
      tty: fix fu_list abuse · d996b62a
      Committed by Nick Piggin
      tty: fix fu_list abuse
      
      tty code abuses fu_list, which causes a bug in remount,ro handling.
      
      If a tty device node is opened on a filesystem and the last link to the inode
      is then removed, the filesystem will be allowed to be remounted readonly. This is
      because fs_may_remount_ro does not find the 0 link tty inode on the file sb
      list (because the tty code incorrectly removed it to use for its own purpose).
      This can result in a filesystem with errors after it is marked "clean".
      
      Taking the idea from Christoph's initial patch, allocate a tty private struct
      at file->private_data and put our required list fields in there, linking
      file and tty. This makes tty nodes behave the same way as other device nodes,
      avoids meddling with the vfs, and avoids this bug.
      
      The error handling is not trivial in the tty code, so for this bugfix, I take
      the simple approach of using __GFP_NOFAIL and don't worry about memory errors.
      This is not a problem because our allocator doesn't fail small allocs as a rule
      anyway. So proper error handling is left as an exercise for tty hackers.
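
      A hedged sketch of that approach; the struct and helper names mirror the
      description rather than the exact diff:

          /* A small private object links the open file to the tty, so the VFS's
           * fu_list is no longer (ab)used for tty bookkeeping. */
          struct tty_file_private {
                  struct tty_struct *tty;
                  struct file *file;
                  struct list_head list;          /* hangs off tty->tty_files */
          };

          static void sketch_tty_add_file(struct tty_struct *tty, struct file *file)
          {
                  struct tty_file_private *priv;

                  /* __GFP_NOFAIL: see the note on error handling above. */
                  priv = kmalloc(sizeof(*priv), GFP_KERNEL | __GFP_NOFAIL);
                  priv->tty = tty;
                  priv->file = file;
                  file->private_data = priv;

                  spin_lock(&tty_files_lock);
                  list_add(&priv->list, &tty->tty_files);
                  spin_unlock(&tty_files_lock);
          }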
      
      [ Arguably the filesystem's device inode would ideally be divorced from the
      driver's pseudo inode when it is opened, but in practice it's not clear whether
      that will ever be worth implementing. ]
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      d996b62a
    •
      fs: cleanup files_lock locking · ee2ffa0d
      Committed by Nick Piggin
      fs: cleanup files_lock locking
      
      Lock tty_files with a new spinlock, tty_files_lock; provide helpers to
      manipulate the per-sb files list; unexport the files_lock spinlock.
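
      A hedged sketch of the helpers this describes; files_lock itself stays
      private to fs/file_table.c, and the exact signatures may differ:

          /* tty side: its own lock for the tty_files list. */
          static DEFINE_SPINLOCK(tty_files_lock);

          /* per-sb list helpers, so callers never touch files_lock directly. */
          void file_sb_list_add(struct file *file, struct super_block *sb)
          {
                  spin_lock(&files_lock);
                  list_add(&file->f_u.fu_list, &sb->s_files);
                  spin_unlock(&files_lock);
          }

          void file_sb_list_del(struct file *file)
          {
                  if (!list_empty(&file->f_u.fu_list)) {
                          spin_lock(&files_lock);
                          list_del_init(&file->f_u.fu_list);
                          spin_unlock(&files_lock);
                  }
          }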
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      ee2ffa0d
    •
      fs: remove extra lookup in __lookup_hash · b04f784e
      Committed by Nick Piggin
      fs: remove extra lookup in __lookup_hash
      
      Optimize lookup for create operations, where finding no existing dentry is
      often the common case. In cases where a dentry does exist, such as unlink,
      the added overhead is much smaller than the overhead removed.
      
      Also, move comments about __d_lookup raciness to the __d_lookup call site.
      d_lookup is intuitive; __d_lookup is what needs commenting. So in that same
      vein, add kerneldoc comments to __d_lookup and clean up some of the comments:
      
      - We are interested in how the RCU lookup works here, particularly with
        renames. Make that explicit, and point to the document where it is explained
        in more detail.
      - RCU is pretty standard now, and macros make implementations pretty mindless.
        If we want to know about RCU barrier details, we look in RCU code.
      - Delete some boring legacy comments because we don't care much about how the
        code used to work, more about the interesting parts of how it works now. So
        comments about lazy LRU may be interesting, but would be better placed in the
        LRU or refcount management code.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b04f784e
    •
      fs: fs_struct rwlock to spinlock · 2a4419b5
      Committed by Nick Piggin
      fs: fs_struct rwlock to spinlock
      
      struct fs_struct.lock is an rwlock with the read-side used to protect root and
      pwd members while taking references to them. Taking a reference to a path
      typically requires just 2 atomic ops, so the critical section is very small.
      Parallel read-side operations would have cacheline contention on the lock, the
      dentry, and the vfsmount cachelines, so the rwlock is unlikely to ever give a
      real parallelism increase.
      
      Replace it with a spinlock to avoid one or two atomic operations in the typical
      path lookup fastpath.
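
      A hedged sketch of the read-side fastpath after the conversion; the helper
      name is made up, but the pattern (short spinlocked copy plus path_get) is
      what the paragraph above describes:

          static void sketch_get_fs_pwd(struct fs_struct *fs, struct path *pwd)
          {
                  spin_lock(&fs->lock);   /* was read_lock(&fs->lock) */
                  *pwd = fs->pwd;
                  path_get(pwd);          /* the couple of atomic ops mentioned above */
                  spin_unlock(&fs->lock);
          }
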
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      2a4419b5
    •
      apparmor: use task path helpers · 44672e4f
      Committed by Nick Piggin
      apparmor: use task path helpers
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      44672e4f
    •
      fs: dentry allocation consolidation · baa03890
      Committed by Nick Piggin
      fs: dentry allocation consolidation
      
      There are 2 duplicate copies of code in dentry allocation in path lookup.
      Consolidate them into a single function.
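
      A hedged sketch of the consolidated allocate-and-lookup step the two call
      sites share; the real helper's name and details may differ:

          static struct dentry *sketch_d_alloc_and_lookup(struct dentry *parent,
                                                          struct qstr *name,
                                                          struct nameidata *nd)
          {
                  struct inode *dir = parent->d_inode;
                  struct dentry *dentry, *old;

                  if (unlikely(IS_DEADDIR(dir)))  /* no children for a dead directory */
                          return ERR_PTR(-ENOENT);

                  dentry = d_alloc(parent, name);
                  if (unlikely(!dentry))
                          return ERR_PTR(-ENOMEM);

                  old = dir->i_op->lookup(dir, dentry, nd);
                  if (unlikely(old)) {            /* the fs supplied its own dentry */
                          dput(dentry);
                          dentry = old;
                  }
                  return dentry;
          }
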
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      baa03890