- 18 March 2016, 1 commit

Authored by Chen Gang
hlist_bl_unhashed() and hlist_bl_empty() are both boolean functions, so return bool instead of int.

Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
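For reference, a sketch of the two predicates after this change, close to but not a verbatim copy of include/linux/list_bl.h (the READ_ONCE() in hlist_bl_empty() comes from the November 2015 change described further down):

    static inline bool hlist_bl_unhashed(const struct hlist_bl_node *h)
    {
        return !h->pprev;
    }

    static inline bool hlist_bl_empty(const struct hlist_bl_head *h)
    {
        return !((unsigned long)READ_ONCE(h->first) & ~LIST_BL_LOCKMASK);
    }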
- 24 November 2015, 1 commit

Authored by Paul E. McKenney
Most of the list-empty-check macros (list_empty(), hlist_empty(), hlist_bl_empty(), and hlist_nulls_empty()) use an unadorned load to check the list header. Given that these macros are sometimes invoked without the protection of a lock, this is not sufficient. This commit therefore adds READ_ONCE() calls to them. This commit does not touch llist_empty() because it already has the needed ACCESS_ONCE().

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
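The shape of the change, using list_empty() as the representative case (a sketch, on the assumption that the other empty-checks were converted the same way):

    /* After this change: READ_ONCE() guarantees a single, untorn load of
     * ->next.  The previous form was a plain "return head->next == head;",
     * which the compiler could reload or tear when no lock is held. */
    static inline int list_empty(const struct list_head *head)
    {
        return READ_ONCE(head->next) == head;
    }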
- 07 October 2015, 1 commit

Authored by Paul E. McKenney
The various RCU list-deletion macros (list_del_rcu(), hlist_del_init_rcu(), hlist_del_rcu(), hlist_bl_del_init_rcu(), hlist_bl_del_rcu(), hlist_nulls_del_init_rcu(), and hlist_nulls_del_rcu()) do plain stores into the ->next pointer of the preceding list element. Unfortunately, the compiler is within its rights to (for example) use byte-at-a-time writes to update the pointer, which would fatally confuse concurrent readers. This patch therefore adds the needed WRITE_ONCE() macros. KernelThreadSanitizer (KTSAN) reported the __hlist_del() issue, which is a problem when __hlist_del() is invoked by hlist_del_rcu().

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
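As an illustration, __hlist_del() (the helper KTSAN flagged) looks roughly like this after the change; the WRITE_ONCE() on the predecessor's ->next is the part this commit adds:

    static inline void __hlist_del(struct hlist_node *n)
    {
        struct hlist_node *next = n->next;
        struct hlist_node **pprev = n->pprev;

        /* The predecessor's ->next may be read concurrently by RCU
         * readers, so unlinking must be a single, untorn store. */
        WRITE_ONCE(*pprev, next);
        if (next)
            next->pprev = pprev;
    }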
- 13 March 2013, 1 commit

Authored by Steven Whitehouse
Abhi noticed that we were getting a complaint from the RCU subsystem about access to an RCU-protected list under the write-side bit lock. This commit adds additional annotation so that both the RCU read lock and the write-side bit lock are checked before a message is printed.

Reported-by: Abhijith Das <adas@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Tested-by: Abhijith Das <adas@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
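The patch itself is not reproduced above, but the pattern it describes is the usual lockdep-aware dereference. A hypothetical sketch (the macro name and exact placement are assumptions, not the upstream diff): the access is considered legal, and no warning is printed, when either the RCU read lock or the bucket's bit-0 lock is held.

    /* Hypothetical illustration of the described annotation. */
    #define hlist_bl_first_rcu_checked(h)                      \
        rcu_dereference_check((h)->first,                      \
            rcu_read_lock_held() ||                            \
            bit_spin_is_locked(0, (unsigned long *)(h)))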
- 26 April 2011, 1 commit

Authored by Christoph Hellwig
Now that the whole dcache_hash_bucket crap is gone, go all the way and also remove the weird locking layering violations for locking the hash buckets. Add hlist_bl_lock/unlock helpers to move the locking into the list abstraction instead of requiring each caller to open code it. After all, allowing for the bit locks is the whole point of these helpers over the plain hlist variant.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
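The helpers this commit adds are thin wrappers around a bit spinlock on bit 0 of the head pointer; a sketch matching their upstream shape in include/linux/list_bl.h:

    static inline void hlist_bl_lock(struct hlist_bl_head *b)
    {
        bit_spin_lock(0, (unsigned long *)b);
    }

    static inline void hlist_bl_unlock(struct hlist_bl_head *b)
    {
        __bit_spin_unlock(0, (unsigned long *)b);
    }

Callers such as the dcache then take the per-bucket lock through the list API rather than open-coding bit_spin_lock() on the head pointer.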
- 14 January 2011, 2 commits

Authored by Russell King
__d_rehash is dereferencing an almost-NULL pointer on my ARM926, with CONFIG_SMP=n and CONFIG_DEBUG_SPINLOCK=y. The faulting instruction is:

    strne r3, [r2, #4]

and as can be seen from the register dump below, r2 is 0x00000001, hence the faulting 0x00000005 address.

__d_rehash is essentially:

    spin_lock_bucket(b);
    entry->d_flags &= ~DCACHE_UNHASHED;
    hlist_bl_add_head_rcu(&entry->d_hash, &b->head);
    spin_unlock_bucket(b);

which is:

    bit_spin_lock(0, (unsigned long *)&b->head.first);
    entry->d_flags &= ~DCACHE_UNHASHED;
    hlist_bl_add_head_rcu(&entry->d_hash, &b->head);
    __bit_spin_unlock(0, (unsigned long *)&b->head.first);

bit_spin_lock(0, ptr) sets bit 0 of *ptr, in this case b->head.first, if CONFIG_SMP or CONFIG_DEBUG_SPINLOCK is set:

    #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
        while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
            while (test_bit(bitnum, addr)) {
                preempt_enable();
                cpu_relax();
                preempt_disable();
            }
        }
    #endif

So b->head.first starts off NULL, and becomes non-NULL (address 1). hlist_bl_add_head_rcu() does this:

    static inline void hlist_bl_add_head_rcu(struct hlist_bl_node *n,
                    struct hlist_bl_head *h)
    {
        first = hlist_bl_first(h);
        n->next = first;
        if (first)
            first->pprev = &n->next;

It is the store to first->pprev which is faulting. hlist_bl_first():

    static inline struct hlist_bl_node *hlist_bl_first(struct hlist_bl_head *h)
    {
        return (struct hlist_bl_node *)
            ((unsigned long)h->first & ~LIST_BL_LOCKMASK);
    }

but:

    #if defined(CONFIG_SMP)
    #define LIST_BL_LOCKMASK 1UL
    #else
    #define LIST_BL_LOCKMASK 0UL
    #endif

So we have one piece of code which sets bit 0 of addresses, and another piece of code which doesn't clear it before dereferencing the pointer if !CONFIG_SMP && CONFIG_DEBUG_SPINLOCK.

With the patch below, I can again successfully boot the kernel on my Versatile PB/926 platform.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
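The fix (paraphrased here; the patch itself is not reproduced above) is to define LIST_BL_LOCKMASK under the same condition that bit_spin_lock() uses, so the lock bit is masked off whenever it can actually be set:

    /* Keep the mask in sync with when bit_spin_lock() really sets bit 0. */
    #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
    #define LIST_BL_LOCKMASK 1UL
    #else
    #define LIST_BL_LOCKMASK 0UL
    #endif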
Authored by Nick Piggin
Po-Yu Chuang <ratbert.chuang@gmail.com> noticed that hlist_bl_set_first could crash on a UP system when LIST_BL_LOCKMASK is 0, because

    LIST_BL_BUG_ON(!((unsigned long)h->first & LIST_BL_LOCKMASK));

always evaluates to true. Fix the expression, and also avoid a dependency between the bit spinlock implementation and the list_bl code (the list code shouldn't know anything except that bit 0 is set when adding and removing elements). Eventually, if a good use case comes up, we might use this list to store one or more arbitrary bits of data, so it really shouldn't be tied to locking either, but for now the checks are helpful for debugging.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
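A sketch of the corrected assertion in hlist_bl_set_first(), matching the upstream header: the old form masks with LIST_BL_LOCKMASK, so on UP (mask 0) the inner expression is always 0 and the BUG_ON always fires; the corrected form compares the masked value against the mask itself, so on UP it degenerates to 0 != 0 and never fires, while still verifying the lock bit when the mask is 1.

    /* Replaces the always-true-on-UP form quoted above. */
    LIST_BL_BUG_ON(((unsigned long)h->first & LIST_BL_LOCKMASK) !=
                            LIST_BL_LOCKMASK);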
- 07 January 2011, 1 commit

Authored by Nick Piggin
Introduce a type of hlist that can support the use of the lowest bit in the hlist_head. This will subsequently be used to implement per-bucket bit spinlocks for the inode and dentry hashes, and may be useful in other cases such as network hashes.

Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
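As described, the head is a single pointer whose bit 0 is reserved (here, for a per-bucket bit spinlock) and the nodes are doubly linked; a sketch of the resulting types:

    struct hlist_bl_head {
        struct hlist_bl_node *first;    /* bit 0 doubles as the lock/flag bit */
    };

    struct hlist_bl_node {
        struct hlist_bl_node *next;
        struct hlist_bl_node **pprev;
    };

Because bit 0 is overloaded, users must mask it off (as hlist_bl_first() does in the entry above) before dereferencing the head pointer.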