Commit 766ece54 authored by David Sterba

btrfs: merge btrfs_set_lock_blocking_rw with its caller

The last caller that does not have a fixed value of lock is
btrfs_set_path_blocking, which actually does the same conditional switch
by the lock type, so we can merge the branches together and remove the
helper.
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: David Sterba <dsterba@suse.com>
Parent 970e74d9
@@ -46,11 +46,18 @@ noinline void btrfs_set_path_blocking(struct btrfs_path *p)
 	for (i = 0; i < BTRFS_MAX_LEVEL; i++) {
 		if (!p->nodes[i] || !p->locks[i])
 			continue;
-		btrfs_set_lock_blocking_rw(p->nodes[i], p->locks[i]);
-		if (p->locks[i] == BTRFS_READ_LOCK)
+		/*
+		 * If we currently have a spinning reader or writer lock this
+		 * will bump the count of blocking holders and drop the
+		 * spinlock.
+		 */
+		if (p->locks[i] == BTRFS_READ_LOCK) {
+			btrfs_set_lock_blocking_read(p->nodes[i]);
 			p->locks[i] = BTRFS_READ_LOCK_BLOCKING;
-		else if (p->locks[i] == BTRFS_WRITE_LOCK)
+		} else if (p->locks[i] == BTRFS_WRITE_LOCK) {
+			btrfs_set_lock_blocking_write(p->nodes[i]);
 			p->locks[i] = BTRFS_WRITE_LOCK_BLOCKING;
+		}
 	}
 }
@@ -39,16 +39,4 @@ static inline void btrfs_tree_unlock_rw(struct extent_buffer *eb, int rw)
 		BUG();
 }
 
-/*
- * If we currently have a spinning reader or writer lock (indicated by the rw
- * flag) this will bump the count of blocking holders and drop the spinlock.
- */
-static inline void btrfs_set_lock_blocking_rw(struct extent_buffer *eb, int rw)
-{
-	if (rw == BTRFS_WRITE_LOCK)
-		btrfs_set_lock_blocking_write(eb);
-	else if (rw == BTRFS_READ_LOCK)
-		btrfs_set_lock_blocking_read(eb);
-}
-
 #endif
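
Usage note (not part of the commit): a minimal sketch of what a call site with a
fixed lock type looks like once the wrapper is removed. The function name below is
hypothetical; btrfs_set_lock_blocking_write(), struct extent_buffer and
BTRFS_WRITE_LOCK are taken from the diff above.

	/*
	 * Illustrative sketch only (hypothetical caller): code that always
	 * holds a write lock calls the type-specific helper directly instead
	 * of going through btrfs_set_lock_blocking_rw(eb, BTRFS_WRITE_LOCK).
	 */
	static void example_make_eb_blocking(struct extent_buffer *eb)
	{
		/* old: btrfs_set_lock_blocking_rw(eb, BTRFS_WRITE_LOCK); */
		btrfs_set_lock_blocking_write(eb);
	}

Only btrfs_set_path_blocking() still chose the lock type at runtime, which is why
the conditional moved into that function and the helper could go away.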