Commit add09690 authored by Linus Torvalds

Merge branch 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mfasheh/ocfs2

* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mfasheh/ocfs2: (32 commits)
  [PATCH] ocfs2: zero_user_page conversion
  ocfs2: Support xfs style space reservation ioctls
  ocfs2: support for removing file regions
  ocfs2: update truncate handling of partial clusters
  ocfs2: btree support for removal of arbitrary extents
  ocfs2: Support creation of unwritten extents
  ocfs2: support writing of unwritten extents
  ocfs2: small cleanup of ocfs2_write_begin_nolock()
  ocfs2: btree changes for unwritten extents
  ocfs2: abstract btree growing calls
  ocfs2: use all extent block suballocators
  ocfs2: plug truncate into cached dealloc routines
  ocfs2: simplify deallocation locking
  ocfs2: harden buffer check during mapping of page blocks
  ocfs2: shared writeable mmap
  ocfs2: factor out write aops into nolock variants
  ocfs2: rework ocfs2_buffered_write_cluster()
  ocfs2: take ip_alloc_sem during entire truncate
  ocfs2: Add "preferred slot" mount option
  [KJ PATCH] Replacing memset(<addr>,0,PAGE_SIZE) with clear_page() in fs/ocfs2/dlm/dlmrecovery.c
  ...
@@ -238,6 +238,8 @@ config_item_type.
         struct config_group *(*make_group)(struct config_group *group,
                                            const char *name);
         int (*commit_item)(struct config_item *item);
+        void (*disconnect_notify)(struct config_group *group,
+                                  struct config_item *item);
         void (*drop_item)(struct config_group *group,
                           struct config_item *item);
 };
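
For illustration, a minimal sketch of what a client's group operations might look like with the new callback (the mysys_* names are hypothetical, not part of this patch):

    #include <linux/configfs.h>

    /* Called after the item is gone from the filesystem view but
     * before it is unlinked from its parent group. */
    static void mysys_disconnect_notify(struct config_group *group,
                                        struct config_item *item)
    {
            /* Cleanup that still needs the hierarchy goes here.
             * Do NOT drop the item's reference; that is still
             * drop_item()'s job. */
    }

    static void mysys_drop_item(struct config_group *group,
                                struct config_item *item)
    {
            config_item_put(item);  /* drop make_item()'s reference */
    }

    static struct configfs_group_operations mysys_group_ops = {
            .disconnect_notify = mysys_disconnect_notify,
            .drop_item         = mysys_drop_item,
    };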
@@ -268,6 +270,16 @@ the item in other threads, the memory is safe. It may take some time
 for the item to actually disappear from the subsystem's usage. But it
 is gone from configfs.
 
+When drop_item() is called, the item's linkage has already been torn
+down. It no longer has a reference on its parent and has no place in
+the item hierarchy. If a client needs to do some cleanup before this
+teardown happens, the subsystem can implement the
+ct_group_ops->disconnect_notify() method. The method is called after
+configfs has removed the item from the filesystem view but before the
+item is removed from its parent group. Like drop_item(),
+disconnect_notify() is void and cannot fail. Client subsystems should
+not drop any references here, as they still must do it in drop_item().
+
 A config_group cannot be removed while it still has child items. This
 is implemented in the configfs rmdir(2) code. ->drop_item() will not be
 called, as the item has not been dropped. rmdir(2) will fail, as the
@@ -280,18 +292,18 @@ tells configfs to make the subsystem appear in the file tree.
         struct configfs_subsystem {
                 struct config_group     su_group;
-                struct semaphore        su_sem;
+                struct mutex            su_mutex;
         };
 
         int configfs_register_subsystem(struct configfs_subsystem *subsys);
         void configfs_unregister_subsystem(struct configfs_subsystem *subsys);
 
-A subsystem consists of a toplevel config_group and a semaphore.
+A subsystem consists of a toplevel config_group and a mutex.
 The group is where child config_items are created. For a subsystem,
 this group is usually defined statically. Before calling
 configfs_register_subsystem(), the subsystem must have initialized the
 group via the usual group _init() functions, and it must also have
-initialized the semaphore.
+initialized the mutex.
 
 When the register call returns, the subsystem is live, and it
 will be visible via configfs. At that point, mkdir(2) can be called and
 the subsystem must be ready for it.
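
A minimal registration sketch under the new locking (mysys_* and mysys_type are hypothetical; the converted example code appears further down):

    static struct configfs_subsystem mysys_subsys = {
            .su_group = {
                    .cg_item = {
                            .ci_namebuf = "mysys",
                            .ci_type = &mysys_type,
                    },
            },
    };

    static int __init mysys_init(void)
    {
            config_group_init(&mysys_subsys.su_group);
            mutex_init(&mysys_subsys.su_mutex);
            return configfs_register_subsystem(&mysys_subsys);
    }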
@@ -303,7 +315,7 @@ subsystem/group and the simple_child item in configfs_example.c It
 shows a trivial object displaying and storing an attribute, and a simple
 group creating and destroying these children.
 
-[Hierarchy Navigation and the Subsystem Semaphore]
+[Hierarchy Navigation and the Subsystem Mutex]
 
 There is an extra bonus that configfs provides. The config_groups and
 config_items are arranged in a hierarchy due to the fact that they
@@ -314,19 +326,19 @@ and config_item->ci_parent structure members.
 
 A subsystem can navigate the cg_children list and the ci_parent pointer
 to see the tree created by the subsystem. This can race with configfs'
-management of the hierarchy, so configfs uses the subsystem semaphore to
+management of the hierarchy, so configfs uses the subsystem mutex to
 protect modifications. Whenever a subsystem wants to navigate the
 hierarchy, it must do so under the protection of the subsystem
-semaphore.
+mutex.
 
-A subsystem will be prevented from acquiring the semaphore while a newly
+A subsystem will be prevented from acquiring the mutex while a newly
 allocated item has not been linked into this hierarchy. Similarly, it
-will not be able to acquire the semaphore while a dropping item has not
+will not be able to acquire the mutex while a dropping item has not
 yet been unlinked. This means that an item's ci_parent pointer will
 never be NULL while the item is in configfs, and that an item will only
 be in its parent's cg_children list for the same duration. This allows
 a subsystem to trust ci_parent and cg_children while they hold the
-semaphore.
+mutex.
 
 [Item Aggregation Via symlink(2)]
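
For example, a walk over a group's children would now look roughly like this (a sketch; do_something() is hypothetical):

    struct config_item *item;

    mutex_lock(&subsys->su_mutex);
    list_for_each_entry(item, &group->cg_children, ci_entry) {
            /* ci_parent and cg_children are stable while we
             * hold su_mutex */
            do_something(item);
    }
    mutex_unlock(&subsys->su_mutex);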
@@ -386,6 +398,33 @@ As a consequence of this, default_groups cannot be removed directly via
 rmdir(2). They also are not considered when rmdir(2) on the parent
 group is checking for children.
 
+[Dependent Subsystems]
+
+Sometimes other drivers depend on particular configfs items. For
+example, ocfs2 mounts depend on a heartbeat region item. If that
+region item is removed with rmdir(2), the ocfs2 mount must BUG or go
+readonly. Not happy.
+
+configfs provides two additional API calls: configfs_depend_item() and
+configfs_undepend_item(). A client driver can call
+configfs_depend_item() on an existing item to tell configfs that it is
+depended on. configfs will then return -EBUSY from rmdir(2) for that
+item. When the item is no longer depended on, the client driver calls
+configfs_undepend_item() on it.
+
+These APIs cannot be called underneath any configfs callbacks, as
+they will conflict. They can block and allocate. A client driver
+probably shouldn't call them of its own accord. Rather, it should
+provide an API that external subsystems call.
+
+How does this work? Imagine the ocfs2 mount process. When it mounts,
+it asks for a heartbeat region item. This is done via a call into the
+heartbeat code. Inside the heartbeat code, the region item is looked
+up. Here, the heartbeat code calls configfs_depend_item(). If it
+succeeds, then heartbeat knows the region is safe to give to ocfs2.
+If it fails, it was being torn down anyway, and heartbeat can gracefully
+pass up an error.
+
 [Committable Items]
 
 NOTE: Committable items are currently unimplemented.
...
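A sketch of the client-provided API shape described above (the mysys_* names are hypothetical; only configfs_depend_item()/configfs_undepend_item() come from this patch):

    int mysys_get_resource(struct config_item *item)
    {
            int ret;

            /* May block and allocate; never call from a configfs
             * callback. */
            ret = configfs_depend_item(&mysys_subsys, item);
            if (ret)
                    return ret;  /* item was already being torn down */

            /* ... hand the resource to the external subsystem ... */
            return 0;
    }

    void mysys_put_resource(struct config_item *item)
    {
            configfs_undepend_item(&mysys_subsys, item);
    }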
@@ -453,7 +453,7 @@ static int __init configfs_example_init(void)
                 subsys = example_subsys[i];
 
                 config_group_init(&subsys->su_group);
-                init_MUTEX(&subsys->su_sem);
+                mutex_init(&subsys->su_mutex);
                 ret = configfs_register_subsystem(subsys);
                 if (ret) {
                         printk(KERN_ERR "Error %d while registering subsystem %s\n",
...
@@ -29,6 +29,7 @@
 struct configfs_dirent {
         atomic_t                s_count;
+        int                     s_dependent_count;
         struct list_head        s_sibling;
         struct list_head        s_children;
         struct list_head        s_links;
...
@@ -355,6 +355,10 @@ static int configfs_detach_prep(struct dentry *dentry)
                 /* Mark that we've taken i_mutex */
                 sd->s_type |= CONFIGFS_USET_DROPPING;
 
+                /*
+                 * Yup, recursive. If there's a problem, blame
+                 * deep nesting of default_groups
+                 */
                 ret = configfs_detach_prep(sd->s_dentry);
                 if (!ret)
                         continue;
@@ -562,7 +566,7 @@ static int populate_groups(struct config_group *group)
 /*
  * All of link_obj/unlink_obj/link_group/unlink_group require that
- * subsys->su_sem is held.
+ * subsys->su_mutex is held.
  */
 
 static void unlink_obj(struct config_item *item)
@@ -713,6 +717,28 @@ static void configfs_detach_group(struct config_item *item)
         configfs_detach_item(item);
 }
 
+/*
+ * After the item has been detached from the filesystem view, we are
+ * ready to tear it out of the hierarchy. Notify the client before
+ * we do that so they can perform any cleanup that requires
+ * navigating the hierarchy. A client does not need to provide this
+ * callback. The subsystem mutex MUST be held by the caller, and
+ * references must be valid for both items. It also assumes the
+ * caller has validated ci_type.
+ */
+static void client_disconnect_notify(struct config_item *parent_item,
+                                     struct config_item *item)
+{
+        struct config_item_type *type;
+
+        type = parent_item->ci_type;
+        BUG_ON(!type);
+
+        if (type->ct_group_ops && type->ct_group_ops->disconnect_notify)
+                type->ct_group_ops->disconnect_notify(to_config_group(parent_item),
+                                                      item);
+}
+
 /*
  * Drop the initial reference from make_item()/make_group()
  * This function assumes that reference is held on item
@@ -738,6 +764,239 @@ static void client_drop_item(struct config_item *parent_item,
         config_item_put(item);
 }
 
+#ifdef DEBUG
+static void configfs_dump_one(struct configfs_dirent *sd, int level)
+{
+        printk(KERN_INFO "%*s\"%s\":\n", level, " ", configfs_get_name(sd));
+
+#define type_print(_type) if (sd->s_type & _type) printk(KERN_INFO "%*s %s\n", level, " ", #_type);
+        type_print(CONFIGFS_ROOT);
+        type_print(CONFIGFS_DIR);
+        type_print(CONFIGFS_ITEM_ATTR);
+        type_print(CONFIGFS_ITEM_LINK);
+        type_print(CONFIGFS_USET_DIR);
+        type_print(CONFIGFS_USET_DEFAULT);
+        type_print(CONFIGFS_USET_DROPPING);
+#undef type_print
+}
+
+static int configfs_dump(struct configfs_dirent *sd, int level)
+{
+        struct configfs_dirent *child_sd;
+        int ret = 0;
+
+        configfs_dump_one(sd, level);
+
+        if (!(sd->s_type & (CONFIGFS_DIR|CONFIGFS_ROOT)))
+                return 0;
+
+        list_for_each_entry(child_sd, &sd->s_children, s_sibling) {
+                ret = configfs_dump(child_sd, level + 2);
+                if (ret)
+                        break;
+        }
+
+        return ret;
+}
+#endif
+
+/*
+ * configfs_depend_item() and configfs_undepend_item()
+ *
+ * WARNING: Do not call these from a configfs callback!
+ *
+ * This describes these functions and their helpers.
+ *
+ * Allow another kernel system to depend on a config_item. If this
+ * happens, the item cannot go away until the dependent can live without
+ * it. The idea is to give client modules as simple an interface as
+ * possible. When a system asks them to depend on an item, they just
+ * call configfs_depend_item(). If the item is live and the client
+ * driver is in good shape, we'll happily do the work for them.
+ *
+ * Why is the locking complex? Because configfs uses the VFS to handle
+ * all locking, but this function is called outside the normal
+ * VFS->configfs path. So it must take VFS locks to prevent the
+ * VFS->configfs stuff (configfs_mkdir(), configfs_rmdir(), etc). This is
+ * why you can't call these functions underneath configfs callbacks.
+ *
+ * Note, btw, that this can be called at *any* time, even when a configfs
+ * subsystem isn't registered, or when configfs is loading or unloading.
+ * Just like configfs_register_subsystem(). So we take the same
+ * precautions. We pin the filesystem. We lock each i_mutex _in_order_
+ * on our way down the tree. If we can find the target item in the
+ * configfs tree, it must be part of the subsystem tree as well, so we
+ * do not need the subsystem mutex. Holding the i_mutex chain locks
+ * out mkdir() and rmdir(), who might be racing us.
+ */
+
+/*
+ * configfs_depend_prep()
+ *
+ * Only subdirectories count here. Files (CONFIGFS_NOT_PINNED) are
+ * attributes. This is similar but not the same as configfs_detach_prep().
+ * Note that configfs_detach_prep() expects the parent to be locked when it
+ * is called, but we lock the parent *inside* configfs_depend_prep(). We
+ * do that so we can unlock it if we find nothing.
+ *
+ * Here we do a depth-first search of the dentry hierarchy looking for
+ * our object. We take i_mutex on each step of the way down. IT IS
+ * ESSENTIAL THAT i_mutex LOCKING IS ORDERED. If we come back up a branch,
+ * we'll drop the i_mutex.
+ *
+ * If the target is not found, -ENOENT is bubbled up and we have released
+ * all locks. If the target was found, the locks will be cleared by
+ * configfs_depend_rollback().
+ *
+ * This adds a requirement that all config_items be unique!
+ *
+ * This is recursive because the locking traversal is tricky. There isn't
+ * much on the stack, though, so folks that need this function - be careful
+ * about your stack! Patches will be accepted to make it iterative.
+ */
+static int configfs_depend_prep(struct dentry *origin,
+                                struct config_item *target)
+{
+        struct configfs_dirent *child_sd, *sd = origin->d_fsdata;
+        int ret = 0;
+
+        BUG_ON(!origin || !sd);
+
+        /* Lock this guy on the way down */
+        mutex_lock(&sd->s_dentry->d_inode->i_mutex);
+        if (sd->s_element == target)  /* Boo-yah */
+                goto out;
+
+        list_for_each_entry(child_sd, &sd->s_children, s_sibling) {
+                if (child_sd->s_type & CONFIGFS_DIR) {
+                        ret = configfs_depend_prep(child_sd->s_dentry,
+                                                   target);
+                        if (!ret)
+                                goto out;  /* Child path boo-yah */
+                }
+        }
+
+        /* We looped all our children and didn't find target */
+        mutex_unlock(&sd->s_dentry->d_inode->i_mutex);
+        ret = -ENOENT;
+
+out:
+        return ret;
+}
+
+/*
+ * This is ONLY called if configfs_depend_prep() did its job. So we can
+ * trust the entire path from item back up to origin.
+ *
+ * We walk backwards from item, unlocking each i_mutex. We finish by
+ * unlocking origin.
+ */
+static void configfs_depend_rollback(struct dentry *origin,
+                                     struct config_item *item)
+{
+        struct dentry *dentry = item->ci_dentry;
+
+        while (dentry != origin) {
+                mutex_unlock(&dentry->d_inode->i_mutex);
+                dentry = dentry->d_parent;
+        }
+
+        mutex_unlock(&origin->d_inode->i_mutex);
+}
+
+int configfs_depend_item(struct configfs_subsystem *subsys,
+                         struct config_item *target)
+{
+        int ret;
+        struct configfs_dirent *p, *root_sd, *subsys_sd = NULL;
+        struct config_item *s_item = &subsys->su_group.cg_item;
+
+        /*
+         * Pin the configfs filesystem. This means we can safely access
+         * the root of the configfs filesystem.
+         */
+        ret = configfs_pin_fs();
+        if (ret)
+                return ret;
+
+        /*
+         * Next, lock the root directory. We're going to check that the
+         * subsystem is really registered, and so we need to lock out
+         * configfs_[un]register_subsystem().
+         */
+        mutex_lock(&configfs_sb->s_root->d_inode->i_mutex);
+
+        root_sd = configfs_sb->s_root->d_fsdata;
+
+        list_for_each_entry(p, &root_sd->s_children, s_sibling) {
+                if (p->s_type & CONFIGFS_DIR) {
+                        if (p->s_element == s_item) {
+                                subsys_sd = p;
+                                break;
+                        }
+                }
+        }
+
+        if (!subsys_sd) {
+                ret = -ENOENT;
+                goto out_unlock_fs;
+        }
+
+        /* Ok, now we can trust subsys/s_item */
+
+        /* Scan the tree, locking i_mutex recursively, return 0 if found */
+        ret = configfs_depend_prep(subsys_sd->s_dentry, target);
+        if (ret)
+                goto out_unlock_fs;
+
+        /* We hold all i_mutexes from the subsystem down to the target */
+        p = target->ci_dentry->d_fsdata;
+        p->s_dependent_count += 1;
+
+        configfs_depend_rollback(subsys_sd->s_dentry, target);
+
+out_unlock_fs:
+        mutex_unlock(&configfs_sb->s_root->d_inode->i_mutex);
+
+        /*
+         * If we succeeded, the fs is pinned via other methods. If not,
+         * we're done with it anyway. So release_fs() is always right.
+         */
+        configfs_release_fs();
+
+        return ret;
+}
+EXPORT_SYMBOL(configfs_depend_item);
+
+/*
+ * Release the dependent linkage. This is much simpler than
+ * configfs_depend_item() because we know that the client driver is
+ * pinned, thus the subsystem is pinned, and therefore configfs is pinned.
+ */
+void configfs_undepend_item(struct configfs_subsystem *subsys,
+                            struct config_item *target)
+{
+        struct configfs_dirent *sd;
+
+        /*
+         * Since we can trust everything is pinned, we just need i_mutex
+         * on the item.
+         */
+        mutex_lock(&target->ci_dentry->d_inode->i_mutex);
+
+        sd = target->ci_dentry->d_fsdata;
+        BUG_ON(sd->s_dependent_count < 1);
+
+        sd->s_dependent_count -= 1;
+
+        /*
+         * After this unlock, we cannot trust the item to stay alive!
+         * DO NOT REFERENCE item after this unlock.
+         */
+        mutex_unlock(&target->ci_dentry->d_inode->i_mutex);
+}
+EXPORT_SYMBOL(configfs_undepend_item);
+
 static int configfs_mkdir(struct inode *dir, struct dentry *dentry, int mode)
 {
@@ -783,7 +1042,7 @@ static int configfs_mkdir(struct inode *dir, struct dentry *dentry, int mode)
         snprintf(name, dentry->d_name.len + 1, "%s", dentry->d_name.name);
 
-        down(&subsys->su_sem);
+        mutex_lock(&subsys->su_mutex);
         group = NULL;
         item = NULL;
         if (type->ct_group_ops->make_group) {
@@ -797,7 +1056,7 @@ static int configfs_mkdir(struct inode *dir, struct dentry *dentry, int mode)
                 if (item)
                         link_obj(parent_item, item);
         }
-        up(&subsys->su_sem);
+        mutex_unlock(&subsys->su_mutex);
 
         kfree(name);
         if (!item) {
@@ -841,13 +1100,16 @@ static int configfs_mkdir(struct inode *dir, struct dentry *dentry, int mode)
 out_unlink:
         if (ret) {
                 /* Tear down everything we built up */
-                down(&subsys->su_sem);
+                mutex_lock(&subsys->su_mutex);
+
+                client_disconnect_notify(parent_item, item);
                 if (group)
                         unlink_group(group);
                 else
                         unlink_obj(item);
                 client_drop_item(parent_item, item);
-                up(&subsys->su_sem);
+
+                mutex_unlock(&subsys->su_mutex);
 
                 if (module_got)
                         module_put(owner);
@@ -881,6 +1143,13 @@ static int configfs_rmdir(struct inode *dir, struct dentry *dentry)
         if (sd->s_type & CONFIGFS_USET_DEFAULT)
                 return -EPERM;
 
+        /*
+         * Here's where we check for dependents. We're protected by
+         * i_mutex.
+         */
+        if (sd->s_dependent_count)
+                return -EBUSY;
+
         /* Get a working ref until we have the child */
         parent_item = configfs_get_config_item(dentry->d_parent);
         subsys = to_config_group(parent_item)->cg_subsys;
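
From userspace, this check surfaces as EBUSY on rmdir(2). A tiny illustration (the path is hypothetical):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    if (rmdir("/config/mysys/pinned-item") < 0 && errno == EBUSY)
            fprintf(stderr, "item is pinned by a dependent subsystem\n");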
@@ -910,17 +1179,19 @@ static int configfs_rmdir(struct inode *dir, struct dentry *dentry)
         if (sd->s_type & CONFIGFS_USET_DIR) {
                 configfs_detach_group(item);
 
-                down(&subsys->su_sem);
+                mutex_lock(&subsys->su_mutex);
+                client_disconnect_notify(parent_item, item);
                 unlink_group(to_config_group(item));
         } else {
                 configfs_detach_item(item);
 
-                down(&subsys->su_sem);
+                mutex_lock(&subsys->su_mutex);
+                client_disconnect_notify(parent_item, item);
                 unlink_obj(item);
         }
 
         client_drop_item(parent_item, item);
-        up(&subsys->su_sem);
+        mutex_unlock(&subsys->su_mutex);
 
         /* Drop our reference from above */
         config_item_put(item);
...
@@ -27,19 +27,26 @@
 #include <linux/fs.h>
 #include <linux/module.h>
 #include <linux/slab.h>
+#include <linux/mutex.h>
 #include <asm/uaccess.h>
-#include <asm/semaphore.h>
 
 #include <linux/configfs.h>
 #include "configfs_internal.h"
 
+/*
+ * A simple attribute can only be 4096 characters. Why 4k? Because the
+ * original code limited it to PAGE_SIZE. That's a bad idea, though,
+ * because an attribute of 16k on ia64 won't work on x86. So we limit to
+ * 4k, our minimum common page size.
+ */
+#define SIMPLE_ATTR_SIZE 4096
+
 struct configfs_buffer {
         size_t                  count;
         loff_t                  pos;
         char                    * page;
         struct configfs_item_operations * ops;
-        struct semaphore        sem;
+        struct mutex            mutex;
         int                     needs_read_fill;
 };
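
Clients never see SIMPLE_ATTR_SIZE (it is private to fs/configfs/file.c); they just have to keep a show_attribute() result under 4k. A sketch of a conforming attribute handler (mysys_* is hypothetical):

    static ssize_t mysys_show_attribute(struct config_item *item,
                                        struct configfs_attribute *attr,
                                        char *page)
    {
            /* Small, bounded output stays well under the 4k cap
             * that configfs now enforces with BUG_ON(). */
            return sprintf(page, "%d\n", 42);
    }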
@@ -69,7 +76,7 @@ static int fill_read_buffer(struct dentry * dentry, struct configfs_buffer * buf
         count = ops->show_attribute(item,attr,buffer->page);
         buffer->needs_read_fill = 0;
-        BUG_ON(count > (ssize_t)PAGE_SIZE);
+        BUG_ON(count > (ssize_t)SIMPLE_ATTR_SIZE);
         if (count >= 0)
                 buffer->count = count;
         else
@@ -102,7 +109,7 @@ configfs_read_file(struct file *file, char __user *buf, size_t count, loff_t *pp
         struct configfs_buffer * buffer = file->private_data;
         ssize_t retval = 0;
 
-        down(&buffer->sem);
+        mutex_lock(&buffer->mutex);
         if (buffer->needs_read_fill) {
                 if ((retval = fill_read_buffer(file->f_path.dentry,buffer)))
                         goto out;
@@ -112,7 +119,7 @@ configfs_read_file(struct file *file, char __user *buf, size_t count, loff_t *pp
         retval = simple_read_from_buffer(buf, count, ppos, buffer->page,
                                          buffer->count);
 out:
-        up(&buffer->sem);
+        mutex_unlock(&buffer->mutex);
         return retval;
 }
@@ -137,8 +144,8 @@ fill_write_buffer(struct configfs_buffer * buffer, const char __user * buf, size
         if (!buffer->page)
                 return -ENOMEM;
 
-        if (count >= PAGE_SIZE)
-                count = PAGE_SIZE - 1;
+        if (count >= SIMPLE_ATTR_SIZE)
+                count = SIMPLE_ATTR_SIZE - 1;
         error = copy_from_user(buffer->page,buf,count);
         buffer->needs_read_fill = 1;
         /* if buf is assumed to contain a string, terminate it by \0,
@@ -193,13 +200,13 @@ configfs_write_file(struct file *file, const char __user *buf, size_t count, lof
         struct configfs_buffer * buffer = file->private_data;
         ssize_t len;
 
-        down(&buffer->sem);
+        mutex_lock(&buffer->mutex);
         len = fill_write_buffer(buffer, buf, count);
         if (len > 0)
                 len = flush_write_buffer(file->f_path.dentry, buffer, count);
         if (len > 0)
                 *ppos += len;
-        up(&buffer->sem);
+        mutex_unlock(&buffer->mutex);
         return len;
 }
@@ -253,7 +260,7 @@ static int check_perm(struct inode * inode, struct file * file)
                 error = -ENOMEM;
                 goto Enomem;
         }
-        init_MUTEX(&buffer->sem);
+        mutex_init(&buffer->mutex);
         buffer->needs_read_fill = 1;
         buffer->ops = ops;
         file->private_data = buffer;
@@ -292,6 +299,7 @@ static int configfs_release(struct inode * inode, struct file * filp)
         if (buffer) {
                 if (buffer->page)
                         free_page((unsigned long)buffer->page);
+                mutex_destroy(&buffer->mutex);
                 kfree(buffer);
         }
         return 0;
...
@@ -62,7 +62,6 @@ void config_item_init(struct config_item * item)
  * dynamically allocated string that @item->ci_name points to.
  * Otherwise, use the static @item->ci_namebuf array.
  */
-
 int config_item_set_name(struct config_item * item, const char * fmt, ...)
 {
         int error = 0;
@@ -139,12 +138,7 @@ struct config_item * config_item_get(struct config_item * item)
         return item;
 }
 
-/**
- *      config_item_cleanup - free config_item resources.
- *      @item:  item.
- */
-void config_item_cleanup(struct config_item * item)
+static void config_item_cleanup(struct config_item * item)
 {
         struct config_item_type * t = item->ci_type;
         struct config_group * s = item->ci_group;
@@ -179,35 +173,31 @@ void config_item_put(struct config_item * item)
         kref_put(&item->ci_kref, config_item_release);
 }
 
 /**
  *      config_group_init - initialize a group for use
  *      @k:     group
  */
 void config_group_init(struct config_group *group)
 {
         config_item_init(&group->cg_item);
         INIT_LIST_HEAD(&group->cg_children);
 }
 
 /**
- *      config_group_find_obj - search for item in group.
+ *      config_group_find_item - search for item in group.
  *      @group: group we're looking in.
  *      @name:  item's name.
  *
- *      Lock group via @group->cg_subsys, and iterate over @group->cg_list,
- *      looking for a matching config_item. If matching item is found
- *      take a reference and return the item.
+ *      Iterate over @group->cg_list, looking for a matching config_item.
+ *      If matching item is found take a reference and return the item.
+ *      Caller must have locked group via @group->cg_subsys->su_mtx.
  */
-struct config_item * config_group_find_obj(struct config_group * group, const char * name)
+struct config_item *config_group_find_item(struct config_group *group,
+                                           const char *name)
 {
         struct list_head * entry;
         struct config_item * ret = NULL;
 
-        /* XXX LOCKING! */
         list_for_each(entry,&group->cg_children) {
                 struct config_item * item = to_item(entry);
                 if (config_item_name(item) &&
@@ -219,9 +209,8 @@ struct config_item * config_group_find_obj(struct config_group * group, const ch
         return ret;
 }
 
 EXPORT_SYMBOL(config_item_init);
 EXPORT_SYMBOL(config_group_init);
 EXPORT_SYMBOL(config_item_get);
 EXPORT_SYMBOL(config_item_put);
-EXPORT_SYMBOL(config_group_find_obj);
+EXPORT_SYMBOL(config_group_find_item);
@@ -133,14 +133,6 @@ static ssize_t cluster_set(struct cluster *cl, unsigned int *cl_field,
         return len;
 }
 
-#define __CONFIGFS_ATTR(_name,_mode,_read,_write) { \
-        .attr   = { .ca_name = __stringify(_name), \
-                    .ca_mode = _mode, \
-                    .ca_owner = THIS_MODULE }, \
-        .show   = _read, \
-        .store  = _write, \
-}
-
 #define CLUSTER_ATTR(name, check_zero) \
 static ssize_t name##_write(struct cluster *cl, const char *buf, size_t len) \
 { \
@@ -615,7 +607,7 @@ static struct clusters clusters_root = {
 int dlm_config_init(void)
 {
         config_group_init(&clusters_root.subsys.su_group);
-        init_MUTEX(&clusters_root.subsys.su_sem);
+        mutex_init(&clusters_root.subsys.su_mutex);
         return configfs_register_subsystem(&clusters_root.subsys);
 }
 
@@ -759,9 +751,9 @@ static struct space *get_space(char *name)
         if (!space_list)
                 return NULL;
 
-        down(&space_list->cg_subsys->su_sem);
-        i = config_group_find_obj(space_list, name);
-        up(&space_list->cg_subsys->su_sem);
+        mutex_lock(&space_list->cg_subsys->su_mutex);
+        i = config_group_find_item(space_list, name);
+        mutex_unlock(&space_list->cg_subsys->su_mutex);
 
         return to_space(i);
 }
@@ -780,7 +772,7 @@ static struct comm *get_comm(int nodeid, struct sockaddr_storage *addr)
         if (!comm_list)
                 return NULL;
 
-        down(&clusters_root.subsys.su_sem);
+        mutex_lock(&clusters_root.subsys.su_mutex);
 
         list_for_each_entry(i, &comm_list->cg_children, ci_entry) {
                 cm = to_comm(i);
@@ -800,7 +792,7 @@ static struct comm *get_comm(int nodeid, struct sockaddr_storage *addr)
                         break;
                 }
         }
-        up(&clusters_root.subsys.su_sem);
+        mutex_unlock(&clusters_root.subsys.su_mutex);
 
         if (!found)
                 cm = NULL;
...
This diff is collapsed.
@@ -34,7 +34,17 @@ int ocfs2_insert_extent(struct ocfs2_super *osb,
                         u32 cpos,
                         u64 start_blk,
                         u32 new_clusters,
+                        u8 flags,
                         struct ocfs2_alloc_context *meta_ac);
+struct ocfs2_cached_dealloc_ctxt;
+int ocfs2_mark_extent_written(struct inode *inode, struct buffer_head *di_bh,
+                              handle_t *handle, u32 cpos, u32 len, u32 phys,
+                              struct ocfs2_alloc_context *meta_ac,
+                              struct ocfs2_cached_dealloc_ctxt *dealloc);
+int ocfs2_remove_extent(struct inode *inode, struct buffer_head *di_bh,
+                        u32 cpos, u32 len, handle_t *handle,
+                        struct ocfs2_alloc_context *meta_ac,
+                        struct ocfs2_cached_dealloc_ctxt *dealloc);
 int ocfs2_num_free_extents(struct ocfs2_super *osb,
                            struct inode *inode,
                            struct ocfs2_dinode *fe);
@@ -62,17 +72,41 @@ int ocfs2_begin_truncate_log_recovery(struct ocfs2_super *osb,
                                       struct ocfs2_dinode **tl_copy);
 int ocfs2_complete_truncate_log_recovery(struct ocfs2_super *osb,
                                          struct ocfs2_dinode *tl_copy);
+int ocfs2_truncate_log_needs_flush(struct ocfs2_super *osb);
+int ocfs2_truncate_log_append(struct ocfs2_super *osb,
+                              handle_t *handle,
+                              u64 start_blk,
+                              unsigned int num_clusters);
+int __ocfs2_flush_truncate_log(struct ocfs2_super *osb);
+
+/*
+ * Process local structure which describes the block unlinks done
+ * during an operation. This is populated via
+ * ocfs2_cache_block_dealloc().
+ *
+ * ocfs2_run_deallocs() should be called after the potentially
+ * de-allocating routines. No journal handles should be open, and most
+ * locks should have been dropped.
+ */
+struct ocfs2_cached_dealloc_ctxt {
+        struct ocfs2_per_slot_free_list *c_first_suballocator;
+};
+static inline void ocfs2_init_dealloc_ctxt(struct ocfs2_cached_dealloc_ctxt *c)
+{
+        c->c_first_suballocator = NULL;
+}
+int ocfs2_run_deallocs(struct ocfs2_super *osb,
+                       struct ocfs2_cached_dealloc_ctxt *ctxt);
 
 struct ocfs2_truncate_context {
-        struct inode *tc_ext_alloc_inode;
-        struct buffer_head *tc_ext_alloc_bh;
+        struct ocfs2_cached_dealloc_ctxt tc_dealloc;
         int tc_ext_alloc_locked; /* is it cluster locked? */
         /* these get destroyed once it's passed to ocfs2_commit_truncate. */
         struct buffer_head *tc_last_eb_bh;
 };
 
-int ocfs2_zero_tail_for_truncate(struct inode *inode, handle_t *handle,
-                                 u64 new_i_size);
+int ocfs2_zero_range_for_truncate(struct inode *inode, handle_t *handle,
+                                  u64 range_start, u64 range_end);
 int ocfs2_prepare_truncate(struct ocfs2_super *osb,
                            struct inode *inode,
                            struct buffer_head *fe_bh,
@@ -84,6 +118,7 @@ int ocfs2_commit_truncate(struct ocfs2_super *osb,
 int ocfs2_find_leaf(struct inode *inode, struct ocfs2_extent_list *root_el,
                     u32 cpos, struct buffer_head **leaf_bh);
+int ocfs2_search_extent_list(struct ocfs2_extent_list *el, u32 v_cluster);
 
 /*
  * Helper function to look at the # of clusters in an extent record.
...
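
The calling convention the new comment describes might look like this in a caller (a sketch; handle and locking setup omitted, variable names assumed):

    struct ocfs2_cached_dealloc_ctxt dealloc;

    ocfs2_init_dealloc_ctxt(&dealloc);

    /* journalled work that may unlink blocks */
    ret = ocfs2_remove_extent(inode, di_bh, cpos, len, handle,
                              meta_ac, &dealloc);

    /* ... commit the handle, drop cluster locks ... */

    /* then free everything the operation queued up */
    ocfs2_run_deallocs(osb, &dealloc);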
This diff is collapsed.
@@ -42,57 +42,22 @@ int walk_page_buffers( handle_t *handle,
                        int (*fn)( handle_t *handle,
                                   struct buffer_head *bh));
 
-struct ocfs2_write_ctxt;
-typedef int (ocfs2_page_writer)(struct inode *, struct ocfs2_write_ctxt *,
-                                u64 *, unsigned int *, unsigned int *);
-
-ssize_t ocfs2_buffered_write_cluster(struct file *file, loff_t pos,
-                                     size_t count, ocfs2_page_writer *actor,
-                                     void *priv);
-
-struct ocfs2_write_ctxt {
-        size_t w_count;
-        loff_t w_pos;
-        u32 w_cpos;
-        unsigned int w_finished_copy;
-
-        /* This is true if page_size > cluster_size */
-        unsigned int w_large_pages;
-
-        /* Filler callback and private data */
-        ocfs2_page_writer *w_write_data_page;
-        void *w_private;
-
-        /* Only valid for the filler callback */
-        struct page *w_this_page;
-        unsigned int w_this_page_new;
-};
-
-struct ocfs2_buffered_write_priv {
-        char *b_src_buf;
-        const struct iovec *b_cur_iov; /* Current iovec */
-        size_t b_cur_off; /* Offset in the
-                           * current iovec */
-};
-int ocfs2_map_and_write_user_data(struct inode *inode,
-                                  struct ocfs2_write_ctxt *wc,
-                                  u64 *p_blkno,
-                                  unsigned int *ret_from,
-                                  unsigned int *ret_to);
-
-struct ocfs2_splice_write_priv {
-        struct splice_desc *s_sd;
-        struct pipe_buffer *s_buf;
-        struct pipe_inode_info *s_pipe;
-        /* Neither offset value is ever larger than one page */
-        unsigned int s_offset;
-        unsigned int s_buf_offset;
-};
-int ocfs2_map_and_write_splice_data(struct inode *inode,
-                                    struct ocfs2_write_ctxt *wc,
-                                    u64 *p_blkno,
-                                    unsigned int *ret_from,
-                                    unsigned int *ret_to);
+int ocfs2_write_begin(struct file *file, struct address_space *mapping,
+                      loff_t pos, unsigned len, unsigned flags,
+                      struct page **pagep, void **fsdata);
+
+int ocfs2_write_end(struct file *file, struct address_space *mapping,
+                    loff_t pos, unsigned len, unsigned copied,
+                    struct page *page, void *fsdata);
+
+int ocfs2_write_end_nolock(struct address_space *mapping,
+                           loff_t pos, unsigned len, unsigned copied,
+                           struct page *page, void *fsdata);
+
+int ocfs2_write_begin_nolock(struct address_space *mapping,
+                             loff_t pos, unsigned len, unsigned flags,
+                             struct page **pagep, void **fsdata,
+                             struct buffer_head *di_bh, struct page *mmap_page);
 
 /* all ocfs2_dio_end_io()'s fault */
 #define ocfs2_iocb_is_rw_locked(iocb) \
...
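
The new functions follow the usual begin/copy/end contract of the buffered write path; roughly (a sketch of a caller, not ocfs2's actual code):

    struct page *page;
    void *fsdata;

    ret = ocfs2_write_begin(file, mapping, pos, len, flags,
                            &page, &fsdata);
    if (ret)
            goto out;

    /* copy 'len' bytes of user data into the prepared page at 'pos' */

    ret = ocfs2_write_end(file, mapping, pos, len, copied,
                          page, fsdata);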
@@ -1335,6 +1335,7 @@ static ssize_t o2hb_region_dev_write(struct o2hb_region *reg,
         ret = wait_event_interruptible(o2hb_steady_queue,
                                 atomic_read(&reg->hr_steady_iterations) == 0);
         if (ret) {
+                /* We got interrupted (hello ptrace!). Clean up */
                 spin_lock(&o2hb_live_lock);
                 hb_task = reg->hr_task;
                 reg->hr_task = NULL;
@@ -1345,7 +1346,16 @@ static ssize_t o2hb_region_dev_write(struct o2hb_region *reg,
                 goto out;
         }
 
-        ret = count;
+        /* Ok, we were woken. Make sure it wasn't by drop_item() */
+        spin_lock(&o2hb_live_lock);
+        hb_task = reg->hr_task;
+        spin_unlock(&o2hb_live_lock);
+
+        if (hb_task)
+                ret = count;
+        else
+                ret = -EIO;
+
 out:
         if (filp)
                 fput(filp);
@@ -1523,6 +1533,15 @@ static void o2hb_heartbeat_group_drop_item(struct config_group *group,
         if (hb_task)
                 kthread_stop(hb_task);
 
+        /*
+         * If we're racing a dev_write(), we need to wake them. They will
+         * check reg->hr_task
+         */
+        if (atomic_read(&reg->hr_steady_iterations) != 0) {
+                atomic_set(&reg->hr_steady_iterations, 0);
+                wake_up(&o2hb_steady_queue);
+        }
+
         config_item_put(item);
 }
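
Condensed, the two sides of this handshake (both fragments appear in the hunks above) are:

    /* dev_write(): sleep until the region is steady, then re-read
     * reg->hr_task to distinguish "steady" from "torn down" */
    ret = wait_event_interruptible(o2hb_steady_queue,
                    atomic_read(&reg->hr_steady_iterations) == 0);

    /* drop_item(): force the wait condition true and wake the writer */
    atomic_set(&reg->hr_steady_iterations, 0);
    wake_up(&o2hb_steady_queue);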
@@ -1665,7 +1684,67 @@ void o2hb_setup_callback(struct o2hb_callback_func *hc,
 }
 EXPORT_SYMBOL_GPL(o2hb_setup_callback);
 
-int o2hb_register_callback(struct o2hb_callback_func *hc)
+static struct o2hb_region *o2hb_find_region(const char *region_uuid)
+{
+        struct o2hb_region *p, *reg = NULL;
+
+        assert_spin_locked(&o2hb_live_lock);
+
+        list_for_each_entry(p, &o2hb_all_regions, hr_all_item) {
+                if (!strcmp(region_uuid, config_item_name(&p->hr_item))) {
+                        reg = p;
+                        break;
+                }
+        }
+
+        return reg;
+}
+
+static int o2hb_region_get(const char *region_uuid)
+{
+        int ret = 0;
+        struct o2hb_region *reg;
+
+        spin_lock(&o2hb_live_lock);
+
+        reg = o2hb_find_region(region_uuid);
+        if (!reg)
+                ret = -ENOENT;
+        spin_unlock(&o2hb_live_lock);
+
+        if (ret)
+                goto out;
+
+        ret = o2nm_depend_this_node();
+        if (ret)
+                goto out;
+
+        ret = o2nm_depend_item(&reg->hr_item);
+        if (ret)
+                o2nm_undepend_this_node();
+
+out:
+        return ret;
+}
+
+static void o2hb_region_put(const char *region_uuid)
+{
+        struct o2hb_region *reg;
+
+        spin_lock(&o2hb_live_lock);
+
+        reg = o2hb_find_region(region_uuid);
+
+        spin_unlock(&o2hb_live_lock);
+
+        if (reg) {
+                o2nm_undepend_item(&reg->hr_item);
+                o2nm_undepend_this_node();
+        }
+}
+
+int o2hb_register_callback(const char *region_uuid,
+                           struct o2hb_callback_func *hc)
 {
         struct o2hb_callback_func *tmp;
         struct list_head *iter;
@@ -1681,6 +1760,12 @@ int o2hb_register_callback(struct o2hb_callback_func *hc)
                 goto out;
         }
 
+        if (region_uuid) {
+                ret = o2hb_region_get(region_uuid);
+                if (ret)
+                        goto out;
+        }
+
         down_write(&o2hb_callback_sem);
 
         list_for_each(iter, &hbcall->list) {
@@ -1702,16 +1787,21 @@ int o2hb_register_callback(struct o2hb_callback_func *hc)
 }
 EXPORT_SYMBOL_GPL(o2hb_register_callback);
 
-void o2hb_unregister_callback(struct o2hb_callback_func *hc)
+void o2hb_unregister_callback(const char *region_uuid,
+                              struct o2hb_callback_func *hc)
 {
         BUG_ON(hc->hc_magic != O2HB_CB_MAGIC);
 
         mlog(ML_HEARTBEAT, "on behalf of %p for funcs %p\n",
              __builtin_return_address(0), hc);
 
+        /* XXX Can this happen _with_ a region reference? */
         if (list_empty(&hc->hc_item))
                 return;
 
+        if (region_uuid)
+                o2hb_region_put(region_uuid);
+
         down_write(&o2hb_callback_sem);
 
         list_del_init(&hc->hc_item);
...
@@ -69,8 +69,10 @@ void o2hb_setup_callback(struct o2hb_callback_func *hc,
                          o2hb_cb_func *func,
                          void *data,
                          int priority);
-int o2hb_register_callback(struct o2hb_callback_func *hc);
-void o2hb_unregister_callback(struct o2hb_callback_func *hc);
+int o2hb_register_callback(const char *region_uuid,
+                           struct o2hb_callback_func *hc);
+void o2hb_unregister_callback(const char *region_uuid,
+                              struct o2hb_callback_func *hc);
 void o2hb_fill_node_map(unsigned long *map,
                         unsigned bytes);
 void o2hb_init(void);
...
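
Callers now pass the region they care about, or NULL for no dependency. A hedged usage sketch (mysys_* and MYSYS_PRI are hypothetical):

    static struct o2hb_callback_func mysys_down_cb;

    o2hb_setup_callback(&mysys_down_cb, O2HB_NODE_DOWN_CB,
                        mysys_node_down, NULL, MYSYS_PRI);

    /* registering against a UUID pins that region (and this node)
     * for as long as the callback stays registered */
    ret = o2hb_register_callback(region_uuid, &mysys_down_cb);
    ...
    o2hb_unregister_callback(region_uuid, &mysys_down_cb);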
This diff is collapsed.
@@ -77,4 +77,9 @@ struct o2nm_node *o2nm_get_node_by_ip(__be32 addr);
 void o2nm_node_get(struct o2nm_node *node);
 void o2nm_node_put(struct o2nm_node *node);
 
+int o2nm_depend_item(struct config_item *item);
+void o2nm_undepend_item(struct config_item *item);
+int o2nm_depend_this_node(void);
+void o2nm_undepend_this_node(void);
+
 #endif /* O2CLUSTER_NODEMANAGER_H */
The remaining file diffs are collapsed.