After you deploy OceanBase Deployer (OBD), you can run the `obd demo` command to quickly deploy and start an OceanBase cluster for testing. Before you run the command, make sure that the following prerequisites are met:
- At least 54 GB of disk space is available on the server.
- Your server can connect to the network, or the installation packages required for deployment are already available on the server.
> **Note**
>
> If the foregoing prerequisites are not met, see [Use OBD to start an OceanBase cluster](../3.user-guide/2.start-the-oceanbase-cluster-by-using-obd.md).
For more information about the relevant configuration items in the configuration file, refer to [Configuration file description](../../4.configuration-file-description.md).
> **Notice**
>
> This command supports only level-1 configurations under global that are specified by using options.
OBD provides multiple-level commands. You can use the `-h/--help` option to view the help information of subcommands. Similarly, you can use the `-v/--verbose` option to view the detailed execution process when a subcommand reports an error.
A deployment configuration is the minimum unit for OBD cluster commands. A deployment configuration is a `yaml` file. It contains all configuration information of a deployment, including the server login information, component information, component configuration information, and component server list.
To start a cluster by using OBD, you must register the deployment configuration of your cluster to OBD. You can run the `obd cluster edit-config` command to create an empty deployment configuration or run the `obd cluster deploy -c config` command to import a deployment configuration.
## obd cluster autodeploy
When you pass a simple configuration file to OBD, OBD will automatically generate a complete configuration file with the maximum specifications based on the resources of the target server, and then deploy and start a cluster on the target server.
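A usage sketch that follows the pattern of the other cluster commands in this guide (the configuration file path is a placeholder):

```shell
obd cluster autodeploy <deploy name> -c <yaml path> [flags]
# example
obd cluster autodeploy test -c example.yaml
```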
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
The following table describes the corresponding options.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| -c/--config | Yes | string | None | Specifies the yaml file used for deployment and registers the deployment configuration to OBD. <br>When the `deploy name` already exists, OBD will check the status of the existing deployment configuration. If the existing deployment configuration has not been applied, it will be overwritten. If the existing deployment configuration is in use, an error will be returned. |
| -f/--force | No | bool | false | Specifies whether to forcibly clear the working directory. <br>When the component requires an empty working directory but this option is disabled, an error will be returned if the working directory is not empty. |
| -C/--clean | No | bool | false | Specifies whether to clear the working directory. When the working directory (`home_path`) belongs to the current operating user and this option is true, the working directory will be cleared. |
| -U/--ulp/--unuselibrepo | No | bool | false | Specifies whether to prevent OBD from automatically taking actions when dependencies are missing. If this option is disabled and OBD detects that some dependencies are missing, OBD will automatically search for the corresponding libs mirrors and install them. If this option is enabled, the **unuse_lib_repository: true** field will be added to the corresponding configuration file. You can also add the **unuse_lib_repository: true** field to the configuration file to enable this option. |
| -A/--act/--auto-create-tenant | No | bool | false | Specifies whether to enable OBD to create the `test` tenant during the bootstrap by using all available resources of the cluster. If this option is enabled, the **auto_create_tenant: true** field will be added to the corresponding configuration file. You can also add the **auto_create_tenant: true** field to the configuration file to enable this option. |
| -s/--strict-check | No | bool | false | Some components run pre-start checks and only issue an alarm when a check fails, without stopping the startup. When this option is enabled, OBD returns an error and exits the process when a component pre-check fails. We recommend that you enable this option to avoid startup failures due to insufficient resources. |
## obd cluster edit-config
Modifies a deployment configuration or creates one when the specified deployment configuration does not exist.
```shell
obd cluster edit-config <deploy name>
# example
obd cluster edit-config test
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
## obd cluster deploy
Deploys a cluster based on the deployment configuration file. This command finds the matching mirror based on the configuration file and installs the mirror in a local repository. This process is called local installation.
Then, OBD distributes the components of the required version in the local repository to the target server. This process is called remote installation.
...
...
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
The following table describes the corresponding options.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| -c/--config | No | string | None | Specifies the yaml file used for deployment and registers the deployment configuration to OBD. <br>If this option is enabled and a deployment configuration of the specified `deploy name` already exists, the existing deployment configuration will be overwritten. <br>If this option is not enabled, OBD will search for the registered deployment configuration of the specified `deploy name`. |
| -f/--force | No | bool | false | Specifies whether to forcibly clear the working directory. <br>When the component requires an empty working directory but this option is disabled, an error will be returned if the working directory is not empty. |
| -C/--clean | No | bool | false | Specifies whether to clear the working directory. When the working directory (`home_path`) belongs to the current operating user and this option is true, the working directory will be cleared. |
| -U/--ulp/--unuselibrepo | No | bool | false | Specifies whether to prevent OBD from automatically taking actions when dependencies are missing. If this option is disabled and OBD detects that some dependencies are missing, OBD will automatically search for the corresponding libs mirrors and install them. If this option is enabled, the **unuse_lib_repository: true** field will be added to the corresponding configuration file. You can also add the **unuse_lib_repository: true** field to the configuration file to enable this option. |
| -A/--act/--auto-create-tenant | No | bool | false | Specifies whether to enable OBD to create the `test` tenant during the bootstrap by using all available resources of the cluster. If this option is enabled, the **auto_create_tenant: true** field will be added to the corresponding configuration file. You can also add the **auto_create_tenant: true** field to the configuration file to enable this option. |
## obd cluster start
Starts a deployed cluster. If the cluster is started, OBD will return its status.
```shell
obd cluster start <deploy name> [flags]
# example
obd cluster start test -S
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
This table describes the corresponding options.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| -s/--servers | No | string | Empty | A list of machines, separated by commas (`,`). Each machine is specified by the `name` value corresponding to `servers` in the `yaml` file; if the `name` value is not configured after `servers`, the `ip` value is used. Used to specify the machines to be started. If not all machines under a component are started, bootstrap will not be executed. |
| -c/--components | No | string | Empty | A list of components, separated by commas (`,`). Used to specify the components to be started. If not all components under the deployment configuration are started, the configuration will not enter the running state. |
| --wop/--without-parameter | No | bool | false | Starts the cluster without parameters. This option does not take effect when a node is started for the first time. |
| -S/--strict-check | No | bool | false | Some components run pre-start checks and only issue an alarm when a check fails, without stopping the startup. When this option is enabled, OBD returns an error and exits the process when a component pre-check fails. We recommend that you enable this option to avoid startup failures due to insufficient resources. |
## obd cluster list
Shows the status of all clusters that have been registered to OBD. The cluster names are specified by the deploy name parameter.
...
...
```shell
obd cluster list
```
## obd cluster display
Shows the status of the specified cluster.
```shell
obd cluster display <deploy name>
# example
obd cluster display test
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
## obd cluster reload
Reloads a running cluster. After you modify the configuration information of a running cluster by using the `edit-config` command, you can run the `reload` command to let your modification take effect.
> **NOTE:**
>
> Some configuration items may not take effect after you run the `reload` command. You need to restart or even redeploy the cluster for these configuration items to take effect. Do operations based on the result returned by the `edit-config` command.
```shell
obd cluster reload <deploy name>
# example
obd cluster reload test
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
## obd cluster restart
Restarts a running cluster. By default, OBD restarts without any parameters. After you run the edit-config command to modify the configuration information of a running cluster, you can run the `restart` command for the modification to take effect.
> **NOTE:**
>
> Some configuration items may not take effect after you run the `restart` command. You even need to redeploy the cluster for some configuration items to take effect. Perform operations based on the result returned by the edit-config command.
```shell
obd cluster restart <deploy name> [flags]
# example
obd cluster restart test -c obproxy-ce --wp
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
This table describes the corresponding options.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| -s/--servers | No | string | Empty | A list of machines, separated by commas (`,`). Each machine is specified by the `name` value corresponding to `servers` in the `yaml` file; if the `name` value is not configured after `servers`, the `ip` value is used. Used to specify the machines to be restarted. |
| -c/--components | No | string | Empty | A list of components, separated by commas (`,`). Used to specify the components to be restarted. If not all components under the deployment configuration are started, the configuration will not enter the running state. |
| --wp/--with-parameter | No | bool | false | Restarts with parameters. Use this option to make modified parameters take effect during the restart. |
## obd cluster redeploy
Redeploys a running cluster. After you run the `edit-config` command to modify the configuration information of a running cluster, you can run the `redeploy` command to let your modification take effect.
> **NOTE:**
>
> This command destroys the cluster and redeploys it. Data in the cluster will be lost. Please back up the data before you run this command.
```shell
obd cluster redeploy <deploy name> [-f]
# example
obd cluster redeploy test -f
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
Before OBD redeploys the cluster, it will check for running processes. These processes may result from the failure of the `obd cluster start` command. They may also belong to other clusters when configurations of this cluster overlap with those of other clusters. If an ongoing process is found in the working directory, OBD will stop the `obd cluster redeploy` command.
`-f` is `--force-kill`. This option specifies whether to forcibly stop running processes in the working directory. If this option is enabled, OBD will forcibly stop the ongoing processes and run the `obd cluster redeploy` command. `-f` is optional. Its data type is `bool`. This option is disabled by default.
## obd cluster stop
Stops a running cluster.
```shell
obd cluster stop <deploy name> [flags]
# example
obd cluster stop test -s server1
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
This table describes the corresponding options.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| -s/--servers | No | string | Empty | A list of machines, separated by commas (`,`). Each machine is specified by the `name` value corresponding to `servers` in the `yaml` file; if the `name` value is not configured after `servers`, the `ip` value is used. Used to specify the machines to be stopped. |
| -c/--components | No | string | Empty | A list of components, separated by commas (`,`). Used to specify the components to be stopped. If not all components under the deployment configuration are stopped, the configuration will not enter the stopped state. |
## obd cluster destroy
Destroys a deployed cluster. If the cluster is running, this command first tries to run `stop`, and then runs `destroy` after the cluster is stopped.
```shell
obd cluster destroy <deploy name> [-f]
# example
obd cluster destroy test -f
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
Before OBD destroys the cluster, it will check for running processes. These processes may result from the failure of the `obd cluster start` command. They may also belong to other clusters when configurations of this cluster overlap with those of other clusters. If an ongoing process is found in the working directory, OBD will stop the `obd cluster destroy` command.
`-f` is `--force-kill`. This option specifies whether to forcibly stop running processes in the working directory. If this option is enabled, OBD will forcibly stop the ongoing processes and run the `obd cluster destroy` command. `-f` is optional. Its data type is `bool`. This option is disabled by default.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| -c/--component | Yes | string | Null | The name of the component whose repository is to be replaced. |
|--hash | Yes | string | Null | The hash value of the target repository. The target repository must be of the same version as the current repository. |
| -f/--force | No | Bool | false | Specifies whether to enable forced replacement even if the restart fails. |
## obd cluster tenant create
Creates a tenant. This command applies only to an OceanBase cluster. This command automatically creates resource units and resource pools.
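A usage sketch based on the options below (the tenant name is a placeholder):

```shell
obd cluster tenant create <deploy name> [-n <tenant name>] [flags]
# example
obd cluster tenant create test -n obmysql
```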
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
This table describes the corresponding options.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| -t/-n/--tenant-name | No | string | test | The tenant name. OBD will automatically generate resource units and resource pools with unique names based on the tenant name. |
| --max-cpu | No | float | 0 | The maximum number of CPU cores available for the tenant. When this option is set to 0, all available CPU cores of the cluster can be used by the tenant. |
| --min-cpu | No | float | 0 | The minimum number of CPU cores available for the tenant. When this option is set to 0, the minimum number of CPU cores is the same as the maximum number of CPU cores. |
| --max-memory | No | int | 0 | The maximum memory capacity available for the tenant. When this option is set to 0, all available memory capacity of the cluster can be used by the tenant. When the actual value is less than 1 GB, an error is returned. <blockquote> Not supported since OceanBase Database V4.0.0.0. Use `--memory-size` instead. </blockquote> |
| --min-memory | No | int | 0 | The minimum memory capacity available for the tenant. When this option is set to 0, the minimum memory capacity is the same as the maximum memory capacity. <blockquote> Not supported since OceanBase Database V4.0.0.0. Use `--memory-size` instead. </blockquote> |
| --memory-size | No | int | 0 | The memory unit size available for the tenant. Supported since OceanBase Database V4.0.0.0. |
| --max-disk-size | No | int | 0 | The maximum disk space available for the tenant. When this option is set to 0, all available disk space of the cluster can be used by the tenant. If the actual value is less than 512 MB, an error is returned. <blockquote> Not supported since OceanBase Database V4.0.0.0. </blockquote> |
| --log-disk-size | No | int | 0 | The log disk size of the tenant's unit. The default value is 3 times the memory specification. The minimum value is `2G`. |
| --max-iops | No | int | <ul><li>128 for versions earlier than OceanBase Database V4.0.0.0. </li><li>1024 since OceanBase Database V4.0.0.0. </li></ul> | The maximum IOPS for the tenant. The value range depends on the OceanBase Database version: <ul><li>[128, +∞) for versions earlier than V4.0.0.0. </li><li>[1024, +∞) since V4.0.0.0. </li></ul> |
| --min-iops | No | int | 0 | The minimum IOPS for the tenant. The value range is the same as that of `--max-iops`. When this option is set to 0, the minimum IOPS is the same as the maximum IOPS. |
| --iops-weight | No | int | 0 | The IOPS weight of the tenant. Supported since OceanBase Database V4.0.0.0. |
| --max-session-num | No | int | 64 | The maximum number of sessions allowed for the tenant. Value range: [64, +∞). <blockquote> Not supported since OceanBase Database V4.0.0.0. </blockquote> |
| --unit-num | No | int | 0 | The number of units to be created in a zone. It must be less than the number of OBServers in the zone. When this option is set to 0, the maximum value is used. |
| -z/--zone-list | No | string | Empty | Specifies the list of zones of the tenant. Separate multiple zones with commas (,). If this option is not specified, all zones of the cluster are included. |
| --primary-zone | No | string | RANDOM | The primary zone of the tenant. |
| --charset | No | string | Empty | The character set of the tenant. |
| --collate | No | string | Empty | The collation of the tenant. |
| --replica-num | No | int | 0 | The number of replicas of the tenant. When this option is set to 0, the number of replicas is the same as that of zones. |
| --logonly-replica-num | No | string | 0 | The number of log replicas of the tenant. When this option is set to 0, the number of log replicas is the same as that of replicas. |
| --tablegroup | No | string | Empty | The default table group of the tenant. |
| --locality | No | string | Empty | The distribution status of replicas across zones. For example, F@z1,F@z2,F@z3,R@z4 means that z1, z2, and z3 are full-featured replicas and z4 is a read-only replica. |
| -s/--variables | No | string | ob_tcp_invited_nodes='%' | The system variables of the tenant. |
## obd cluster tenant drop
Deletes a tenant. This command applies only to an OceanBase cluster. This command automatically deletes the corresponding resource units and resource pools.
```shell
obd cluster tenant drop <deploy name> [-n <tenant name>]
# example
obd cluster tenant drop test -n obmysql
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
`-n` is `--tenant-name`. This option specifies the name of the tenant to be deleted. This option is required.
...
...
OBD provides multiple-level commands. You can use the `-h/--help` option to view the help information of subcommands. Similarly, you can use the `-v/--verbose` option to view the detailed execution process when a subcommand reports an error.
## obd mirror clone
Copies an RPM package to the local mirror repository. You can then run the corresponding OBD cluster command to start the mirror.
...
...
```shell
obd mirror clone <path> [-f]
```
The `-f` option is `--force`. `-f` is optional. This option is disabled by default. If it is enabled and a mirror of the same name exists in the repository, the copied mirror will forcibly overwrite the existing one.
## obd mirror create
Creates a mirror based on the local directory. When OBD starts a user-compiled open-source OceanBase software, you can run this command to add the compilation output to the local repository. Then, you can run the corresponding `obd cluster` command to start the mirror.
...
...
For example, you can [compile an OceanBase cluster based on the source code](https://en.oceanbase.com/docs/community-observer-en-10000000000209369). Then, you can run the `make DESTDIR=./ install && obd mirror create -n oceanbase-ce -V 3.1.0 -p ./usr/local` command to add the compilation output to the local repository of OBD.
This table describes the corresponding options.
| Option | Required | Data type | Description |
|----|-----|-----|----|
| -n/--name | Yes | string | The component name. If you want to compile an OceanBase cluster, set this option to oceanbase-ce. If you want to compile ODP, set this option to obproxy-ce. |
| -p/--path | Yes | string | The directory that stores the compilation output. OBD will automatically retrieve files required by the component from this directory. |
| -t/--tag | No | string | The mirror tags. You can define one or more tags for the created mirror. Separate multiple tags with commas (,). |
| -f/--force | No | bool | Specifies whether to forcibly overwrite an existing mirror or tag. This option is disabled by default. |
## obd mirror list
Shows the mirror repository or mirror list.
...
...
```shell
obd mirror list [mirror repo name]
```
`mirror repo name` specifies the mirror repository name. This parameter is optional. When it is not specified, all mirror repositories will be returned. When it is specified, only the specified mirror repository will be returned.
## obd mirror update
Synchronizes the information of all remote mirror repositories.
...
...
```shell
obd mirror update
```
## obd mirror disable
Disables remote mirror repositories. To disable all the remote mirror repositories, run the `obd mirror disable remote` command.
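A usage sketch based on the description above:

```shell
obd mirror disable <mirror repo name>
# example: disable all remote mirror repositories
obd mirror disable remote
```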
OBD provides multiple-level commands. You can use the `-h/--help` option to view the help information of subcommands. Similarly, you can use the `-v/--verbose` option to view the detailed execution process when a subcommand reports an error.
## obd test mysqltest
Runs the mysqltest on the specified node of an OceanBase cluster or ODP. To run the mysqltest, you must install OBClient.
...
...
```shell
obd test mysqltest <deploy name> [--test-set <test-set>] [flags]
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
This table describes the corresponding options.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| --component | No | string | Empty | The name of the component to be tested. Valid values: `oceanbase-ce`, `oceanbase`, `obproxy-ce` and `obproxy`. If you do not specify a value, the existence of `obproxy`, `obproxy-ce`, `oceanbase`, `oceanbase-ce` is checked sequentially. The traversal stops when a component is found, and the component is then tested. |
| --test-server | No | string | The first node of the specified component. | The machine to be tested, specified by the `name` value corresponding to `servers` in the `yaml` file. If the `name` value is not configured after `servers`, the `ip` value is used. It must be the name of a node of the specified component. |
| --user | No | string | admin | The username for running the test. |
| --password | No | string | admin | The password for running the test. |
| --database | No | String | test | The database where the test is to be performed. |
| --mysqltest-bin | No | string | /u01/obclient/bin/mysqltest | The path of the mysqltest binary file. |
| --obclient-bin | No | string | obclient | The path of the OBClient binary file. |
| --test-dir | No | string | ./mysql_test/t | The directory that stores the test file required for the mysqltest. If no test file is found in the directory, OBD will search for a built-in test file. |
| --test-file-suffix | No | String | .test | The suffix of the test file required by mysqltest. |
...
...
| --test-set | No | string | None | The array of test cases. Separate multiple cases with commas (`,`). |
| --exclude | No | String | None | The array of test cases to be excluded. Separate multiple cases with commas (`,`). |
| --test-pattern | No | string | None | The regular expression for matching test file names. Test cases matching the regular expression will overwrite the values of the test-set option. |
| --suite | No | string | None | The suite array. A suite contains multiple tests. Separate multiple tests with commas (`,`). If this option is enabled, the --test-pattern and --test-set options will become invalid. |
| --suite-dir | No | string | ./mysql_test/test_suite | The directory that stores the suite directory. If no suite directory is found in the directory, OBD will search for a built-in suite directory. |
| --all | No | bool | false | Specifies whether to run all test cases in the directory specified for the --suite-dir option. The --suite-dir option specifies the directory that stores the suite directory. |
| --need-init | No | bool | false | Specifies whether to run the init sql files. Before OBD runs the mysqltest on a new cluster, it may run some initialization files. For example, it may create some accounts or tenants required for running the test cases. The --suite-dir option specifies the directory that stores the suite directory. This option is disabled by default. |
| --init-sql-dir | No | string | ./ | The directory that stores the init sql files. If no init sql file is found in the directory, OBD will search for built-in init sql files. |
| --init-sql-files | No | string | Empty | The init sql files to be run when initialization is required. Separate multiple init sql files with commas (`,`). If this option is not specified but initialization is required, OBD will run the built-in init files based on the cluster configurations. |
| --auto-retry | No | bool | false | Specifies whether to automatically redeploy the cluster for a retry after a test fails. |
| --psmall | No | Bool | false | Specifies whether to execute the cases in psmall mode. |
| --slices | No | Int | Empty | The number of slices into which the case to be executed is divided. |
...
...
@@ -55,8 +55,10 @@ This table describes the corresponding options.
| --log-pattern | No | String | *.log | The regular expression that is used to match log file names. Files that match the expression are collected. |
| --case-timeout | No | Int | 3600 | The timeout period for a single test of mysqltest. |
| --disable-reboot | No | Bool | false | Specifies whether to disable restart during the test. |
| --collect-components | No | string | Empty | The components whose logs are to be collected. Separate multiple components with commas (`,`). |
| --init-only | No | bool | false | If this option is set to true, only the init SQL is executed. |
## obd test sysbench
Runs the Sysbench test on the specified node of an OceanBase cluster or ODP. To run the Sysbench test, you must install OBClient and ob-sysbench.
...
...
```shell
obd test sysbench <deploy name> [flags]
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
This table describes the corresponding options.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| --component | No | string | Empty | The name of the component to be tested. Valid values: `oceanbase-ce`, `oceanbase`, `obproxy-ce` and `obproxy`. If you do not specify a value, the existence of `obproxy`, `obproxy-ce`, `oceanbase`, `oceanbase-ce` is checked sequentially. The traversal stops when a component is found, and the component is then tested. |
| --test-server | No | string | The first node of the specified component. | The machine to be tested, specified by the `name` value corresponding to `servers` in the `yaml` file. If the `name` value is not configured after `servers`, the `ip` value is used. It must be the name of a node of the specified component. |
| --user | No | string | root | The username for running the test. |
| --password | No | string | Empty | The password for running the test. |
| -t/--tenant | No | string | test | The tenant name used to perform the test. You need to ensure that this tenant has been created. |
| --database | No | string | test | The database for performing the test. |
| --obclient-bin | No | string | obclient | The path of the OBClient binary file. |
| --sysbench-bin | No | string | sysbench | The path of the Sysbench binary file. |
...
...
@@ -83,15 +85,17 @@ This table describes the corresponding options.
| --tables | No | int | 30 | The number of tables to be initialized. |
| --table-size | No | int | 20000 | The data size of each table to be initialized. |
| --threads | No | int | 16 | The number of threads to be started. |
| --time | No | int | 60 | The test execution time, in seconds. When this option is set to 0, the running duration is not limited. |
| --interval | No | int | 10 | The logging interval, in seconds. When this option is set to `0`, intermediate reports are disabled. |
| --mysql-ignore-errors | No | String | 1062 | The error codes to be ignored. Separate multiple error codes with commas (`,`). The value `all` indicates that all errors are ignored. |
| --events | No | int | 0 | The maximum number of requests. If this option is specified, the --time option is not needed. |
| --rand-type | No | string | Empty | The random number generation function used for data access. Valid values: special, uniform, gaussian, and pareto. Default value: special. In earlier versions, the default value was uniform. |
| --skip-trx | No | string | Empty | Specifies whether to enable or disable transactions in a read-only test. |
| --percentile | No | int | Empty | Percentile to calculate in latency statistics. Value range: [1,100]. 0 means to disable percentile calculations. |
| -S/--skip-cluster-status-check | No | bool | false | Skip cluster status check when the option is true. |
| -O/--optimization | No | int | 1 | The degree of auto-tuning. Valid values: `0`, `1`, and `2`. `0` indicates that auto-tuning is disabled. `1` indicates that the auto-tuning parameters that take effect without a cluster restart are modified. `2` indicates that all auto-tuning parameters are modified. If necessary, the cluster is restarted to make all parameters take effect. |
## obd test tpch
This section describes how to run the TPC-H test on the specified node of an OceanBase cluster or ODP. To run the TPC-H test, you must install OBClient and obtpch.
TPC-H needs to specify an OceanBase target server as the execution target. Before executing the TPC-H test, OBD will transfer the data files required for the test to the specified directory of the specified machine. Please make sure that you have enough disk space on this machine because these files may be relatively large.
...
...
```shell
obd test tpch <deploy name> [flags]
```
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| --component | No | string | Empty | The name of the component to be tested. Valid values: `oceanbase-ce`, `oceanbase`, `obproxy-ce` and `obproxy`. If you do not specify a value, the existence of `obproxy`, `obproxy-ce`, `oceanbase`, `oceanbase-ce` is checked sequentially. The traversal stops when a component is found, and the component is then tested. |
| --test-server | No | string | The first node of the specified component. | The machine to be tested, specified by the `name` value corresponding to `servers` in the `yaml` file. If the `name` value is not configured after `servers`, the `ip` value is used. It must be the name of a node of the specified component. |
| --user | No | string | root | The username for running the test. |
| --password | No | string | Empty | The password for running the test. |
| -t/--tenant | No | string | test | The tenant name used to perform the test. You need to ensure that this tenant has been created. |
| --database | No | string | test | The database for performing the test. |
| --obclient-bin | No | string | obclient | The path of the OBClient binary file. |
| --dbgen-bin | No | string | /usr/tpc-h-tools/tpc-h-tools/bin/dbgen | The path of the dbgen binary file. |
| --dss-config | No | string | /usr/tpc-h-tools/tpc-h-tools/ | The directory that stores the dists.dss files. |
| -s/--scale-factor | No | int | 1 | The scale of the automatically generated test data, in GB. |
| --tmp-dir | No | string | ./tmp | The temporary directory used when the TPC-H test is executed. The automatically generated test data, the auto-tuned SQL files, the log files for executing the test SQL, and so on are stored here. |
| --ddl-path | No | string | Empty | The path or directory of the ddl file. If it is empty, OBD uses its built-in ddl file. |
| --tbl-path | No | string | Empty | The path or directory of the tbl file. If it is empty, dbgen is used to generate the test data. |
| --sql-path | No | string | Empty | The path or directory of the sql file. If it is empty, OBD uses its built-in sql file. |
| --remote-tbl-dir | No | string | Empty | The absolute path of the directory where the tbl files are stored on the target observer. Make sure that the user who starts the server has read and write permissions on this directory. This option is required when `--test-only` is not enabled. |
| --test-only | No | bool | false | When you enable this option, initialization is skipped and only the test SQL is executed. |
| --dt/--disable-transfer | No | bool | false | Disables the transfer. When you enable this option, OBD does not transfer the local tbl files to the remote `remote-tbl-dir`, and directly uses the tbl files in `remote-tbl-dir` on the target machine. |
| -S/--skip-cluster-status-check | No | bool | false | Skip cluster status check when the option is true. |
| -O/--optimization | No | int | 1 | The degree of auto-tuning. Valid values: `0`, `1`, and `2`. `0` indicates that auto-tuning is disabled. `1` indicates that the auto-tuning parameters that take effect without a cluster restart are modified. `2` indicates that all auto-tuning parameters are modified. If necessary, the cluster is restarted to make all parameters take effect. |
## obd test tpcc
...
...
The `deploy name` parameter specifies the name of the deployed cluster. You can consider it as an alias for the configuration file.
The following table describes details about the available options.
| Option | Required | Data type | Default value | Description |
|----|-----|-----|----|----|
| --component | No | string | Empty | The name of the component to be tested. Valid values: `oceanbase-ce`, `oceanbase`, `obproxy-ce` and `obproxy`. If you do not specify a value, the existence of `obproxy`, `obproxy-ce`, `oceanbase`, `oceanbase-ce` is checked sequentially. The traversal stops when a component is found, and the component is then tested. |
| --test-server | No | string | The name of the first node under the specified component. | The machine to be tested, specified by the `name` value corresponding to `servers` in the `yaml` file. If the `name` value is not configured after `servers`, the `ip` value is used. It must be the name of a node under the specified component. |
| --user | No | string | root | The username used to perform the test. |
| --password | No | string | Empty | The user password used to perform the test. |
| -t/--tenant | No | string | test | The tenant name used to perform the test. You need to ensure that this tenant has been created. |
| --database | No | string | test | The database where the test is to be performed. |
| --obclient-bin | No | string | obclient | The path to the directory where the binary files of OBClient are stored. |
| --java-bin | No | string | java | The path to the directory where the Java binary files are stored. |
...
...
| --bmsql-jar | No | string | Empty | The path to the directory where the JAR file of BenchmarkSQL is stored. If you do not specify the path, and the BenchmarkSQL directory is not specified, the default installation directory generated by obtpcc is used. If the BenchmarkSQL directory is specified, the JAR file in the `<bmsql-dir>/dist` directory is used. |
| --bmsql-libs | No | string | Empty | If the BenchmarkSQL directory is specified, the JAR files in the `<bmsql-dir>/lib` and `<bmsql-dir>/lib/oceanbase` directories are used. If you use obtpcc, this option is not required. |
| --bmsql-sql-dir | No | string | Empty | The path to the directory where the SQL files for the TPC-C test are stored. If you do not specify the path, OceanBase Deployer (OBD) uses the SQL files that are automatically generated. |
| --warehouses | No | int | 10 | The number of warehouses for the TPC-C test data set. If you do not specify a value, the assigned value is 20 times the number of CPU cores allocated to the OceanBase cluster. |
| --load-workers | No | int | Empty | The number of concurrent worker threads for building the test data set. If you do not specify a value, the number of CPU cores per server or the size of tenant memory (GB)/2, whichever is smaller, is used. |
| --terminals | No | int | Empty | The number of virtual terminals to be used for the TPC-C test. If you do not specify a value, the number of CPU cores for the OceanBase cluster × 15 or the number of warehouses × 10, whichever is smaller, is used. |
| --run-mins | No | int | 10 | The amount of time allocated for the execution of the TPC-C test. |
| --test-only | No | bool | false | Specifies that the test is performed without data construction. |
| -S/--skip-cluster-status-check | No | bool | false | Skip cluster status check when the option is true. |
| -O/--optimization | No | int | 1 | The degree of auto-tuning. Valid values: `0`, `1`, and `2`. `0` indicates that auto-tuning is disabled. `1` indicates that the auto-tuning parameters that take effect without a cluster restart are modified. `2` indicates that all auto-tuning parameters are modified. If necessary, the cluster is restarted to make all parameters take effect. |
OceanBase Deployer (OBD) provides a series of tool commands, including general commands that deliver a better experience for developers.
You can use the `-h/--help` option to view the help information of subcommands. Similarly, you can use the `-v/--verbose` option to view the detailed execution process when a subcommand reports an error.
## obd devmode enable
You can run this command to enable the developer mode, which is a prerequisite for using other tool commands. After you enable the developer mode, OBD will downgrade the level of some exceptions and ignore some parameter exceptions. If you are not a kernel developer, use this command with caution.
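A minimal invocation sketch:

```shell
obd devmode enable
```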
This topic describes how to use OceanBase Cloud Platform (OCP) to take over a cluster deployed by OceanBase Deployer (OBD). The cluster named test, which is started by using the distributed-example.yaml configuration file, is used as an example.
## Prerequisites
- The OBD version is V1.3.0 or later.
- The OCP version is V3.1.1-ce or later.
## Modify the OceanBase cluster
### Check whether takeover conditions are met
Before using OCP to take over an OceanBase cluster deployed by OBD, run the following command to check whether takeover conditions are met. If the conditions are not met, modify the cluster based on prompts as follows:
```shell
obd cluster check4ocp <deploy-name>
# Example
obd cluster check4ocp test
```
For information about the `obd cluster check4ocp` command, see [obd cluster check4ocp](3.obd-command/1.cluster-command-groups.md).
### Configure IDC information
The configuration file of default style does not support the configuration of Internet Data Center (IDC) information. You need to use the new feature of OBD V1.3.0 to change the style of the configuration file to the cluster style.
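For example, assuming the `obd cluster chst` command of OBD V1.3.0 and later is used to change the configuration style, a sketch looks as follows:

```shell
obd cluster chst <deploy name> --style cluster
# Example
obd cluster chst test --style cluster
```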
Run the following command for the modification to take effect:
```shell
obd cluster reload <deploy name>
# Example
obd cluster reload test
```
For information about the `obd cluster reload` command, see [obd cluster reload](3.obd-command/1.cluster-command-groups.md).
### Configure the password
To use OCP to take over a cluster, you need to configure the password for the root user to connect to the cluster under the SYS tenant. Run the following command to enter the edit mode and use `root_password` to configure the password.
```shell
obd cluster edit-config <deploy name>
# Example
obd cluster edit-config test
```
Sample configuration file:
```yaml
## Only need to configure when remote login is required
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
oceanbase-ce:
  servers:
    - name: server1
      # Please don't use hostname, only IP can be supported
      ip: xxx.xxx.xxx.xxx
    - name: server2
      ip: xxx.xxx.xxx.xxx
    - name: server3
      ip: xxx.xxx.xxx.xxx
  global:
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /root/observer
    # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    mysql_port: 2881
    # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    rpc_port: 2882
    # The maximum running memory for an observer. When ignored, autodeploy calculates this value based on the current server available resource.
    memory_limit: 64G
    # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G. Autodeploy calculates this value based on the current server available resource.
    system_memory: 30G
    # Password for root. The default value is empty.
    root_password: xxxxxx
    # Password for proxyro. proxyro_password must be the same as observer_sys_password. The default value is empty.
    # proxyro_password:
  server1:
    zone: zone1
  server2:
    zone: zone2
  server3:
    zone: zone3
```
The preceding shows a sample configuration file of the default style. For a configuration file of the cluster style, see the configuration example in **Configure IDC information**.
Run the following command for the modification to take effect:
```shell
obd cluster reload <deploy name>
# Example
obd cluster reload test
```
### Configure the user
OCP requires the process to be started by the admin user with the passwordless sudo permission. Therefore, you need to prepare an admin user as required. If this condition is already met, go to **Change the user**.
#### Create a user
On a server where OBServer is deployed, you can create the admin user as the root user.
```shell
# Create a user group
groupadd admin
# Create the admin user
useradd admin -g admin
```
Then, configure passwordless SSH logon for the admin user. For information about how to configure passwordless SSH logon, see [Use SSH to log on without a password](https://en.oceanbase.com/docs/community-observer-en-10000000000209361).
> **Note**
>
> 1. You need to configure passwordless SSH logon for the admin user.
>
> 2. The private key to be configured here is `id_rsa`.
#### Grant the passwordless sudo permission to the admin user
Perform the following operations as the root user:
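The following sketch shows a typical way to grant the permission; the exact sudoers policy line is an assumption and should be adapted to your security requirements:
```shell
# Temporarily make /etc/sudoers writable
chmod u+w /etc/sudoers
# Grant the admin user the passwordless sudo permission
echo 'admin ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
# Restore the permissions of /etc/sudoers
chmod u-w /etc/sudoers
```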
#### Change the user
Run the `obd cluster edit-config <deploy name>` command and set the user in the configuration file to the admin user. Then run the following command for the modification to take effect:
```shell
obd cluster restart <deploy name>
# Example
obd cluster restart test --wp
```
For information about the `obd cluster restart` command, see [obd cluster restart](3.obd-command/1.cluster-command-groups.md).
### Multiple OBServers on a single server
OCP requires that one server have only one OBServer installed; the scenario with multiple OBServers running on a single server is currently not supported. To use OCP to take over a cluster with multiple OBServers running on a single server, keep only one OBServer running and stop the other OBServers.
> **Note**
>
> After all the preceding operations are completed, you can run the `obd cluster check4ocp <deploy name>` command again to check whether takeover conditions are met. If not, you can make modifications based on prompts.
## Use OCP to take over the cluster
### Change the password of the proxyro user
Before using OCP to take over the cluster, check whether the password of the proxyro user in the cluster is the default value. If not, change the password of the proxyro user in OCP to that of the proxyro user in the cluster.
You can call an OCP API to change the password.
```bash
curl --user user:pass -X POST "http://ocp-site-url:port/api/v2/obproxy/password" -H "Content-Type:application/json" -d '{"username":"proxyro","password":"*****"}'
```
Note:

- `user:pass` represents the username and password of OCP. The caller must have admin permissions.
- The `password` after `-d` represents the password of the proxyro user in the cluster to be taken over.
This operation produces an O&M task to change the password of the proxyro user in the existing OceanBase cluster in OCP, as well as the corresponding configuration of the OBProxy cluster.
You can proceed with the subsequent steps only after this O&M task succeeds. If the task fails, retry it until it succeeds before you continue.
### Use OCP to take over the OceanBase cluster
You can directly take over the OceanBase cluster on the GUI of OCP. For detailed steps, see [Take over a cluster](https://en.oceanbase.com/docs/community-ocp-en-10000000000779629).
After using OCP to take over the OceanBase cluster, you need to create an OBProxy cluster and associate it with the OceanBase cluster that has been taken over. For detailed steps, see [Create an OBProxy cluster](https://en.oceanbase.com/docs/community-ocp-en-10000000000779538).
If original OBProxies use a virtual IP address (VIP), add the OBProxies created in OCP to the VIP one by one, and then delete the original OBProxies from the VIP one by one.
### FAQ
1. Why do I need to change the password of the proxyro user in OCP?
Typically, an OBProxy managed in OCP is started by ConfigServer and can theoretically connect to multiple OceanBase clusters. However, the password of the proxyro user can be set only globally for OBProxies, so it is a global configuration in OCP. The password is used by OBProxies to query metadata, and changing it does not affect business tenants.
2. When I switch to a new OBProxy, can I reuse the original server?
If multiple OBProxies have been deployed and are accessed through a VIP, you can remove them from the VIP one by one, deploy new OBProxies in OCP on the original servers, and then add the new OBProxies back to the VIP, thereby reusing the servers.
3. Can I choose not to switch to a new OBProxy?
Yes, you can. The original OBProxy can still properly connect to the OceanBase cluster that has been taken over. However, we recommend that you create a new OBProxy in OCP to facilitate subsequent O&M management.
# Add GUI-based monitoring for an existing cluster
OceanBase Deployer (OBD) supports the deployment of Prometheus and Grafana since V1.6.0. This topic describes how to add GUI-based monitoring for a deployed cluster.
This topic describes three scenarios. You can refer to the descriptions based on the actual conditions of your cluster.
> **Note**
>
> The configuration examples in this topic are for reference only. For more information about the detailed configurations, go to the `/usr/obd/example` directory and view the examples of different components.
## Scenario 1: OBAgent is not deployed in the cluster
To add GUI-based monitoring for a cluster in which OBAgent is not deployed, you must create a cluster and deploy OBAgent, Prometheus, and Grafana in the cluster.
OBAgent is separately configured for collecting monitoring information of OceanBase Database. It is declared in the configuration file that Prometheus depends on OBAgent and that Grafana depends on Prometheus.
Sample configuration file:
```yaml
obagent:
  servers:
    # Please don't use hostname, only IP can be supported
    - 192.168.1.2
    - 192.168.1.3
    - 192.168.1.4
  global:
    # The working directory for obagent. obagent is started under this directory. This is a required field.
    home_path: /root/obagent
    # The port that pulls and manages the metrics. The default port number is 8088.
    server_port: 8088
    # Debug port for pprof. The default port number is 8089.
    pprof_port: 8089
    # Log level. The default value is INFO.
    log_level: INFO
    # Log path. The default value is log/monagent.log.
    log_path: log/monagent.log
    # Encryption method. OBD supports aes and plain. The default value is plain.
    crypto_method: plain
    # Path to store the crypto key. The default value is conf/.config_secret.key.
    # crypto_path: conf/.config_secret.key
    # Size for a single log file. Log size is measured in megabytes. The default value is 30M.
    log_size: 30
    # Expiration time for logs. The default value is 7 days.
    log_expire_day: 7
    # The maximum number of log files. The default value is 10.
    log_file_count: 10
    # Whether to use local time for log files. The default value is true.
    # log_use_localtime: true
    # Whether to enable log compression. The default value is true.
    # log_compress: true
    # Username for HTTP authentication. The default value is admin.
    http_basic_auth_user: ******
    # Password for HTTP authentication. The default value is root.
    http_basic_auth_password: ******
    # Username for debug service. The default value is admin.
    pprof_basic_auth_user: ******
    # Password for debug service. The default value is root.
    pprof_basic_auth_password: ******
    # Monitor username for OceanBase Database. The user must have read access to OceanBase Database as a system tenant. The default value is root.
    monitor_user: root
    # Monitor password for OceanBase Database. The default value is empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the root_password in oceanbase-ce.
    monitor_password:
    # The SQL port for observer. The default value is 2881. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the mysql_port in oceanbase-ce.
    sql_port: 2881
    # The RPC port for observer. The default value is 2882. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the rpc_port in oceanbase-ce.
    rpc_port: 2882
    # Cluster name for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the appname in oceanbase-ce.
    cluster_name: obcluster
    # Cluster ID for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the cluster_id in oceanbase-ce.
    cluster_id: 1
    # Monitor status for OceanBase Database. Active is to enable. Inactive is to disable. The default value is active. When you deploy a cluster automatically, OBD decides whether to enable this parameter based on depends.
    ob_monitor_status: active
    # Monitor status for your host. Active is to enable. Inactive is to disable. The default value is active.
    host_monitor_status: active
    # Whether to disable the basic authentication for HTTP service. True is to disable. False is to enable. The default value is false.
    disable_http_basic_auth: false
    # Whether to disable the basic authentication for the debug interface. True is to disable. False is to enable. The default value is false.
    disable_pprof_basic_auth: false
    # Synchronize the obagent-related information to the specified path of the remote host, as the targets specified by `file_sd_config` in the Prometheus configuration.
    # For prometheus that depends on obagent, it can be specified to $home_path/targets of prometheus.
    # For independently deployed prometheus, specify the files to be collected by setting `config` -> `scrape_configs` -> `file_sd_configs` -> `files`. For details, please refer to prometheus-only-example.yaml.
  192.168.1.2:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone1
  192.168.1.3:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone2
  192.168.1.4:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone3
prometheus:
  depends:
    - obagent
  servers:
    - 192.168.1.5
  global:
    home_path: /root/prometheus
grafana:
  depends:
    - prometheus
  servers:
    - 192.168.1.5
  global:
    home_path: /root/grafana
    login_password: oceanbase
```
After you modify the configuration file, run the following command to deploy and start a new cluster:
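```shell
# A sketch of the standard OBD flow; the deploy name and configuration file name are placeholders.
obd cluster deploy <new deploy name> -c <config file>
obd cluster start <new deploy name>
```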
After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.
## Scenario 2: OBAgent is deployed in the cluster
To add GUI-based monitoring for a cluster in which OBAgent is deployed, you must create a cluster and deploy Prometheus and Grafana in the cluster.
In this scenario, it cannot be declared that Prometheus depends on OBAgent. Therefore, you must manually associate them: open the `conf/prometheus_config/prometheus.yaml` file in the installation directory of OBAgent in the existing cluster, and copy the corresponding configuration to the `config` parameter in the `global` section of the Prometheus settings. Sample configuration file:
```yaml
prometheus:
  servers:
    - 192.168.1.5
  global:
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /root/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s
      rule_files:
        - "rules/*rules.yaml"
      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
grafana:
  servers:
    - 192.168.1.5
  depends:
    - prometheus
  global:
    home_path: /root/grafana
    login_password: oceanbase # Grafana login password. The default value is 'oceanbase'.
```
> **Note**
>
> In the preceding sample configuration file, the username and password of `basic_auth` must be the same as those of `http_basic_auth_xxx` in the configuration file of OBAgent.
After you modify the configuration file, run the following command to deploy a new cluster:
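```shell
# Deploy the new cluster from the configuration file (names are placeholders).
obd cluster deploy <new deploy name> -c <config file>
```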
After the deployment is completed, copy the `conf/prometheus_config/rules` directory in the installation directory of OBAgent to the installation directory of Prometheus.
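For example, with the sample directories used above (illustrative paths; substitute your actual `home_path` values):
```shell
cp -r /root/obagent/conf/prometheus_config/rules /root/prometheus/
```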
Run the following command to start the new cluster:
```bash
obd cluster start <new deploy name>
```
After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.
> **Notice**
>
> 1. In the scrape job for Prometheus itself in `scrape_configs`, `localhost:9090` must be changed based on the current listening address of Prometheus, that is, the address and port of the server where Prometheus is deployed as specified in the Prometheus configurations. If authentication is enabled for Prometheus, `basic_auth` must also be specified.
>
> 2. If the OBAgent nodes of the existing cluster change, you must run the `obd cluster edit-config <new deploy name>` command to synchronize the changes from the `conf/prometheus_config/prometheus.yaml` file in the installation directory of OBAgent.
## Scenario 3: Monitor multiple clusters or dynamically synchronize OBAgent changes
To enable Prometheus to collect the monitoring information of multiple clusters or to dynamically synchronize OBAgent changes, you can make a few changes on the basis of scenario 2.
Specifically, replace `static_configs` in the Prometheus configurations with `file_sd_configs` so that Prometheus obtains and synchronizes the information about the OBAgent nodes from files. In the following example, all `.yaml` files in the `targets` directory under the installation directory (`home_path`) of Prometheus are collected.
> **Note**
>
> The `targets` directory will be created in the installation directory of Prometheus only if related parameters are configured for OBAgent in the configuration file of the existing cluster. For more information, see [Modify the configurations of a monitored cluster](#Modify%20the%20configurations%20of%20a%20monitored%20cluster).
Sample configuration file:
```yaml
prometheus:
  servers:
    - 192.168.1.5
  global:
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /root/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s
      rule_files:
        - "rules/*rules.yaml"
      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
grafana:
  servers:
    - 192.168.1.5
  depends:
    - prometheus
  global:
    home_path: /root/grafana
    login_password: oceanbase # Grafana login password. The default value is 'oceanbase'.
```
> **Note**
>
> In the preceding sample configuration file, the username and password of `basic_auth` must be the same as those of `http_basic_auth_xxx` in the configuration file of OBAgent.
After you modify the configuration file, run the following command to deploy a new cluster:
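```shell
# Deploy the new cluster from the configuration file (names are placeholders).
obd cluster deploy <new deploy name> -c <config file>
```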
After the deployment is completed, copy the `conf/prometheus_config/rules` directory in the installation directory of OBAgent to the installation directory of Prometheus.
Run the following command to start the new cluster:
```bash
obd cluster start <new deploy name>
```
After you deploy the new cluster, go to the Grafana page as prompted. At this time, you cannot view the monitoring information of monitored clusters. You must modify the OBAgent configurations of the monitored clusters.
### Modify the configurations of a monitored cluster
To create the `targets` directory in the installation directory of Prometheus, run the `obd cluster edit-config <deploy name>` command and add the `target_sync_configs` parameter to the configuration file of the monitored cluster, pointing it to the `targets` directory in the installation directory of Prometheus. By default, the user settings of the current cluster are used for the synchronization. If the user settings on the server where Prometheus is installed differ from those in the configuration file of the current cluster, configure them explicitly, as in the following sketch (the `host`, `target_dir`, and user fields are illustrative; see the example files in `/usr/obd/example` for the exact keys):
```yaml
obagent:
  servers:
    # Please don't use hostname, only IP can be supported
    - 192.168.1.2
    - 192.168.1.3
    - 192.168.1.4
  global:
    # Synchronize the OBAgent targets to the targets directory under the home_path of Prometheus.
    target_sync_configs:
      - host: 192.168.1.5
        target_dir: /root/prometheus/targets
        # User settings for the server where Prometheus is installed. Configure them
        # only if they differ from the user settings of the current cluster.
        # username: admin
        # password: ******
```
After you modify the configuration file, restart the cluster as prompted. Then, go to the Grafana page and view the monitoring information of the existing cluster.
> **Notice**
>
> 1. In the scrape job for Prometheus itself in `scrape_configs`, `localhost:9090` must be changed based on the current listening address of Prometheus, that is, the address and port of the server where Prometheus is deployed as specified in the Prometheus configurations. If authentication is enabled for Prometheus, `basic_auth` must also be specified.
>
> 2. The HTTP usernames and passwords that Prometheus uses to collect from the OBAgents must be consistent across all OBAgents. If they are inconsistent, split the collection into separate scrape jobs.