After you deploy OceanBase Deployer (OBD), you can run the `obd demo` command to quickly deploy and start OceanBase Database on a single server. Before you run this command, make sure that the following prerequisites are met:
- At least 54 GB of disk space is available on the server.
- Your server can be connected to the network, or there are installation packages required for deployment.
> **Note**
>
> If the foregoing prerequisites are not met, see [Use OBD to start an OceanBase cluster](../3.user-guide/2.start-the-oceanbase-cluster-by-using-obd.md).
For more information about the relevant configuration items in the configuration file, refer to [Configuration file description](../../4.configuration-file-description.md).
> **Notice**
>
> This command supports only level-1 configurations under global that are specified by using options.
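For example, the following invocation is a minimal sketch of passing level-1 global configurations as options. The `--<component>.<key>=<value>` option form and the sample values are assumptions for illustration; verify them with `obd demo -h`.

```shell
obd demo -c oceanbase-ce --oceanbase-ce.mysql_port=3881 --oceanbase-ce.home_path=/data/demo
```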
OBD provides multi-level commands. You can use the `-h/--help` option to view the help information of sub-commands. Similarly, if the execution of a sub-command reports an error, you can use the `-v/--verbose` option to view its detailed execution process.
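For example, you can run the following commands to inspect the sub-commands of the mirror command group:

```shell
obd mirror -h
obd mirror clone -h
```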
## obd mirror clone
Copies an RPM package to the local mirror repository. Then, you can run the corresponding `obd cluster` command to start it.
```shell
obd mirror clone <path> [-f]
```
The `-f` option is `--force`. It is optional and disabled by default. If it is enabled and a mirror of the same name exists in the repository, the copied mirror forcibly overwrites the existing one.
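For example, the following command forcibly adds a local RPM package to the mirror repository. The package path is hypothetical; replace it with your own file:

```shell
obd mirror clone /data/oceanbase-ce-3.1.0-1.el7.x86_64.rpm -f
```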
## obd mirror create
Creates a mirror based on a local directory. When you use OBD to start open-source OceanBase software that you compiled yourself, you can run this command to add the compilation output to the local repository. Then, you can run the corresponding `obd cluster` command to start it.
...
For example, you can [compile an OceanBase cluster based on the source code](https://en.oceanbase.com/docs/community-observer-en-10000000000209369). Then, you can run the `make DESTDIR=./ install && obd mirror create -n oceanbase-ce -V 3.1.0 -p ./usr/local` command to add the compilation output to the local repository of OBD.
This table describes the corresponding options.
| Option | Required | Data type | Description |
|----|-----|-----|----|
| -n/--name | Yes | string | The component name. If you want to compile an OceanBase cluster, set this option to oceanbase-ce. If you want to compile ODP, set this option to obproxy-ce. |
| -p/--path | Yes | string | The directory that stores the compilation output. OBD automatically retrieves the files required by the component from this directory. |
| -t/--tag | No | string | The mirror tags. You can define one or more tags for the created mirror. Separate multiple tags with commas (,). |
| -f/--force | No | bool | Specifies whether to forcibly overwrite an existing mirror or tag. This option is disabled by default. |
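For illustration, the following command adds the compilation output and attaches tags while creating the mirror. The tag names `dev,test` are hypothetical:

```shell
obd mirror create -n oceanbase-ce -V 3.1.0 -p ./usr/local -t dev,test -f
```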
## obd mirror list
Shows the mirror repository or mirror list.
```shell
obd mirror list [mirror repo name]
```
`mirror repo name` specifies the name of the mirror repository and is optional. If it is not specified, all mirror repositories are listed. If it is specified, only the specified mirror repository is returned.
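For example, assuming a repository named `local`, as shown in the output of the first command:

```shell
# List all mirror repositories
obd mirror list
# Show the packages in the local repository
obd mirror list local
```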
## obd mirror update
Synchronizes the information of all remote mirror repositories.
```shell
obd mirror update
```
## obd mirror disable
Disables remote mirror repositories. To disable all remote mirror repositories, run the `obd mirror disable remote` command, as shown in the following example.
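```shell
obd mirror disable remote
```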
# Tool commands

OceanBase Deployer (OBD) provides a series of tool commands, including general commands that deliver a better experience for developers.
You can use the `-h/--help` option to view the help information of sub-commands. Similarly, if the execution of a sub-command reports an error, you can use the `-v/--verbose` option to view its detailed execution process.
## obd devmode enable
You can run this command to enable the developer mode, which is a prerequisite for using other tool commands. After you enable the developer mode, OBD will downgrade the level of some exceptions and ignore some parameter exceptions. If you are not a kernel developer, use this command with caution.
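Usage:

```shell
obd devmode enable
```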
# Use OCP to take over a cluster deployed by OBD

This topic describes how to use OceanBase Cloud Platform (OCP) to take over a cluster deployed by OceanBase Deployer (OBD). A cluster named test, which is started by using the `distributed-example.yaml` configuration file, is used as an example.
## Prerequisites
- The OBD version is V1.3.0 or later.
- The OCP version is V3.1.1-ce or later.
## Modify the OceanBase cluster
### Check whether takeover conditions are met
Before you use OCP to take over an OceanBase cluster deployed by OBD, run the following command to check whether the takeover conditions are met. If they are not met, modify the cluster as prompted and as described in the following sections:
```shell
obd cluster check4ocp <deploy-name>
# Example
obd cluster check4ocp test
```
For information about the `obd cluster check4ocp` command, see [obd cluster check4ocp](3.obd-command/1.cluster-command-groups.md).
### Configure IDC information
The default-style configuration file does not support the configuration of Internet Data Center (IDC) information. You need to use a feature introduced in OBD V1.3.0 to change the configuration file to the cluster style, as sketched below.
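The following is a minimal sketch of the change. The `obd cluster chst` sub-command and the cluster-style schema shown here are assumptions based on the OBD V1.3.0 feature set; verify them with `obd cluster chst -h` and the configuration examples shipped with OBD.

```shell
# Change the configuration style of the deployment to "cluster" (assumed syntax)
obd cluster chst test --style cluster
# Then edit the configuration to add IDC information per zone
obd cluster edit-config test
```

```yaml
# Excerpt of a cluster-style configuration with IDC information (assumed schema)
oceanbase-ce:
  style: cluster
  zones:
    zone1:
      idc: idc1        # the IDC where the servers of zone1 reside
      servers:
        - name: server1
          ip: xxx.xxx.xxx.xxx
```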
Run the following command for the modification to take effect:
```shell
obd cluster reload <deploy name>
# Example
obd cluster reload test
```
For information about the `obd cluster reload` command, see [obd cluster reload](3.obd-command/1.cluster-command-groups.md).
### Configure the password
To use OCP to take over a cluster, you need to configure the password for the root user to connect to the cluster under the SYS tenant. Run the following command to enter the edit mode and use `root_password` to configure the password.
```shell
obd cluster edit-config <deploy name>
# Example
obd cluster edit-config test
```
Sample configuration file:
```yaml
## Only need to configure when remote login is required
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
oceanbase-ce:
  servers:
    - name: server1
      # Please don't use hostname, only IP can be supported
      ip: xxx.xxx.xxx.xxx
    - name: server2
      ip: xxx.xxx.xxx.xxx
    - name: server3
      ip: xxx.xxx.xxx.xxx
  global:
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /root/observer
    # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    mysql_port: 2881
    # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    rpc_port: 2882
    # The maximum running memory for an observer. When ignored, autodeploy calculates this value based on the current server available resource.
    memory_limit: 64G
    # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G. Autodeploy calculates this value based on the current server available resource.
    system_memory: 30G
    # Password for root. The default value is empty.
    root_password: xxxxxx
    # Password for proxyro. proxyro_password must be the same as observer_sys_password. The default value is empty.
    # proxyro_password:
  server1:
    zone: zone1
  server2:
    zone: zone2
  server3:
    zone: zone3
```
The preceding shows a sample configuration file of the default style. For a configuration file of the cluster style, see the configuration example in **Configure IDC information**.
Run the following command for the modification to take effect:
```shell
obd cluster reload <deploy name>
# Example
obd cluster reload test
```
### Configure the user
OCP requires the process to be started by the admin user with the passwordless sudo permission. Therefore, you need to prepare an admin user as required. If this condition is already met, go to **Change the user**.
#### Create a user
On each server where an OBServer is deployed, you can create the admin user as the root user.
```shell
# Create a user group
groupadd admin
# Create the admin user
useradd admin -g admin
```
Then, configure passwordless SSH logon for the admin user. For information about how to configure passwordless SSH logon, see [Use SSH to log on without a password](https://en.oceanbase.com/docs/community-observer-en-10000000000209361).
> **Note**
>
> 1. You need to configure passwordless SSH logon for the admin user.
>
> 2. A private key needs to be configured here, that is, `id_rsa`.
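A minimal sketch of the passwordless logon setup, assuming the default `~/.ssh/id_rsa` key pair and a placeholder server address:

```shell
# Run as the user that OBD uses to log on to the servers
ssh-keygen -t rsa                  # press Enter to accept the default ~/.ssh/id_rsa
ssh-copy-id admin@xxx.xxx.xxx.xxx  # copy the public key to the admin user on each server
```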
#### Grant the passwordless sudo permission to the admin user
Perform the following operations as the root user:
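A minimal sketch of the grant, assuming a sudoers-based setup; adjust it to your security policy:

```shell
# Allow the admin user to run sudo without a password
chmod u+w /etc/sudoers
echo 'admin ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
chmod u-w /etc/sudoers
```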
Run the following command for the modification to take effect:
```shell
obd cluster restart <deploy name> --wp
# Example
obd cluster restart test --wp
```
For information about the `obd cluster restart` command, see [obd cluster restart](3.obd-command/1.cluster-command-groups.md).
### Multiple OBServers on a single server
OCP requires that each server have only one OBServer installed. At present, the scenario with multiple OBServers running on a single server is not supported. To use OCP to take over a cluster with multiple OBServers running on a single server, you need to keep only one OBServer running and stop the others.
> **Note**
>
> After all the preceding operations are completed, you can run the `obd cluster check4ocp <deploy name>` command again to check whether takeover conditions are met. If not, you can make modifications based on prompts.
## Use OCP to take over the cluster
### Change the password of the proxyro user
Before using OCP to take over the cluster, check whether the password of the proxyro user in the cluster is the default value. If not, change the password of the proxyro user in OCP to that of the proxyro user in the cluster.
You can call an OCP API to change the password.
```bash
curl --user user:pass -X POST "http://ocp-site-url:port/api/v2/obproxy/password" -H "Content-Type: application/json" -d '{"username":"proxyro","password":"*****"}'
```
Note:

- `user:pass` represents the username and password of OCP. The caller must have admin permissions.
- The `password` field after `-d` represents the password of the proxyro user in the cluster to be taken over.
This operation produces an O&M task to change the password of the proxyro user in the existing OceanBase cluster in OCP, as well as the corresponding configuration of the OBProxy cluster.
You can proceed with the subsequent steps only after the O&M task succeeds. If the task fails, retry it until it succeeds before you perform the subsequent steps.
### Use OCP to take over the OceanBase cluster
You can directly take over the OceanBase cluster on the GUI of OCP. For detailed steps, see [Take over a cluster](https://en.oceanbase.com/docs/community-ocp-en-10000000000779629).
After using OCP to take over the OceanBase cluster, you need to create an OBProxy cluster and associate it with the OceanBase cluster that has been taken over. For detailed steps, see [Create an OBProxy cluster](https://en.oceanbase.com/docs/community-ocp-en-10000000000779538).
If original OBProxies use a virtual IP address (VIP), add the OBProxies created in OCP to the VIP one by one, and then delete the original OBProxies from the VIP one by one.
### FAQ
1. Why do I need to change the password of the proxyro user in OCP?
Typically, an OBProxy managed in OCP is started by ConfigServer and can theoretically connect to multiple OceanBase clusters. However, the password of the proxyro user can be changed only globally for OBProxies. This password is a global configuration in OCP. It is used by OBProxies to query metadata, and changing it does not affect business tenants.
2. When I switch to a new OBProxy, can I reuse the original server?
If multiple OBProxies have been deployed and are accessed through a VIP, you can delete them from the VIP one by one, deploy new OBProxies in OCP by using the original servers, and add the new OBProxies back to the VIP, thereby reusing the servers.
3. Can I choose not to switch to a new OBProxy?
Yes, you can. The original OBProxy can still properly connect to the OceanBase cluster that has been taken over. However, we recommend that you create a new OBProxy in OCP to facilitate subsequent O&M management.
# Add GUI-based monitoring for an existing cluster
OceanBase Deployer (OBD) has supported the deployment of Prometheus and Grafana since V1.6.0. This topic describes how to add GUI-based monitoring for a deployed cluster.
This topic describes three scenarios. You can refer to the descriptions based on the actual conditions of your cluster.
> **Note**
>
> The configuration examples in this topic are for reference only. For more information about the detailed configurations, go to the `/usr/obd/example` directory and view the examples of different components.
## Scenario 1: OBAgent is not deployed in the cluster
To add GUI-based monitoring for a cluster in which OBAgent is not deployed, you must create a cluster and deploy OBAgent, Prometheus, and Grafana in the cluster.
OBAgent is separately configured for collecting monitoring information of OceanBase Database. It is declared in the configuration file that Prometheus depends on OBAgent and that Grafana depends on Prometheus.
Sample configuration file:

```yaml
obagent:
  servers:
    # Please don't use hostname, only IP can be supported
    - 192.168.1.2
    - 192.168.1.3
    - 192.168.1.4
  global:
    # The working directory for obagent. obagent is started under this directory. This is a required field.
    home_path: /root/obagent
    # The port that pulls and manages the metrics. The default port number is 8088.
    server_port: 8088
    # Debug port for pprof. The default port number is 8089.
    pprof_port: 8089
    # Log level. The default value is INFO.
    log_level: INFO
    # Log path. The default value is log/monagent.log.
    log_path: log/monagent.log
    # Encryption method. OBD supports aes and plain. The default value is plain.
    crypto_method: plain
    # Path to store the crypto key. The default value is conf/.config_secret.key.
    # crypto_path: conf/.config_secret.key
    # Size for a single log file. Log size is measured in megabytes. The default value is 30M.
    log_size: 30
    # Expiration time for logs. The default value is 7 days.
    log_expire_day: 7
    # The maximum number of log files. The default value is 10.
    log_file_count: 10
    # Whether to use local time for log files. The default value is true.
    # log_use_localtime: true
    # Whether to enable log compression. The default value is true.
    # log_compress: true
    # Username for HTTP authentication. The default value is admin.
    http_basic_auth_user: ******
    # Password for HTTP authentication. The default value is root.
    http_basic_auth_password: ******
    # Username for the debug service. The default value is admin.
    pprof_basic_auth_user: ******
    # Password for the debug service. The default value is root.
    pprof_basic_auth_password: ******
    # Monitor username for OceanBase Database. The user must have read access to OceanBase Database as a system tenant. The default value is root.
    monitor_user: root
    # Monitor password for OceanBase Database. The default value is empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the root_password in oceanbase-ce.
    monitor_password:
    # The SQL port for observer. The default value is 2881. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the mysql_port in oceanbase-ce.
    sql_port: 2881
    # The RPC port for observer. The default value is 2882. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the rpc_port in oceanbase-ce.
    rpc_port: 2882
    # Cluster name for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the appname in oceanbase-ce.
    cluster_name: obcluster
    # Cluster ID for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the cluster_id in oceanbase-ce.
    cluster_id: 1
    # Monitor status for OceanBase Database. Active is to enable. Inactive is to disable. The default value is active. When you deploy a cluster automatically, OBD decides whether to enable this parameter based on depends.
    ob_monitor_status: active
    # Monitor status for your host. Active is to enable. Inactive is to disable. The default value is active.
    host_monitor_status: active
    # Whether to disable the basic authentication for the HTTP service. True is to disable. False is to enable. The default value is false.
    disable_http_basic_auth: false
    # Whether to disable the basic authentication for the debug interface. True is to disable. False is to enable. The default value is false.
    disable_pprof_basic_auth: false
    # Synchronize the obagent-related information to the specified path on the remote host, as the targets specified by `file_sd_config` in the Prometheus configuration.
    # For a prometheus that depends on obagent, it can be set to $home_path/targets of prometheus.
    # For an independently deployed prometheus, specify the files to be collected by setting `config` -> `scrape_configs` -> `file_sd_configs` -> `files`. For details, refer to prometheus-only-example.yaml.
  192.168.1.2:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone1
  192.168.1.3:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone2
  192.168.1.4:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone3
prometheus:
  depends:
    - obagent
  servers:
    - 192.168.1.5
  global:
    home_path: /root/prometheus
grafana:
  depends:
    - prometheus
  servers:
    - 192.168.1.5
  global:
    home_path: /root/grafana
    login_password: oceanbase
```
After you modify the configuration file, run the following command to deploy and start a new cluster:
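For example, assuming the configuration file is saved as `monitor.yaml` (a hypothetical name):

```shell
obd cluster deploy <new deploy name> -c monitor.yaml
obd cluster start <new deploy name>
```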
After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.
## Scenario 2: OBAgent is deployed in the cluster
To add GUI-based monitoring for a cluster in which OBAgent is deployed, you must create a cluster and deploy Prometheus and Grafana in the cluster.
In this scenario, it cannot be declared that Prometheus depends on OBAgent. Therefore, you must manually associate them. Open the `conf/prometheus_config/prometheus.yaml` file in the installation directory of OBAgent in the existing cluster, and copy the corresponding configuration to the `config` parameter in the `global` section of the Prometheus settings. Sample configuration file:
```yaml
prometheus:
  servers:
    - 192.168.1.5
  global:
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /root/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s
      rule_files:
        - "rules/*rules.yaml"
      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
grafana:
  servers:
    - 192.168.1.5
  depends:
    - prometheus
  global:
    home_path: /root/grafana
    login_password: oceanbase # Grafana login password. The default value is 'oceanbase'.
```
> **Note**
>
> In the preceding sample configuration file, the username and password of `basic_auth` must be the same as those of `http_basic_auth_xxx` in the configuration file of OBAgent.
After you modify the configuration file, run the following command to deploy a new cluster:
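For example, assuming the configuration file is saved as `monitor.yaml` (a hypothetical name):

```shell
obd cluster deploy <new deploy name> -c monitor.yaml
```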
After the deployment is completed, copy the `conf/prometheus_config/rules` directory in the installation directory of OBAgent to the installation directory of Prometheus.
Run the following command to start the new cluster:
```bash
obd cluster start <new deploy name>
```
After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.
> **Notice**
>
> 1. In the monitoring metrics of Prometheus in `scrape_configs`, `localhost:9090` must be modified based on the current listening address of Prometheus. If authentication is enabled for Prometheus, `basic_auth` must be specified. Here the listening address is the address of the server where Prometheus is deployed, namely, the address and port in the Prometheus configurations.
>
> 2. If the OBAgent node of the existing cluster changes, you must run the `obd cluster edit-config <new deploy name>` command to synchronize the content of the `conf/prometheus_config/prometheus.yaml` file in the installation directory of OBAgent.
## Scenario 3: Collect the monitoring information of multiple clusters

To enable Prometheus to collect the monitoring information of multiple clusters or dynamically synchronize OBAgent changes, you can make a few changes on the basis of scenario 2.
Specifically, replace `static_configs` in Prometheus configurations with `file_sd_config` to obtain and synchronize the information about the OBAgent node. In the following example, all `.yaml` files in the `targets` directory of the installation directory (`home_path`) of Prometheus are collected.
> **Note**
>
> The `targets` directory will be created in the installation directory of Prometheus only if related parameters are configured for OBAgent in the configuration file of the existing cluster. For more information, see [Modify the configurations of a monitored cluster](#modify-the-configurations-of-a-monitored-cluster).
Sample configuration file:

```yaml
prometheus:
  servers:
    - 192.168.1.5
  global:
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /root/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s
      rule_files:
        - "rules/*rules.yaml"
      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
grafana:
  servers:
    - 192.168.1.5
  depends:
    - prometheus
  global:
    home_path: /root/grafana
    login_password: oceanbase # Grafana login password. The default value is 'oceanbase'.
```
> **Note**
>
> In the preceding sample configuration file, the username and password of `basic_auth` must be the same as those of `http_basic_auth_xxx` in the configuration file of OBAgent.
After you modify the configuration file, run the following command to deploy a new cluster:
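For example, assuming the configuration file is saved as `monitor.yaml` (a hypothetical name):

```shell
obd cluster deploy <new deploy name> -c monitor.yaml
```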
After the deployment is completed, copy the `conf/prometheus_config/rules` directory in the installation directory of OBAgent to the installation directory of Prometheus.
Run the following command to start the new cluster:
```bash
obd cluster start <new deploy name>
```
After you deploy the new cluster, go to the Grafana page as prompted. At this time, you cannot view the monitoring information of monitored clusters. You must modify the OBAgent configurations of the monitored clusters.
### Modify the configurations of a monitored cluster
To create the `targets` directory in the installation directory of Prometheus, you must run the `obd cluster edit-config <deploy name>` command to modify the configuration file. Specifically, you must add the `target_sync_configs` parameter to the configuration file to point to the `targets` directory in the installation directory of Prometheus. By default, the user settings of the current cluster are used. If the user settings on the server where Prometheus is installed are inconsistent with the user settings in the configuration file of the current cluster, perform configuration based on the example.
```yaml
obagent:
  servers:
    # Please don't use hostname, only IP can be supported
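    - 192.168.1.2
    - 192.168.1.3
    - 192.168.1.4
  global:
    # The keys under target_sync_configs below are an assumed sketch based on the
    # description above; verify them against the examples in /usr/obd/example.
    target_sync_configs:
      - host: 192.168.1.5                     # the server where Prometheus is deployed
        target_dir: /root/prometheus/targets  # the targets directory under the home_path of Prometheus
        # Specify the following only when the user settings of the Prometheus server
        # differ from those in the configuration file of the current cluster.
        # username: your username
        # password: your password if need
        # key_file: your ssh-key file path if need
```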
After you modify the configuration file, restart the cluster as prompted. Then, go to the Grafana page and view the monitoring information of the existing cluster.
> **Notice**
>
> 1. In the monitoring metrics of Prometheus in `scrape_configs`, `localhost:9090` must be modified based on the current listening address of Prometheus. If authentication is enabled for Prometheus, `basic_auth` must be specified. Here the listening address is the address of the server where Prometheus is deployed, namely, the address and port in the Prometheus configurations.
>
> 2. The HTTP usernames and passwords that Prometheus uses to collect metrics must be consistent across all OBAgents. If they are inconsistent, split the collection into separate jobs.
# Configuration file description

```yaml
oceanbase-ce: # The name of the component that is configured as follows.
  # version: 3.1.3 # Specify the version of the component, which is usually not required.
  # package_hash: 589c4f8ed2662835148a95d5c1b46a07e36c2d346804791364a757aef4f7b60d # Specify the hash of the component, which is usually not required.
  # tag: dev # Specify the tag of the component, which is usually not required.
  servers: # The list of nodes.
    - name: z1 # The node name, which can be left blank. If it is left blank, the node name defaults to the IP address. The node name is z1 in this example.
...
...
      ip: 192.168.1.4
  global: # The global configuration. The identical configuration in the same component can be written here.
    # The node configuration is used if it has the same configuration item as the global configuration.
    # Please set devname to the name of the network adapter whose IP is in the setting of servers.
    # If servers is set to "127.0.0.1", set devname to "lo".
    # If the current IP is 192.168.1.10 and the name of the network adapter with this IP is "eth0", use "eth0".
    devname: eth0
...
...
    zone: zone3
obproxy-ce: # The name of the component that is configured as follows.
  # version: 3.2.3 # Specify the version of the component, which is usually not required.
  # package_hash: 73cccf4d05508de0950ad1164aec03003c4ddbe1415530e031ac8b6469815fea # Specify the hash of the component, which is usually not required.
  # tag: dev # Specify the tag of the component, which is usually not required.
```