Unverified commit db864e13, authored by Rongfeng Fu, committed by GitHub

Doc for V2 (#166)

* fix ocp-express example

* update V2.0.0 docs
Parent 885d9694
@@ -10,6 +10,8 @@ After you deploy OceanBase Deployer (OBD), you can run the `obd demo` command to ...
- At least 54 GB of disk space is available on the server.
- Your server has network access, or the installation packages required for deployment are available on the server.
> **Note**
>
> If the foregoing prerequisites are not met, see [Use OBD to start an OceanBase cluster](../3.user-guide/2.start-the-oceanbase-cluster-by-using-obd.md).
......
@@ -36,6 +36,8 @@ obd demo -c oceanbase-ce,obproxy-ce --obproxy-ce.home_path=/data/demo/ ...
obd demo --oceanbase-ce.mysql_port=3881
```
For more information about the relevant configuration items in the configuration file, refer to [Configuration file description](../../4.configuration-file-description.md).
> **Notice**
>
> This command supports only level-1 configurations under global that are specified by using options.
# Mirror and repository commands
OBD provides multiple-level commands. You can use the `-h/--help` option to view the help information of sub-commands. Similarly, when a sub-command reports an error, you can use `-v/--verbose` to view the detailed execution process of the command.
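For example, the following illustrative invocations show both options:
```shell
# View the help information of the mirror and repository sub-commands
obd mirror -h
# Re-run a failed sub-command with verbose output to inspect the execution process
obd mirror update -v
```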
## obd mirror clone
Copy an RPM package to the local mirror repository. You can run the corresponding OBD cluster command to start the mirror.
@@ -14,7 +14,7 @@ obd mirror clone <path> [-f]
The `-f` option is `--force`. `-f` is optional and disabled by default. If it is enabled and a mirror of the same name exists in the repository, the copied mirror forcibly overwrites the existing one.
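For example, a minimal sketch (the package path is a placeholder):
```shell
# Copy a local RPM package into the local mirror repository and
# forcibly overwrite a mirror of the same name if one exists
obd mirror clone /data/rpms/oceanbase-ce-3.1.0-1.el7.x86_64.rpm -f
```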
## obd mirror create
Creates a mirror based on a local directory. When you use OBD to start open-source OceanBase software that you compiled yourself, you can run this command to add the compilation output to the local repository. Then, you can run the corresponding `obd cluster` command to start the mirror.
@@ -22,19 +22,19 @@ Creates a mirror based on the local directory. ...
obd mirror create -n <component name> -p <your compile dir> -V <component version> [-t <tag>] [-f]
```
For example, you can [compile an OceanBase cluster based on the source code](https://en.oceanbase.com/docs/community-observer-en-10000000000209369). Then, you can run the `make DESTDIR=./ install && obd mirror create -n oceanbase-ce -V 3.1.0 -p ./usr/local` command to add the compilation output to the local repository of OBD.
This table describes the corresponding options.
| Option | Required | Data type | Description |
| --- | --- | --- | --- |
| -n/--name | Yes | string | The component name. If you want to compile an OceanBase cluster, set this option to oceanbase-ce. If you want to compile ODP, set this option to obproxy-ce. |
| -p/--path | Yes | string | The directory that stores the compilation output. OBD will automatically retrieve files required by the component from this directory. |
| -V/--version | Yes | string | The component version. |
| -t/--tag | No | string | The mirror tags. You can define one or more tags for the created mirror. Separate multiple tags with commas (,). |
| -f/--force | No | bool | Specifies whether to forcibly overwrite an existing mirror or tag. This option is disabled by default. |
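For example, the following sketch combines the options above (the version, path, and tags are placeholders):
```shell
# Add a locally compiled oceanbase-ce build to the local repository,
# attach two tags to it, and overwrite any existing mirror of the same name
obd mirror create -n oceanbase-ce -V 3.1.0 -p ./usr/local -t dev,test -f
```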
## obd mirror list
Shows the mirror repository or mirror list.
@@ -44,7 +44,7 @@ obd mirror list [mirror repo name]
`mirror repo name` specifies the mirror repository name. This parameter is optional. When it is not specified, all mirror repositories will be returned. When it is specified, only the specified mirror repository will be returned.
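For example:
```shell
# List all mirror repositories
obd mirror list
# List only the packages in the local repository
obd mirror list local
```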
## obd mirror update
Synchronizes the information of all remote mirror repositories.
@@ -52,7 +52,7 @@ Synchronizes the information of all remote mirror repositories.
obd mirror update
```
## obd mirror disable
Disables remote mirror repositories. To disable all the remote mirror repositories, run the `obd mirror disable remote` command.
@@ -62,7 +62,7 @@ obd mirror disable <mirror_repo_name>
Parameter `mirror repo name` specifies the mirror repository name. When you specify `remote`, all the remote mirror repositories are disabled.
## obd mirror enable
Enables remote mirror repositories.
......
@@ -2,6 +2,8 @@
OceanBase Deployer (OBD) provides a series of tool commands, including general commands that deliver a better experience for developers.
You can use the `-h/--help` option to view the help information of sub-commands. Similarly, when a sub-command reports an error, you can use `-v/--verbose` to view the detailed execution process of the command.
## obd devmode enable
You can run this command to enable the developer mode, which is a prerequisite for using other tool commands. After you enable the developer mode, OBD will downgrade the level of some exceptions and ignore some parameter exceptions. If you are not a kernel developer, use this command with caution.
......
# Use OCP to take over a cluster deployed by OBD
This topic describes how to use OceanBase Cloud Platform (OCP) to take over a cluster deployed by OceanBase Deployer (OBD). The cluster named test, which is started by using the distributed-example.yaml configuration file, is used as an example.
## Prerequisites
- The OBD version is V1.3.0 or later.
- The OCP version is V3.1.1-ce or later.
## Modify the OceanBase cluster
### Check whether takeover conditions are met
Before using OCP to take over an OceanBase cluster deployed by OBD, run the following command to check whether takeover conditions are met. If the conditions are not met, modify the cluster based on prompts as follows:
```shell
obd cluster check4ocp <deploy-name>
# Example
obd cluster check4ocp test
```
For information about the `obd cluster check4ocp` command, see [obd cluster check4ocp](3.obd-command/1.cluster-command-groups.md).
### Configure IDC information
The configuration file of default style does not support the configuration of Internet Data Center (IDC) information. You need to use the new feature of OBD V1.3.0 to change the style of the configuration file to the cluster style.
Run the following command to change the style:
```shell
obd cluster chst <deploy name> --style <STYLE> [-c/--components]
# Example
obd cluster chst test -c oceanbase-ce --style cluster
```
For information about the `obd cluster chst` command, see [obd cluster chst](3.obd-command/1.cluster-command-groups.md).
After changing the style of the configuration file, run the following command to enter the edit mode and add IDC information for the zone.
```shell
obd cluster edit-config <deploy name>
# Example
obd cluster edit-config test
```
For information about the `obd cluster edit-config` command, see [obd cluster edit-config](3.obd-command/1.cluster-command-groups.md).
Configuration for reference:
```yaml
## Only need to configure when remote login is required
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  style: cluster
  config:
    devname: eth0
    memory_limit: 64G
    system_memory: 30G
    datafile_disk_percentage: 20
    syslog_level: INFO
    enable_syslog_wf: false
    enable_syslog_recycle: true
    max_syslog_file_count: 4
    skip_proxy_sys_private_check: true
    enable_strict_kernel_release: false
    mysql_port: 2881
    rpc_port: 2882
    home_path: /root/observer
    root_password: xxxxxx
  zones:
    zone1:
      idc: idc1
      servers:
        - name: server1
          ip: xxx.xxx.xxx.xxx
    zone2:
      idc: idc2
      servers:
        - name: server2
          ip: xxx.xxx.xxx.xxx
    zone3:
      idc: idc3
      servers:
        - name: server3
          ip: xxx.xxx.xxx.xxx
```
Run the following command for the modification to take effect:
```shell
obd cluster reload <deploy name>
# Example
obd cluster reload test
```
For information about the `obd cluster reload` command, see [obd cluster reload](3.obd-command/1.cluster-command-groups.md).
### Configure the password
To use OCP to take over a cluster, you need to configure the password for the root user to connect to the cluster under the SYS tenant. Run the following command to enter the edit mode and use `root_password` to configure the password.
```shell
obd cluster edit-config <deploy name>
# Example
obd cluster edit-config test
```
Sample configuration file:
```yaml
## Only need to configure when remote login is required
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: server1
      # Please don't use hostname, only IP can be supported
      ip: xxx.xxx.xxx.xxx
    - name: server2
      ip: xxx.xxx.xxx.xxx
    - name: server3
      ip: xxx.xxx.xxx.xxx
  global:
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /root/observer
    # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    mysql_port: 2881
    # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    rpc_port: 2882
    # The maximum running memory for an observer. When ignored, autodeploy calculates this value based on the current server available resource.
    memory_limit: 64G
    # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G. Autodeploy calculates this value based on the current server available resource.
    system_memory: 30G
    # Password for root. The default value is empty.
    root_password: xxxxxx
    # Password for proxyro. proxyro_password must be the same as observer_sys_password. The default value is empty.
    # proxyro_password:
  server1:
    zone: zone1
  server2:
    zone: zone2
  server3:
    zone: zone3
```
The preceding shows a sample configuration file of the default style. For a configuration file of the cluster style, see the configuration example in **Configure IDC information**.
Run the following command for the modification to take effect:
```shell
obd cluster reload <deploy name>
# Example
obd cluster reload test
```
### Configure the user
OCP requires the process to be started by the admin user with the passwordless sudo permission. Therefore, you need to prepare an admin user as required. If this condition is already met, go to **Change the user**.
#### Create a user
On a server where OBServer is deployed, you can create the admin user as the root user.
```shell
# Create a user group
groupadd admin
# Create the admin user
useradd admin -g admin
```
Then, configure passwordless SSH logon for the admin user. For information about how to configure passwordless SSH logon, see [Use SSH to log on without a password](https://en.oceanbase.com/docs/community-observer-en-10000000000209361). A sketch is provided after the following note.
> **Note**
>
> 1. You need to configure passwordless SSH logon for the admin user.
>
> 2. A private key needs to be configured here, that is, `id_rsa`.
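The following is a minimal sketch of this configuration, assuming the admin user created above and a placeholder OBServer IP address:
```shell
# Run as the admin user on the server that initiates SSH connections
su - admin
# Generate an RSA key pair (id_rsa and id_rsa.pub) for the admin user
ssh-keygen -t rsa
# Copy the public key to each OBServer node so that the admin user
# can log on to it without a password
ssh-copy-id admin@xxx.xxx.xxx.xxx
```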
#### Grant the passwordless sudo permission to the admin user
Perform the following operations as the root user:
```shell
# Add the write permission on the sudoers file.
chmod u+w /etc/sudoers
# vi /etc/sudoers
echo 'admin ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
# Revoke the write permission on the sudoers file.
chmod u-w /etc/sudoers
```
#### Change the user
Run the following command to enter the edit mode and modify the user field.
```shell
obd cluster edit-config <deploy name>
# Example
obd cluster edit-config test
```
Sample configuration after modification:
```yaml
## Only need to configure when remote login is required
user:
  username: admin
  # password: your password if need
  key_file: your ssh-key file path if need # Set it to the id_rsa file path of the admin user.
  # port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30
```
Run the following command for the modification to take effect:
```shell
obd cluster restart <deploy name>
# Example
obd cluster restart test --wp
```
For information about the `obd cluster restart` command, see [obd cluster restart](3.obd-command/1.cluster-command-groups.md).
### Multiple OBServers on a single server
OCP requires that one server have only one OBServer installed. At present, the scenario with multiple OBServers running on a single server is not supported. To use OCP to take over a cluster with multiple OBServers running on a single server, you need to keep only one OBServer running and stop other OBServers.
> **Note**
>
> After all the preceding operations are completed, you can run the `obd cluster check4ocp <deploy name>` command again to check whether takeover conditions are met. If not, you can make modifications based on prompts.
## Use OCP to take over the cluster
### Change the password of the proxyro user
Before using OCP to take over the cluster, check whether the password of the proxyro user in the cluster is the default value. If not, change the password of the proxyro user in OCP to that of the proxyro user in the cluster.
You can call an OCP API to change the password.
```bash
curl --user user:pass -X POST "http://ocp-site-url:port/api/v2/obproxy/password" -H "Content-Type:application/json" -d '{"username":"proxyro","password":"*****"}'
```
Note:
- `user:pass` represents the username and password of OCP. The caller must have the admin permissions.
- `password` after `-d` represents the password of the proxyro user in the cluster to be taken over.
This operation produces an O&M task to change the password of the proxyro user in the existing OceanBase cluster in OCP, as well as the corresponding configuration of the OBProxy cluster.
You can proceed with subsequent steps only after the O&M task succeeds. If the task fails, retry it until it succeeds before you execute the subsequent steps.
### Use OCP to take over the OceanBase cluster
You can directly take over the OceanBase cluster on the GUI of OCP. For detailed steps, see [Take over a cluster](https://en.oceanbase.com/docs/community-ocp-en-10000000000779629).
After using OCP to take over the OceanBase cluster, you need to create an OBProxy cluster and associate it with the OceanBase cluster that has been taken over. For detailed steps, see [Create an OBProxy cluster](https://en.oceanbase.com/docs/community-ocp-en-10000000000779538).
If original OBProxies use a virtual IP address (VIP), add the OBProxies created in OCP to the VIP one by one, and then delete the original OBProxies from the VIP one by one.
### FAQ
1. Why do I need to change the password of the proxyro user in OCP?
Typically, an OBProxy managed in OCP is started by ConfigServer and can theoretically connect to multiple OceanBase clusters. However, the password of the proxyro user can be changed only globally for OBProxies. This password is a global configuration in OCP. It is used by OBProxies to query metadata, and the change of it does not affect business tenants.
2. When I switch to a new OBProxy, can I reuse the original server?
If multiple OBProxies have been deployed and are accessed through a VIP, you can delete them from the VIP one by one, deploy new OBProxies in OCP by using the original servers, and add the new OBProxies back to the VIP, thereby reusing the servers.
3. Can I choose not to switch to a new OBProxy?
Yes, you can. The original OBProxy can still properly connect to the OceanBase cluster that has been taken over. However, we recommend that you create a new OBProxy in OCP to facilitate subsequent O&M management.
# Add GUI-based monitoring for an existing cluster
OceanBase Deployer (OBD) supports the deployment of Prometheus and Grafana since V1.6.0. This topic describes how to add GUI-based monitoring for a deployed cluster.
This topic describes three scenarios. You can refer to the descriptions based on the actual conditions of your cluster.
> **Note**
>
> The configuration examples in this topic are for reference only. For more information about the detailed configurations, go to the `/usr/obd/example` directory and view the examples of different components.
## Scenario 1: OBAgent is not deployed in the cluster
To add GUI-based monitoring for a cluster in which OBAgent is not deployed, you must create a cluster and deploy OBAgent, Prometheus, and Grafana in the cluster.
OBAgent is separately configured for collecting monitoring information of OceanBase Database. It is declared in the configuration file that Prometheus depends on OBAgent and that Grafana depends on Prometheus.
Sample configuration file:
```yaml
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
obagent:
  servers:
    # Please don't use hostname, only IP can be supported
    - 192.168.1.2
    - 192.168.1.3
    - 192.168.1.4
  global:
    # The working directory for obagent. obagent is started under this directory. This is a required field.
    home_path: /root/obagent
    # The port that pulls and manages the metrics. The default port number is 8088.
    server_port: 8088
    # Debug port for pprof. The default port number is 8089.
    pprof_port: 8089
    # Log level. The default value is INFO.
    log_level: INFO
    # Log path. The default value is log/monagent.log.
    log_path: log/monagent.log
    # Encryption method. OBD supports aes and plain. The default value is plain.
    crypto_method: plain
    # Path to store the crypto key. The default value is conf/.config_secret.key.
    # crypto_path: conf/.config_secret.key
    # Size for a single log file. Log size is measured in Megabytes. The default value is 30M.
    log_size: 30
    # Expiration time for logs. The default value is 7 days.
    log_expire_day: 7
    # The maximum number for log files. The default value is 10.
    log_file_count: 10
    # Whether to use local time for log files. The default value is true.
    # log_use_localtime: true
    # Whether to enable log compression. The default value is true.
    # log_compress: true
    # Username for HTTP authentication. The default value is admin.
    http_basic_auth_user: ******
    # Password for HTTP authentication. The default value is root.
    http_basic_auth_password: ******
    # Username for debug service. The default value is admin.
    pprof_basic_auth_user: ******
    # Password for debug service. The default value is root.
    pprof_basic_auth_password: ******
    # Monitor username for OceanBase Database. The user must have read access to OceanBase Database as a system tenant. The default value is root.
    monitor_user: root
    # Monitor password for OceanBase Database. The default value is empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the root_password in oceanbase-ce.
    monitor_password:
    # The SQL port for observer. The default value is 2881. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the mysql_port in oceanbase-ce.
    sql_port: 2881
    # The RPC port for observer. The default value is 2882. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the rpc_port in oceanbase-ce.
    rpc_port: 2882
    # Cluster name for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the appname in oceanbase-ce.
    cluster_name: obcluster
    # Cluster ID for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the cluster_id in oceanbase-ce.
    cluster_id: 1
    # Monitor status for OceanBase Database. Active is to enable. Inactive is to disable. The default value is active. When you deploy a cluster automatically, OBD decides whether to enable this parameter based on depends.
    ob_monitor_status: active
    # Monitor status for your host. Active is to enable. Inactive is to disable. The default value is active.
    host_monitor_status: active
    # Whether to disable the basic authentication for HTTP service. True is to disable. False is to enable. The default value is false.
    disable_http_basic_auth: false
    # Whether to disable the basic authentication for the debug interface. True is to disable. False is to enable. The default value is false.
    disable_pprof_basic_auth: false
    # Synchronize the obagent-related information to the specified path of the remote host, as the targets specified by `file_sd_config` in the Prometheus configuration.
    # For prometheus that depends on obagent, it can be specified to $home_path/targets of prometheus.
    # For independently deployed prometheus, specify the files to be collected by setting `config` -> `scrape_configs` -> `file_sd_configs` -> `files`. For details, please refer to prometheus-only-example.yaml.
    # target_sync_configs:
    #   - host: 192.168.1.1
    #     target_dir: /root/prometheus/targets
    #     username: your username
    #     password: your password if need
    #     key_file: your ssh-key file path if need
    #     port: your ssh port, default 22
    #     timeout: ssh connection timeout (second), default 30
  192.168.1.2:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone1
  192.168.1.3:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone2
  192.168.1.4:
    # Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
    zone_name: zone3
prometheus:
  depends:
    - obagent
  servers:
    - 192.168.1.5
  global:
    home_path: /root/prometheus
grafana:
  depends:
    - prometheus
  servers:
    - 192.168.1.5
  global:
    home_path: /root/grafana
    login_password: oceanbase
```
After you modify the configuration file, run the following command to deploy and start a new cluster:
```bash
obd cluster deploy <new deploy name> -c new_config.yaml
obd cluster start <new deploy name>
```
After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.
## Scenario 2: OBAgent is deployed in the cluster
To add GUI-based monitoring for a cluster in which OBAgent is deployed, you must create a cluster and deploy Prometheus and Grafana in the cluster.
In this scenario, it cannot be declared that Prometheus depends on OBAgent. Therefore, you must manually associate them. Open the `conf/prometheus_config/prometheus.yaml` file in the installation directory of OBAgent in the existing cluster, and copy the corresponding configuration to the `config` parameter in the `global` section of the Prometheus settings. Sample configuration file:
```yaml
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
prometheus:
  servers:
    - 192.168.1.5
  global:
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /root/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s
      rule_files:
        - "rules/*rules.yaml"
      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          static_configs:
            - targets:
                - 192.168.1.2:8088
grafana:
  servers:
    - 192.168.1.5
  depends:
    - prometheus
  global:
    home_path: /root/grafana
    login_password: oceanbase # Grafana login password. The default value is 'oceanbase'.
```
> **Note**
>
> In the preceding sample configuration file, the username and password of `basic_auth` must be the same as those of `http_basic_auth_xxx` in the configuration file of OBAgent.
After you modify the configuration file, run the following command to deploy a new cluster:
```bash
obd cluster deploy <new deploy name> -c new_config.yaml
```
After the deployment is completed, copy the `conf/prometheus_config/rules` directory in the installation directory of OBAgent to the installation directory of Prometheus.
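For example, a sketch of this copy step, assuming the installation paths from the sample configurations in this topic (`/root/obagent` for OBAgent and `/root/prometheus` for Prometheus) and that it is run on the Prometheus server:
```bash
# Pull the alert rules shipped with OBAgent into the Prometheus home_path
scp -r admin@192.168.1.2:/root/obagent/conf/prometheus_config/rules /root/prometheus/
```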
Run the following command to start the new cluster:
```bash
obd cluster start <new deploy name>
```
After the cluster is started, go to the Grafana page as prompted. Then, you can view the monitoring information of the existing cluster.
> **Notice**
>
> 1. In the monitoring metrics of Prometheus in `scrape_configs`, `localhost:9090` must be modified based on the current listening address of Prometheus. If authentication is enabled for Prometheus, `basic_auth` must be specified. Here the listening address is the address of the server where Prometheus is deployed, namely, the address and port in the Prometheus configurations.
>
> 2. If the OBAgent nodes of the existing cluster change, you must run the `obd cluster edit-config <new deploy name>` command to synchronize the content of the `conf/prometheus_config/prometheus.yaml` file in the installation directory of OBAgent.
## Scenario 3: Monitor multiple clusters and dynamically synchronize OBAgent changes
To enable Prometheus to collect the monitoring information of multiple clusters or dynamically synchronize OBAgent changes, you can make a few changes on the basis of scenario 2.
Specifically, replace `static_configs` in the Prometheus configurations with `file_sd_configs` to obtain and synchronize the information about the OBAgent nodes. In the following example, all `.yaml` files in the `targets` directory of the installation directory (`home_path`) of Prometheus are collected.
> **Note**
>
> The `targets` directory will be created in the installation directory of Prometheus only if related parameters are configured for OBAgent in the configuration file of the existing cluster. For more information, see [Modify the configurations of a monitored cluster](#Modify%20the%20configurations%20of%20a%20monitored%20cluster).
```yaml
# user:
#   username: your username
#   password: your password if need
#   key_file: your ssh-key file path if need
#   port: your ssh port, default 22
#   timeout: ssh connection timeout (second), default 30
prometheus:
  servers:
    - 192.168.1.5
  global:
    # The working directory for prometheus. prometheus is started under this directory. This is a required field.
    home_path: /root/prometheus
    config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
      global:
        scrape_interval: 1s
        evaluation_interval: 10s
      rule_files:
        - "rules/*rules.yaml"
      scrape_configs:
        - job_name: prometheus
          metrics_path: /metrics
          scheme: http
          static_configs:
            - targets:
                - 'localhost:9090'
        - job_name: node
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/node/host
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: ob_basic
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/basic
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: ob_extra
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/ob/extra
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
        - job_name: agent
          basic_auth:
            username: ******
            password: ******
          metrics_path: /metrics/stat
          scheme: http
          file_sd_configs:
            - files:
                - targets/*.yaml
grafana:
  servers:
    - 192.168.1.5
  depends:
    - prometheus
  global:
    home_path: /root/grafana
    login_password: oceanbase # Grafana login password. The default value is 'oceanbase'.
```
> **Note**
>
> In the preceding sample configuration file, the username and password of `basic_auth` must be the same as those of `http_basic_auth_xxx` in the configuration file of OBAgent.
After you modify the configuration file, run the following command to deploy a new cluster:
```bash
obd cluster deploy <new deploy name> -c new_config.yaml
```
After the deployment is completed, copy the `conf/prometheus_config/rules` directory in the installation directory of OBAgent to the installation directory of Prometheus.
Run the following command to start the new cluster:
```bash
obd cluster start <new deploy name>
```
After you deploy the new cluster, go to the Grafana page as prompted. At this time, you cannot view the monitoring information of monitored clusters. You must modify the OBAgent configurations of the monitored clusters.
### Modify the configurations of a monitored cluster
To create the `targets` directory in the installation directory of Prometheus, you must run the `obd cluster edit-config <deploy name>` command to modify the configuration file. Specifically, you must add the `target_sync_configs` parameter to the configuration file to point to the `targets` directory in the installation directory of Prometheus. By default, the user settings of the current cluster are used. If the user settings on the server where Prometheus is installed are inconsistent with the user settings in the configuration file of the current cluster, perform configuration based on the example.
```yaml
obagent:
  servers:
    # Please don't use hostname, only IP can be supported
    - 192.168.1.2
    - 192.168.1.3
    - 192.168.1.4
  global:
    ...
    target_sync_configs:
      - host: 192.168.1.5
        target_dir: /root/prometheus/targets
        # username: your username
        # password: your password if need
        # key_file: your ssh-key file path if need
        # port: your ssh port, default 22
        # timeout: ssh connection timeout (second), default 30
  ...
```
After you modify the configuration file, restart the cluster as prompted. Then, go to the Grafana page and view the monitoring information of the existing cluster.
> **Notice**
>
> 1. In the monitoring metrics of Prometheus in `scrape_configs`, `localhost:9090` must be modified based on the current listening address of Prometheus. If authentication is enabled for Prometheus, `basic_auth` must be specified. Here the listening address is the address of the server where Prometheus is deployed, namely, the address and port in the Prometheus configurations.
>
> 2. The HTTP usernames and passwords collected by Prometheus must be the same for all OBAgents. If they are inconsistent, split the collection into separate scrape jobs.
@@ -12,7 +12,7 @@ user: # The SSH login configuration.
    timeout: ssh connection timeout (second), default 30
oceanbase-ce: # The name of the component that is configured as follows.
  # version: 3.1.3 # Specify the version of the component, which is usually not required.
  # package_hash: 589c4f8ed2662835148a95d5c1b46a07e36c2d346804791364a757aef4f7b60d # Specify the hash of the component, which is usually not required.
  # tag: dev # Specify the tag of the component, which is usually not required.
  servers: # The list of nodes.
    - name: z1 # The node name, which can be left blank. The default node name is the same as the IP address if this name is left blank. The node name is z1 in this example.
@@ -62,7 +62,7 @@ oceanbase-ce: # The name of the component that is configured as follows.
      zone: zone3
obproxy-ce: # The name of the component that is configured as follows.
  # version: 3.2.3 # Specify the version of the component, which is usually not required.
  # package_hash: 73cccf4d05508de0950ad1164aec03003c4ddbe1415530e031ac8b6469815fea # Specify the hash of the component, which is usually not required.
  # tag: dev # Specify the tag of the component, which is usually not required.
  servers:
    - 192.168.1.5
......
# What is OBD
OBD, short for OceanBase Deployer, is the installation and deployment tool for OceanBase clusters. It standardizes the complex configuration process through command-line or GUI-based deployment, which lowers the difficulty of cluster deployment. For detailed operations, see [Deploy OceanBase Database on a single server](4.user-guide/2.start-the-oceanbase-cluster-by-using-obd.md) and [Deploy an OceanBase cluster on the GUI](2.quick-start/3.use-ui-deploy-oceanbase.md).
The command line supports editing configuration files, which allows more flexible configuration adjustments. It is suitable for users who want an in-depth understanding of OceanBase and has a certain learning curve. The GUI is simple to configure: you can complete cluster deployment by following the guided configuration on the pages, which is suitable for users who want a quick experience in a standard environment.
Beyond cluster deployment, OBD also provides common O&M capabilities such as a package manager, stress-testing tools, and cluster management, to better support your experience with the OceanBase distributed database.
# Install and configure OBD
This topic describes how to install OBD and how to configure it after the installation succeeds.
## Install OBD
### Install by using the RPM package
#### Online installation
If your server has network access, you can run the following commands to install OBD online on CentOS or RedHat:
```shell
[admin@test001 ~]$ sudo yum install -y yum-utils
[admin@test001 ~]$ sudo yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo
[admin@test001 ~]$ sudo yum install -y ob-deploy
[admin@test001 ~]$ source /etc/profile.d/obd.sh
```
<main id="notice" type='explain'>
<h4>说明</h4>
<p>YUM 命令默认安装最新版本,您可通过声明版本号安装指定版本,如使用 <code>yum install -y ob-deploy-1.6.2</code> 命令安装 OBD V1.6.2,推荐安装最新版本。</p>
</main>
#### Offline installation
If your server cannot access the network, you can download the required version of OBD from [OceanBase Download Center](https://www.oceanbase.com/softwarecenter) and copy it to your central control server. We recommend that you use the installation package of the latest version.
On CentOS or RedHat, run the following command to install OBD:
```shell
sudo yum install ob-deploy-*.rpm
```
On Ubuntu or Debian, run the following command to install OBD:
```shell
sudo alien --scripts -i ob-deploy-*.rpm
```
### Install OBD by using the all-in-one package
OceanBase provides a unified installation package, the all-in-one package, since V4.0.0. You can use this package to install OBD, OceanBase Database, ODP, OBAgent, Grafana, Prometheus, and OCP Express (supported since V4.1.0) at a time.
#### Online installation
If your server has network access, run the following commands to install online:
```shell
[admin@test001 ~]$ bash -c "$(curl -s https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/download-center/opensource/oceanbase-all-in-one/installer.sh)"
[admin@test001 ~]$ source ~/.oceanbase-all-in-one/bin/env.sh
```
#### Offline installation
If your server cannot access the network, perform the following steps to install offline.
1. Download the latest all-in-one package from [OceanBase Download Center](https://www.oceanbase.com/softwarecenter) and copy it to the central control server. We recommend that you use the installation package of the latest version.
2. Run the following commands in the directory where the package is located to decompress and install it.
```shell
[admin@test001 ~]$ tar -xzf oceanbase-all-in-one-*.tar.gz
[admin@test001 ~]$ cd oceanbase-all-in-one/bin/
[admin@test001 bin]$ ./install.sh
[admin@test001 bin]$ source ~/.oceanbase-all-in-one/bin/env.sh
```
### Install by using the source code
Before you install OBD by using the source code, make sure that the following dependencies are installed:
* gcc
* wget
* python-devel
* openssl-devel
* xz-devel
* mysql-devel
Compiling and installing OBD from the source code requires both Python 2.7 and Python 3.8 environments. The installation steps are as follows:
1. Run the following command in the Python 2.7 environment.
```shell
sh rpm/build.sh executer
```
<main id="notice" type='explain'>
<h4>说明</h4>
<p>上述命令是为了编译 OceanBase 数据库升级所需的解释器,如果不使用升级功能可以跳过该命令。</p>
</main>
2. Run the following commands in the Python 3.8 environment.
```shell
sh rpm/build.sh build_obd
source /etc/profile.d/obd.sh
```
<main id="notice" type='explain'>
<h4>说明</h4>
<p>OBD 自 V2.0.0 开始不支持在 Python2 环境下使用源码安装。</p>
</main>
## Configure OBD
### Online configuration
OBD is shipped with repository information. When your server has network access, no configuration is required. You can run the `obd mirror list` command to view the software in the OBD repositories.
```shell
# View the OBD repository information
[admin@test001 ~]$ obd mirror list
+-----------------------------------------------------------------------------+
| Mirror Repository List |
+----------------------------+--------+---------+----------+------------------+
| SectionName | Type | Enabled | Avaiable | Update Time |
+----------------------------+--------+---------+----------+------------------+
| oceanbase.community.stable | remote | True | True | 2023-03-20 11:21 |
| oceanbase.development-kit | remote | True | True | 2023-03-20 11:21 |
| local | local | - | True | 2023-03-20 11:23 |
+----------------------------+--------+---------+----------+------------------+
Use `obd mirror list <section name>` for more details
Trace ID: 8a4da3a0-c6ce-11ed-91cc-00163e030166
If you want to view detailed obd logs, please run: obd display-trace 8a4da3a0-c6ce-11ed-91cc-00163e030166
# View the software information in the corresponding repository
[admin@test001 ~]$ obd mirror list oceanbase.community.stable
Update OceanBase-community-stable-el7 ok
Update OceanBase-development-kit-el7 ok
+--------------------------------------------------------------------------------------------------------------------------------------------------+
| oceanbase.community.stable Package List |
+-----------------------------------+---------+------------------------+--------+------------------------------------------------------------------+
| name | version | release | arch | md5 |
+-----------------------------------+---------+------------------------+--------+------------------------------------------------------------------+
| grafana | 7.5.17 | 1 | x86_64 | f0c86571a2987ee6338a42b79bc1a38aebe2b07500d0120ee003aa7dd30973a5 |
| libobclient | 2.0.0 | 1.el7 | x86_64 | 7bbb2aeb9c628ee35c79d6dc2c1668ebbf97a3323f902e8fd33ff1a5ea95220f |
| libobclient | 2.0.0 | 2.el7 | x86_64 | 6c1587b80df64b68343479aecddb0ca912d5ccd3d085cb41c7a37a1e38effc34 |
| libobclient | 2.0.1 | 3.el7 | x86_64 | 4f92926496dec89936422f41f2f2206eb61c5e62e7b0dde1006c6e02eaebec6e |
| libobclient | 2.0.2 | 2.el7 | x86_64 | eed33520e6911140dad65197cff53652310609ab79d7960ec4d2d6d4b2318ba7 |
# Subsequent output omitted
```
### Offline configuration
If your server cannot access the network, you need to download the installation packages of the required software in advance and add them to the local repository of OBD.
Download links:
* [Redhat/CentOS 7.x](https://mirrors.aliyun.com/oceanbase/community/stable/el/7/x86_64)
* [Redhat/CentOS 8.x](https://mirrors.aliyun.com/oceanbase/community/stable/el/8/x86_64)
<main id="notice" type='explain'>
<h4>说明</h4>
<p>请根据实际需求下载软件包,建议使用最新版本的软件。</p>
</main>
#### Add installation packages to the local repository
Perform the following steps as the OS user that deploys and runs OBD. The admin user is used in the following example.
1. Run the following command to disable the remote repositories:
```shell
obd mirror disable remote
```
After they are disabled, you can run the `obd mirror list` command to confirm: if the `Enabled` value of the repositories whose Type is remote has changed to `False`, the remote image sources are disabled.
2. Run the following command in the directory where the packages are located to upload them to the local repository:
```shell
obd mirror clone *.rpm
```
3. View the list of installation packages in the local repository:
```shell
obd mirror list local
```
If the packages required for deployment appear in the output list, the upload succeeded.
# Quickly start OceanBase Database
After you install OBD, you can run the `obd demo` command to quickly start a local single-node OceanBase database. Before that, confirm the following information:
* Ports `2881` and `2882` are not occupied.
* At least `6 GB` of memory is available on the server.
* The server has at least `2` CPU cores.
* At least `54 GB` of disk space is available on the server.
* Your server can connect to the network, or the installation packages required for deployment are available on the server.
<main id="notice" type='explain'>
<h4>说明</h4>
<p>如果以上条件不满足,您可参考文档 <a href="../4.user-guide/2.start-the-oceanbase-cluster-by-using-obd.md">单机部署 OceanBase 数据库</a></p>
</main>
```shell
# Deploy and start OceanBase Database
obd demo
# Use the OBClient client to connect to OceanBase Database
obclient -h127.0.0.1 -uroot -P2881
```
# Deploy an OceanBase cluster on the GUI
This topic uses a CentOS Linux 7.9 image on the x86 architecture as the environment to describe how to deploy OceanBase Database on the OBD GUI.
## Background
Since V2.0.0, OBD supports GUI-based deployment of OceanBase Database and related components, such as OBAgent, ODP, and OCP Express. The GUI is simple to configure, and you can complete the deployment of a single cluster by following the guided configuration on the pages.
## Prerequisites
* To deploy OceanBase Database only, at least 2 vCPUs, 8 GB of memory, and 45 GB of disk space must be available.
* To deploy OceanBase Database and all components, at least 4 vCPUs, 10 GB of memory, and 50 GB of disk space must be available. More than 16 GB of memory is recommended.
* To deploy the OCP Express component, you must install and configure the Java environment first. Currently, only JDK 1.8 is supported. For details, see **How do I configure the Java environment before deploying OCP Express** in [FAQ](../5.faq/1.faq.md).
<main id="notice" type='notice'>
<h4>注意</h4>
<p>OBD 是通过 SSH 远程执行安装部署,所以您需通过 SSH 验证 Java 环境是否可用,详细操作请参考 <a href="#java%20%环境验证">Java 环境验证</a></p>
</main>
## Prepare the software
When you deploy OceanBase Database on the OBD GUI, you can choose between two deployment methods: online and offline.
* Online deployment: The server where OBD resides must be able to access the external network. You do not need to prepare the installation packages in advance; OBD pulls them from the remote image repository during deployment.
* Offline deployment: No external network access is required during deployment. You need to upload the required installation packages to the local image repository of OBD in advance. For offline deployment, we recommend that you download the all-in-one package of the required version.
The way you prepare the software depends on the deployment method. Choose the appropriate method based on your actual situation.
### Online deployment
For online deployment, you can run the following commands to install OBD on the central control server.
```shell
[admin@test001 ~]$ sudo yum install -y yum-utils
[admin@test001 ~]$ sudo yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo
[admin@test001 ~]$ sudo yum install -y ob-deploy
[admin@test001 ~]$ source /etc/profile.d/obd.sh
```
### Offline deployment
For offline deployment, you can run the following commands to download and install the all-in-one package.
You can download the latest all-in-one package from [OceanBase Download Center](https://www.oceanbase.com/softwarecenter) and copy it to the central control server. Run the following commands to decompress and install it:
```shell
[admin@test001 ~]$ tar -xzf oceanbase-all-in-one-*.tar.gz
[admin@test001 ~]$ cd oceanbase-all-in-one/bin/
[admin@test001 bin]$ ./install.sh
[admin@test001 bin]$ source ~/.oceanbase-all-in-one/bin/env.sh
```
## Procedure
1. Start the GUI.
Run the `obd web` command to start the GUI, and click the output URL to access it. On the GUI, click **Get Started** to go to the OceanBase Database configuration page.
```shell
[admin@test001 ~]$ obd web
start OBD WEB in 0.0.0.0:8680
please open http://172.xx.xxx.233:8680
```
<main id="notice" type='explain'>
<h4>说明</h4>
<ul>
<li>
<p>白屏界面默认使用 8680 端口,您可使用 <code>obd web -p &lt;PORT&gt;</code> 命令指定端口。</p>
</li>
<li>
<p>在阿里云或其他云环境下,可能出现程序无法获取公网 IP,从而输出内网地址的情况,此 IP 非公网地址,您需要使用正确的地址访问白屏界面。</p>
</li>
</ul>
</main>
2. Configure the deployment.
On the **Deployment Configuration** page, you can configure the cluster name, deployment type, and components to deploy.
<img width="819.6" height="490.8" src="https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.1.0/4.deploy/5.deploy-oceanbase-database-community-edition/4.deploy-by-ui/3.use-ui-deploy-oceanbase-01.png" alt="Deployment configuration">
Where:
* **Cluster Name** defaults to `myoceanbase`. You can customize it, but it cannot duplicate an existing deployment name.
* **Deployment Type** can be **Full Deployment** or **Lean Deployment**. Full deployment installs all components, while lean deployment installs only OceanBase Database.
* You can click **Learn more** after a component to open the documentation of that component.
* When deploying components, you can select the OceanBase Database version from the drop-down list under **Version**. The versions of other components are fixed to the latest.
After the configuration is complete, click **Next** to go to the **Node Configuration** page.
3. Configure the nodes.
On the **Node Configuration** page, you can configure the database and component nodes, the deployment user, and the software installation path.
<img width="868.8" height="523.2" src="https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.1.0/4.deploy/5.deploy-oceanbase-database-community-edition/4.deploy-by-ui/3.use-ui-deploy-oceanbase-02.png" alt="Node configuration">
Where:
* The database nodes default to three zones. You can click **+ Add Zone** or the delete icon at the end to add or delete a zone.
* For **OCPExpress Node**, you can either select an OBServer node IP from the drop-down list or enter a new node IP. Only one node can be selected or entered.
* For **OBProxy Node**, you can either select OBServer node IPs from the drop-down list or enter new node IPs. Multiple nodes are supported.
* **Username** defaults to the user that started the current process and can be customized. You need to enter the password of that user; if passwordless access has been configured between the nodes, the password can be omitted.
After the configuration is complete, click **Next** to go to the **Cluster Configuration** page.
4. Configure the cluster.
On the **Cluster Configuration** page, you can configure the cluster, including the password of the administrator user (root@sys) of the sys tenant, the data and log directories, and the ports and parameters of the database and each component.
<img width="788.4" height="549" src="https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.1.0/4.deploy/5.deploy-oceanbase-database-community-edition/4.deploy-by-ui/3.use-ui-deploy-oceanbase-03.png" alt="Cluster configuration">
Where:
* **Configuration Mode** can be **Maximum Occupancy** or **Minimum Availability**. The maximum occupancy mode maximizes the use of environment resources to ensure the performance and stability of the cluster, and is recommended. The minimum availability mode configures the resource parameters required for the cluster to run properly.
* The root@sys password defaults to a random string automatically generated by OBD and can be customized. It supports digits, letters, and special characters, with a length of 8 to 32 characters. The supported special characters are ~!@#%^&*_-+=`|(){}[]:;',.?/ only.
* The data and log directories of the cluster default to subdirectories of the software path configured on the **Node Configuration** page. They must be absolute paths starting with `/`. You can customize them, but the specified directories must be empty.
* The ports of the database and the components are all default values and can be customized (only the range from 1024 to 65535 is supported). Make sure the specified ports are not occupied.
* Click **More Configurations** to view the cluster or component parameters. You can use the automatically assigned configurations or customize each parameter.
After all configurations are complete, click **Next** to go to the **Pre-check** page.
5. Run the pre-check.
On the **Pre-check** page, verify all the configuration information. If you find a problem, click **Previous** to modify it. After confirming that everything is correct, click **Pre-check** to start the check.
If the pre-check reports an error, you can click **Auto Fix** as suggested on the page, or click **Learn more** to go to the error code documentation and fix the error yourself. After all errors are fixed, click **Re-check** to run the pre-check again.
<img width="781.2" height="405.6" src="https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.1.0/4.deploy/5.deploy-oceanbase-database-community-edition/4.deploy-by-ui/3.use-ui-deploy-oceanbase-04.png" alt="Pre-check">
6. Deploy the cluster.
After the pre-check passes, click **Deploy** to start the deployment of OceanBase Database.
<img width="898.8" height="523.8" src="https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.1.0/4.deploy/5.deploy-oceanbase-database-community-edition/4.deploy-by-ui/3.use-ui-deploy-oceanbase-07.png" alt="Deploy">
Where:
* After the deployment succeeds, you can copy the displayed connection string and run it on the command line to connect to OceanBase Database.
* You can click the output connection string of the OCPExpress component to go to the OCP Express login page. Log in with the account and password shown on the deployment page and change the password; then you can manage the cluster on the GUI.
<main id="notice" type='explain'>
<h4>Note</h4>
<p>On Alibaba Cloud or other cloud environments, the program may fail to obtain the public IP address and output an internal address instead. This IP address is not the public address, and you need to use the correct address to access the GUI.</p>
</main>
* Hover over the **View Logs** button and click **Copy Info**; paste and run the copied information on the command line to view the log locations of the corresponding components.
7. Click **Finish** to end the deployment.
<main id="notice" type='notice'>
<h4>注意</h4>
<p> 如需部署多个集群,您需在白屏界面单击 <b>完成</b> 结束当前 OBD 进程后才可再次执行 <code>obd web</code> 命令进行下一集群的部署。 </p>
</main>
## Related operations
### Verify the Java environment
Because OBD deploys OCP Express by executing scripts remotely, the Java environment must be verified over SSH; running `java -version` directly on the server may not be valid.
<main id="notice" type='explain'>
<h4>说明</h4>
<p>使用终端交互会自动初始化环境变量,SSH 方式访问不会初始化环境变量,会导致使用 SSH 执行命令时出现 Java 命令不存在或使用错误 Java 版本的情况。</p>
</main>
您可在任意一台网络与 OCP Express 所在节点连通的机器上执行如下命令进行验证。
```shell
# ocp_express_node_username: the username of the node where ocp_express resides
# ocp_express_node_ip: the IP address of the node where ocp_express resides
[admin@test001 ~]$ ssh <ocp_express_node_username>@<ocp_express_node_ip> 'java -version'
# Output
openjdk version "1.8.0_xxx"
OpenJDK Runtime Environment (build 1.8.0_362-b08)
OpenJDK 64-Bit Server VM (build 25.362-b08, mixed mode)
```
If you have installed a qualified Java version but the verification fails, you can solve the problem in either of the following ways:
* Method 1: Configure the **java_bin** path in **More Configurations** on the component page.
As shown in the following figure, set the `java_bin` configuration item to the real path of Java, such as `/jdk8/bin/java`.
<img width="807.6" height="514.8" src="https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/observer-enterprise/V4.1.0/4.deploy/5.deploy-oceanbase-database-community-edition/4.deploy-by-ui/3.use-ui-deploy-oceanbase-06.png" alt="More configurations">
* Method 2: Create a soft link from the Java executable file to `/usr/bin/java`.
```shell
[admin@test001 bin]$ pwd
/jdk8/bin
[admin@test001 bin]$ ln -s /jdk8/bin/java /usr/bin/java
```
### Manage a deployed cluster
You can run the following commands to manage a cluster deployed by OBD. For more operations, see [Cluster commands](../3.obd-command/1.cluster-command-groups.md).
```shell
# View the cluster list
[admin@test001 ~]$ obd cluster list
# View the cluster status. The deployment name myoceanbase is used as an example.
[admin@test001 ~]$ obd cluster display myoceanbase
# Stop a running cluster. The deployment name myoceanbase is used as an example.
[admin@test001 ~]$ obd cluster stop myoceanbase
# Destroy a deployed cluster. The deployment name myoceanbase is used as an example.
[admin@test001 ~]$ obd cluster destroy myoceanbase
```
### Deploy a component of a specific version
The all-in-one package iterates with OceanBase Database versions. If a newer version of another component in the package exists, you can download the latest version of that component from [OceanBase Download Center](https://www.oceanbase.com/softwarecenter) and upload it to the local image repository by performing the following steps. OBD automatically uses the latest version in the local image repository during deployment.
1. Go to the directory where the component package is located and add the package to the local image repository:
```shell
[admin@test001 rpm]$ obd mirror clone *.rpm
```
2. View the list of packages in the local image repository:
```shell
[admin@test001 rpm]$ obd mirror list local
```
# Quick deployment command
## obd demo
You can use this command to deploy and start the specified components directly on the local server without passing in a configuration file. The fixed deployment name is `demo`. After deployment, you can see this cluster when you run the `obd cluster list` command, and you can manage it with other cluster commands, such as `obd cluster display demo`.
```bash
obd demo [-c/--components]
```
The following table describes the options:
| Option | Required | Data type | Default value | Description |
| --- | --- | --- | --- | --- |
| -c/--components | No | string | oceanbase-ce,obproxy-ce,obagent,prometheus,grafana | The list of components to deploy, separated with commas (`,`). |
By default, this command performs a minimum-specification deployment in the home directory of the current user, and the deployed components default to the latest versions. Currently supported components: oceanbase-ce, obproxy-ce, obagent, grafana, and Prometheus.
You can control the deployed versions and configurations through options. For example:
```bash
# Deploy a component of a specified version
obd demo -c oceanbase-ce,obproxy-ce --oceanbase-ce.version=3.1.3
# Deploy a component with a specified hash
obd demo -c oceanbase-ce,obproxy-ce --oceanbase-ce.package_hash=f38723204d49057d3e062ffad778edc1552a7c114622bf2a86fea769fbd202ea
# Specify the installation paths for all components
## Deploy oceanbase-ce and obproxy-ce under /data/demo and create corresponding working directories for the components
obd demo -c oceanbase-ce,obproxy-ce --home_path=/data/demo
# Specify the installation paths for all components
obd demo --home_path=/path
# Specify the installation path for a specific component
## Deploy oceanbase-ce in the home directory and create the corresponding working directory, and deploy obproxy-ce under /data/demo/obproxy-ce
obd demo -c oceanbase-ce,obproxy-ce --obproxy-ce.home_path=/data/demo/
# Specify custom component configurations
## Specify the mysql_port of the oceanbase-ce component
obd demo --oceanbase-ce.mysql_port=3881
```
For more information about the relevant configuration items in the configuration file, see [Configuration file description](../4.user-guide/1.configuration-file-description.md).
<main id="notice" type='notice'>
<h4>注意</h4>
<p>该命令只支持通过选项传入一级配置(即 global 下第一级配置)。</p>
</main>
This diff is collapsed.
# Mirror and repository command group
OBD has multiple levels of commands. You can use the `-h/--help` option at each level to view the help information of sub-commands. Similarly, when a sub-command reports an error, you can use `-v/--verbose` to view the detailed execution process of the command.
## obd mirror clone
You can use this command to copy an RPM package to the local image repository. Then you can start the mirror by using the relevant OBD cluster commands.
```shell
obd mirror clone <path> [-f]
```
The `path` parameter is the path of the RPM package.
The `-f` option is `--force`. `-f` is optional and disabled by default. When it is enabled, if the mirror already exists, the existing mirror is forcibly overwritten.
## obd mirror create
You can use this command to create a mirror based on a local directory. This command is mainly used when OBD starts open-source OceanBase software that you compiled yourself: you can add the compilation output to the local repository with this command, and then start this mirror by using the relevant `obd cluster` commands.
```shell
obd mirror create -n <component name> -p <your compile dir> -V <component version> [-t <tag>] [-f]
```
For example, if you compile OceanBase Database according to [Build OceanBase Database from the source code](https://www.oceanbase.com/docs/community-observer-cn-0000000000160092), you can run the `make DESTDIR=./ install && obd mirror create -n oceanbase-ce -V <component version> -p ./usr/local` command after the compilation succeeds to add the compilation output to the local repository of OBD.
The following table describes the options:
| Option | Required | Data type | Description |
| --- | --- | --- | --- |
| -n/--name | Yes | string | The component name. If you compiled OceanBase Database, specify `oceanbase-ce`. If you compiled ODP, specify `obproxy-ce`. |
| -p/--path | Yes | string | The directory in which the compilation command was executed. OBD automatically retrieves the files required by the component from this directory. The default value is the current directory. |
| -V/--version | Yes | string | The version number. |
| -t/--tag | No | string | The mirror tags. You can define multiple tags for the created mirror, separated with commas (`,`). |
| -f/--force | No | bool | Forcibly overwrites the mirror or tag if it already exists. Disabled by default. |
## obd mirror list
You can use this command to show the mirror repository or mirror list.
```shell
obd mirror list [mirror repo name]
```
The `mirror repo name` parameter is the mirror repository name and is optional. When it is not specified, the list of mirror repositories is displayed. When it is specified, the mirror list of the specified repository is displayed.
## obd mirror update
You can use this command to synchronize the information of all remote mirror repositories.
```shell
obd mirror update
```
## obd mirror disable
You can use this command to disable remote mirror repositories. To disable all remote mirror repositories, run the `obd mirror disable remote` command.
```shell
obd mirror disable <mirror repo name>
```
The `mirror repo name` parameter is the name of the mirror repository to disable. If it is set to `remote`, all remote mirror repositories are disabled.
## obd mirror enable
You can use this command to enable remote mirror repositories.
```shell
obd mirror enable <mirror repo name>
```
The `mirror repo name` parameter is the name of the mirror repository to enable. If it is set to `remote`, all remote mirror repositories are enabled.
## obd mirror add-repo
You can use this command to add a remote mirror repository file.
```bash
obd mirror add-repo <url>
```
The `url` parameter is the download URL of the mirror repository file to add.
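For example, the following sketch adds the repository definition file that is also used for the YUM-based installation in this document:
```bash
obd mirror add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo
```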
This diff is collapsed.
# Tool command group
OBD provides a series of tool commands that encapsulate some common commands to improve the developer experience.
You can use the `-h/--help` option with each command to view the help information of sub-commands. Similarly, when a command reports an error, you can use `-v/--verbose` to view the detailed execution process of the command.
## obd devmode enable
You can use this command to enable the developer mode, which is a prerequisite for using the tool command group. After the developer mode is enabled, some exception reports are downgraded and OBD ignores some parameter exceptions. If you are not a kernel developer, use this command with caution.
```shell
obd devmode enable
```
## obd devmode disable
You can use this command to disable the developer mode.
```shell
obd devmode disable
```
## obd env show
You can use this command to show the environment variables of OBD.
```shell
obd env show
```
## obd env set
You can use this command to set the environment variables of OBD. These environment variables affect the behavior of OBD to some extent. We do not recommend this command unless you have special needs.
```shell
obd env set [key] [value]
```
The following variables can be set (see the example after this list):
* OBD_DISABLE_RSYNC: The value can be 0 or 1. When conditions are met, OBD uses the rsync command for remote transfer. When this variable is set to 1, the rsync command is prohibited.
* OBD_DEV_MODE: Specifies whether the developer mode is enabled. Valid values: 0 and 1.
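For example:
```shell
# Enable the developer mode through an environment variable
obd env set OBD_DEV_MODE 1
# Prohibit OBD from using rsync for remote transfer
obd env set OBD_DISABLE_RSYNC 1
```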
## obd env unset
You can use this command to delete the specified environment variable.
```shell
obd env unset [key] [value]
```
## obd env clear
You can use this command to clear the environment variables of OBD. Use it with caution.
```shell
obd env clear
```
## obd tool command
You can use this command to run some common commands.
```shell
obd tool command <deploy name> <command> [options]
```
The commands include:
* pid: View the PIDs of the services (non-interactive).
* ssh: Log on to the target server and enter the log directory (interactive).
* less: View the logs of the target service (interactive).
* gdb: Attach gdb to the target service (interactive).
The following table describes the options. A usage example follows the table.
| Option | Required | Data type | Default value | Description |
| --- | --- | --- | --- | --- |
| -c/--components | No | string | For interactive commands, the first component in the configuration file by default; for non-interactive commands, all components by default | The names of the components on which to run the command, separated with `,`. |
| -s/--servers | No | string | For interactive commands, the first node of the current component in the configuration file by default; for non-interactive commands, all available nodes by default | The node names under the specified components, separated with `,`. |
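A usage sketch, assuming a deployment named test:
```shell
# Print the PIDs of all services in the deployment (non-interactive)
obd tool command test pid
# Log on to the first node of the oceanbase-ce component and enter its log directory (interactive)
obd tool command test ssh -c oceanbase-ce
```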
## obd tool db_connect
You can use this command to establish a database connection.
```shell
obd tool db_connect <deploy name> [options]
```
The `deploy name` parameter is the name of the deployed cluster, which can be considered an alias of the configuration file.
The following table describes the options. A usage example follows the table.
| Option | Required | Data type | Default value | Description |
| --- | --- | --- | --- | --- |
| -c/--component | No | string | The first component in the configuration file by default | The name of the component to connect to. Valid values: `obproxy`, `obproxy-ce`, `oceanbase`, and `oceanbase-ce`. |
| -s/--server | No | string | The first node of the current component in the configuration file by default | The node name under the specified component. |
| -u/--user | No | string | root | The username for the database connection. |
| -p/--password | No | string | Empty by default | The password for the database connection. |
| -t/--tenant | No | string | sys | The tenant for the database connection. |
| -D/--database | No | string | Empty by default | The name of the database to connect to. |
| --obclient-bin | No | string | obclient | The path of the OBClient binary file. |
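A usage sketch, assuming a deployment named test (the password is a placeholder):
```shell
# Connect to the sys tenant of the oceanbase-ce component as the root user
obd tool db_connect test -c oceanbase-ce -u root -p ******
```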
## obd display-trace
You can use this command to show the log details corresponding to a task ID, which helps you quickly locate logs.
```bash
obd display-trace <trace_id>
# example
obd display-trace 080b8ffc-9f7c-11ed-bec1-00163e030e58
```
The `trace_id` parameter is the task ID. After you run another OBD command, the corresponding task ID is printed, together with a prompt showing how to view the logs. For example:
```bash
[admin@centos7 ~]$ obd cluster list
Local deploy is empty
Trace ID: a3cf9020-be42-11ed-b5b0-00163e030e58
If you want to view detailed obd logs, please run: obd display-trace a3cf9020-be42-11ed-b5b0-00163e030e58
```
# Configuration file description
Configuration files in OBD follow a fixed format. The following example explains the meaning of the different modules in a configuration file.
```yaml
# Only need to configure when remote login is required
user: # The SSH login configuration.
  username: your username
  password: your password if need
  key_file: your ssh-key file path if need
  port: your ssh port, default 22
  timeout: ssh connection timeout (second), default 30
oceanbase-ce: # The name of the component that is configured as follows.
  # version: 3.1.3 # Specify the version of the component, which is usually not required.
  # package_hash: 589c4f8ed2662835148a95d5c1b46a07e36c2d346804791364a757aef4f7b60d # Specify the hash of the component, which is usually not required.
  # tag: dev # Specify the tag of the component, which is usually not required.
  servers: # The list of nodes.
    - name: z1 # The node name, which can be left blank. The default node name is the same as the IP address if this name is left blank. The node name is z1 in this example.
      # Please don't use hostname, only IP can be supported
      ip: 192.168.1.2
    - name: z2
      ip: 192.168.1.3
    - name: z3
      ip: 192.168.1.4
  global: # The global configuration. The same configuration within a component can be written here.
    # If a node configuration contains a configuration item that also exists in the global configuration, the node configuration takes precedence.
    # Please set devname as the network adaptor's name whose ip is in the setting of severs.
    # if set severs as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
    devname: eth0
    # if current hardware's memory capacity is smaller than 50G, please use the setting of "mini-single-example.yaml" and do a small adjustment.
    memory_limit: 64G
    datafile_disk_percentage: 20
    syslog_level: INFO
    enable_syslog_wf: false
    enable_syslog_recycle: true
    max_syslog_file_count: 4
    cluster_id: 1
    # observer cluster name, consistent with obproxy's cluster_name
    appname: obcluster
    # root_password: # root user password, can be empty
    # proxyro_password: # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
  # In this example, support multiple ob process in single node, so different process use different ports.
  # If deploy ob cluster in multiple nodes, the port and path setting can be same.
  z1: # The node configuration. This is the configuration of node z1, that is, the server 192.168.1.2. Node configurations have the highest priority.
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /root/observer
    zone: zone1
  z2: # The node configuration. This is the configuration of node z2, that is, the server 192.168.1.3.
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /root/observer
    zone: zone2
  z3: # The node configuration. This is the configuration of node z3, that is, the server 192.168.1.4.
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /root/observer
    zone: zone3
obproxy-ce: # The name of the component that is configured as follows.
  # version: 3.2.3 # Specify the version of the component, which is usually not required.
  # package_hash: 73cccf4d05508de0950ad1164aec03003c4ddbe1415530e031ac8b6469815fea # Specify the hash of the component, which is usually not required.
  # tag: dev # Specify the tag of the component, which is usually not required.
  servers:
    - 192.168.1.5
  global:
    listen_port: 2883 # External port. The default value is 2883.
    prometheus_listen_port: 2884 # The Prometheus port. The default value is 2884.
    home_path: /root/obproxy
    # oceanbase root server list
    # format: ip:mysql_port;ip:mysql_port
    rs_list: 192.168.1.2:2881;192.168.1.3:2881;192.168.1.4:2881
    enable_cluster_checkout: false
    # observer cluster name, consistent with oceanbase-ce's appname
    cluster_name: obcluster
    # obproxy_sys_password: # obproxy sys user password, can be empty
    # observer_sys_password: # proxyro user password, consistent with oceanbase-ce's proxyro_password, can be empty
```
# Deploy OceanBase Database on a single server

This topic takes a standalone deployment as an example to describe how to deploy OceanBase Database with OBD. A standalone deployment contains only one zone, and the zone contains only one OBServer node.

## Terms

* Control machine: the machine that stores the OceanBase Database installation packages and the cluster configuration information.

* Target machine: the machine where the OceanBase cluster is installed.

## Prerequisites

* OBD is installed on your machine. We recommend that you install the latest version. For details, see [Install and configure OBD](../2.quick-start/1.install-obd.md).

* The OBClient client is installed on your machine. For details, see the [OBClient documentation](https://github.com/oceanbase/obclient/blob/master/README.md).

## Procedure

### Step 1: (Optional) Download and install the all-in-one package

OceanBase provides a unified all-in-one package since V4.0.0. You can use this package to install OBD, OceanBase Database, ODP, OBAgent, Grafana, Prometheus, and OCP Express (supported since V4.1.0) at a time.

You can also download some of the components, or specific versions of the components, from [OceanBase Download Center](https://www.oceanbase.com/softwarecenter) as needed.

<main id="notice" type='explain'>
  <h4>Note</h4>
  <p>For offline deployment of OceanBase Database, we recommend that you download the all-in-one package for the deployment.</p>
</main>

#### Online installation

If your machine has Internet access, run the following commands to install online.

```shell
[admin@test001 ~]$ bash -c "$(curl -s https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/download-center/opensource/oceanbase-all-in-one/installer.sh)"
[admin@test001 ~]$ source ~/.oceanbase-all-in-one/bin/env.sh
```

#### Offline installation

If your machine has no Internet access, perform the following steps to install offline.

1. Download the latest all-in-one package from [OceanBase Download Center](https://www.oceanbase.com/softwarecenter) and copy it to any directory on the control machine.

2. Run the following commands in the directory where the package is located to decompress and install it.

   ```shell
   [admin@test001 ~]$ tar -xzf oceanbase-all-in-one-*.tar.gz
   [admin@test001 ~]$ cd oceanbase-all-in-one/bin/
   [admin@test001 bin]$ ./install.sh
   [admin@test001 bin]$ source ~/.oceanbase-all-in-one/bin/env.sh
   ```
### Step 2: Configure OBD

Before you deploy an OceanBase cluster, we recommend that you switch to a non-root user for data security.

If you deploy the OceanBase cluster offline, you can download and install the all-in-one package on the control machine as described in **Step 1**.

If you have specific requirements on the component versions, you can download the installation packages of the corresponding versions from [OceanBase Download Center](https://www.oceanbase.com/softwarecenter), copy them to any directory on the control machine, and configure OBD in that directory by following the steps below.

If you deploy the OceanBase cluster online, skip steps 1 to 3.

1. Disable remote repositories

   ```shell
   [admin@test001 rpm]$ obd mirror disable remote
   ```

   <main id="notice" type='explain'>
     <h4>Note</h4>
     <p>Remote repositories are disabled by default after the all-in-one package is installed. You can run the <code>obd mirror list</code> command to confirm this: if the Enabled field of the Type=remote entries has changed to False, the remote image sources are disabled.</p>
   </main>

2. Add the packages to the local mirror

   ```shell
   [admin@test001 rpm]$ obd mirror clone *.rpm
   ```

3. View the list of packages in the local mirror

   ```shell
   [admin@test001 rpm]$ obd mirror list local
   ```

4. Select a configuration file

   If OBD on your machine was installed by direct download, you can find the sample configuration files provided by OBD in the `/usr/obd/example` directory.

   If OBD on your machine was installed by decompressing the all-in-one package, you can find the sample configuration files in the `~/.oceanbase-all-in-one/conf` directory. Select an appropriate configuration file based on your resources.

   Small-scale development mode, suitable for personal devices (at least 8 GB of memory):

   * Sample configuration file for local standalone deployment: mini-local-example.yaml
   * Sample configuration file for standalone deployment: mini-single-example.yaml
   * Sample configuration file for standalone deployment + ODP: mini-single-with-obproxy-example.yaml
   * Sample configuration file for distributed deployment + ODP: mini-distributed-with-obproxy-example.yaml
   * Sample configuration file for distributed deployment + ODP + OCP Express: all-components-min.yaml

   Professional development mode, suitable for high-end ECS instances or physical servers (at least 16 cores and 64 GB of available resources):

   * Sample configuration file for local standalone deployment: local-example.yaml
   * Sample configuration file for standalone deployment: single-example.yaml
   * Sample configuration file for standalone deployment + ODP: single-with-obproxy-example.yaml
   * Sample configuration file for distributed deployment + ODP: distributed-with-obproxy-example.yaml
   * Sample configuration file for distributed deployment + ODP + OCP Express: all-components.yaml

5. Modify the configuration file

   The following takes the standalone deployment in small-scale development mode (mini-single-example.yaml) as an example to describe how to modify the configuration file.

   <main id="notice" type='explain'>
     <h4>Note</h4>
     <p>You need to modify the following parameters based on your actual environment.</p>
   </main>

   1. Modify the user information

      ```yaml
      ## Only need to configure when remote login is required
      user:
        username: admin
        # password: your password if need
        key_file: /home/admin/.ssh/id_rsa
        # port: your ssh port, default 22
        # timeout: ssh connection timeout (second), default 30
      ```

      `username` is the username used to log on to the target machine. Make sure that this user has write permission on `home_path`. `password` and `key_file` are both used for user verification; generally, you only need to specify one of them.

      <main id="notice" type='notice'>
        <h4>Notice</h4>
        <p>After you configure the key file path, comment out or delete the <code>password</code> field if your key does not require a passphrase. Otherwise, <code>password</code> is regarded as the passphrase of the key for logon, which causes verification failure.</p>
      </main>
   2. Modify the IP address, ports, and related directories of the machine, and configure the memory-related parameters and passwords

      ```yaml
      oceanbase-ce:
        servers:
          # Please don't use hostname, only IP can be supported
          - ip: 10.10.10.1
        global:
          # Please set devname as the network adaptor's name whose ip is in the setting of servers.
          # if set servers as "127.0.0.1", please set devname as "lo"
          # if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
          devname: eth0
          memory_limit: 6G # The maximum running memory for an observer
          system_memory: 1G # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
          datafile_size: 20G # Size of the data file.
          log_disk_size: 24G # The size of disk space used by the clog files.
          cpu_count: 16
          mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
          rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
          production_mode: false
          # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
          home_path: /home/admin/observer
          # The directory for data storage. The default value is $home_path/store.
          data_dir: /data
          # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
          redo_dir: /redo
          root_password: ****** # root user password, can be empty
          proxyro_password: ****** # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
          zone: zone1
      ```
### Step 3: Deploy OceanBase Database

<main id="notice" type='explain'>
  <h4>Note</h4>
  <p>For detailed usage of the commands in this section, see <a href="../3.obd-command/1.cluster-command-groups.md">Cluster commands</a>.</p>
</main>

1. Deploy OceanBase Database

   ```shell
   [admin@test001 ~]$ obd cluster deploy obtest -c all-components.yaml
   ```

   With Internet access, after you run the `obd cluster deploy` command, OBD checks whether the installation packages required for deployment exist on the target machine. If not, OBD automatically obtains them from the YUM source.

   This command checks whether the directories specified by `home_path` and `data_dir` are empty and reports an error if they are not. If you are sure that all content in these directories can be deleted, add the `-f` option to forcibly clear them.

2. Start OceanBase Database

   ```shell
   [admin@test001 ~]$ obd cluster start obtest
   ```

3. View the status of the OceanBase cluster

   ```shell
   # View the list of clusters managed by OBD
   [admin@test001 ~]$ obd cluster list
   # View the status of the obtest cluster
   [admin@test001 ~]$ obd cluster display obtest
   ```

4. (Optional) Modify the cluster configuration

   OceanBase Database has hundreds of configuration items, and some of them are coupled. We recommend that you do not modify the configurations in the sample configuration file before you get familiar with OceanBase Database. The following example shows how to modify a configuration and make it take effect.

   ```shell
   # Run the edit-config command to enter the editing mode and modify the cluster configuration
   # After you save the changes and exit, OBD tells you how to make them take effect; just copy the command from the output
   [admin@test001 ~]$ obd cluster edit-config obtest
   # The following output is returned after you save the changes
   Search param plugin and load ok
   Search param plugin and load ok
   Parameter check ok
   Save deploy "obtest" configuration
   Use `obd cluster reload obtest` to make changes take effect.
   [admin@test001 ~]$ obd cluster reload obtest
   ```

### Step 4: Connect to OceanBase Database

Run the following command to connect to OceanBase Database by using the OBClient client:

```shell
obclient -h<IP> -P<PORT> -uroot@sys -p
```

where `IP` is the IP address of the OBServer node, and `PORT` is the port for connecting to OceanBase Database. For direct connection, it is the value of the `mysql_port` configuration item, 2881 by default. If you have changed the port, use the actual port number.
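For example, for the standalone deployment configured in this topic (the IP address comes from the sample configuration above, and 2881 is the default port):

```shell
obclient -h10.10.10.1 -P2881 -uroot@sys -p
```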
<main id="notice" type='explain'>
<h4>说明</h4>
<p>部署 OceanBase 集群之后,建议创建业务租户进行业务操作。sys 租户仅做集群管理使用,不适合在业务场景中使用。如何创建租户,详细操作请参考 <a href="https://www.oceanbase.com/docs/community-observer-cn-0000000000965467">创建用户租户</a></p>
</main>
## 后续操作
您可执行如下命令对 OBD 部署的集群进行管理。更多操作详见 [集群命令组](../3.obd-command/1.cluster-command-groups.md)
```shell
# 查看集群列表
obd cluster list
# 查看集群状态,以部署名为 obtest 为例
obd cluster display obtest
# 停止运行中的集群,以部署名为 obtest 为例
obd cluster stop obtest
# 销毁已部署的集群,以部署名为 obtest 为例
obd cluster destory obtest
```
# Deploy OCP Express by using the CLI

This topic describes how to deploy OCP Express by using the CLI in two scenarios, depending on whether an OceanBase cluster already exists in your environment.

<main id="notice" type='notice'>
  <h4>Notice</h4>
  <p>OCP Express requires that every OBServer node in the cluster have a corresponding OBAgent. Otherwise, OCP Express cannot work properly.</p>
</main>

## Prerequisites

* OceanBase: V4.0.0.0 or later.

* OBAgent: V1.3.0 or later.

* OBD: V2.0.0 or later. If the OBD version in your environment is earlier than V2.0.0, upgrade OBD by referring to the **How do I upgrade OBD** section in [FAQ](../5.faq/1.faq.md).

* Java: the Java environment must be installed and configured on the machine where OCP Express is to be deployed. Currently, only JDK 1.8 is supported. For details, see the **How do I configure the Java environment before deploying OCP Express** section in [FAQ](../5.faq/1.faq.md).

* Memory: reserve at least 512 MB of memory for OCP Express, and at least 762 MB for long-term stable operation.

<main id="notice" type='explain'>
  <h4>Note</h4>
  <p>This topic assumes that the machines in use can connect to the Internet, or that the required software (OceanBase, OBAgent, ODP, OCP Express, and so on) has been configured. For details about how to configure the required software, see the <b>Configure OBD</b> section in <a href="../2.quick-start/1.install-obd.md">Install and configure OBD</a>.</p>
</main>

## OCP Express resource specifications

### OCP Express configuration requirements

The OCP Express service incurs considerable computing and storage overhead at runtime, so you need to plan resources based on the specifications of the objects to be managed. The following table estimates the resource consumption of the OCP Express machine on the assumption that each cluster contains 10 tenants. You can calculate based on your actual situation and choose appropriate resource configurations.

| Number of managed machines | CPU (cores) | Memory (GB) | Disk (GB) |
|----------------------------|-------------|-------------|-----------|
| ≤ 10 | 1 | 2 | 20 |
| ≤ 20 | 2 | 4 | 20 |
| ≤ 50 | 4 | 8 | 20 |

<main id="notice" type='explain'>
  <h4>Note</h4>
  <p>Plan the final configuration of the OCP Express machine based on the above overhead data and the resource consumption of the rest of the system. For system stability, we recommend at least 4 cores and 8 GB of memory for the OCP Express machine. Even when a single host manages no more than 10 tenants, 4 cores and 8 GB of memory are still recommended.</p>
</main>

### MetaDB resources

MetaDB stores the metadata and monitoring data of OCP Express. We recommend that you create a separate tenant in the OceanBase cluster for MetaDB.

The required resources vary with the number of OBServer nodes managed by OCP Express. The following table estimates the CPU, memory, and disk resources of each replica of the MetaDB tenant on the assumption that each cluster contains 10 tenants. You can calculate based on your actual situation and choose appropriate resource configurations.

| Number of managed machines | CPU (cores) | Memory (GB) | Disk (GB) |
|----------------------------|-------------|-------------|-----------|
| ≤ 10 | 1 | 4 | 50 |
| ≤ 20 | 2 | 8 | 100 |
| ≤ 50 | 4 | 16 | 200 |

<main id="notice" type='explain'>
  <h4>Note</h4>
  <p>The resource consumption given here is only a rough estimate. The actual MetaDB resource consumption varies with the business usage.</p>
</main>

## Scenario 1: Deploy an OceanBase cluster and OCP Express together

To deploy an OceanBase cluster and OCP Express at the same time, configure the tenant information required by OCP Express under the oceanbase-ce component. The configuration of the ocp-express component can directly declare dependencies on other components (oceanbase-ce, obproxy-ce, and obagent) in the configuration file. In this case, you only need to configure `home_path` and `memory_size`; the other parameters are obtained and completed from the dependent components.

The related configuration under the oceanbase-ce component is as follows:
```yaml
oceanbase-ce:
servers:
- name: server1
ip: 172.xx.xxx.4
global:
home_path: xxxx
...
ocp_meta_tenant:
tenant_name: ocp_meta
max_cpu: 2
memory_size: 6442450944
ocp_meta_db: ocp_meta
ocp_meta_username: ocp_user
ocp_meta_password:
...
```
| Configuration item | Required | Description |
|--------------------|----------|--------------------------------------|
| ocp_meta_tenant->tenant_name | Optional | The name of the meta tenant created for OCP Express. |
| ocp_meta_tenant->max_cpu | Optional | The maximum CPU allocated to the meta tenant. |
| ocp_meta_tenant->memory_size | Optional | The memory allocated to the meta tenant. |
| ocp_meta_db | Optional | The database used for data storage in OCP Express. |
| ocp_meta_username | Optional | The meta username of OCP Express. |
| ocp_meta_password | Optional | The password of the user. |

The above lists several important configuration items as examples. The configurations under `ocp_meta_tenant` are all passed in as parameters when the tenant is created. For the supported parameters, see the introduction to the `obd cluster tenant create` command in [Cluster commands](../3.obd-command/1.cluster-command-groups.md).
<main id="notice" type='explain'>
<h4>说明</h4>
<p>上述配置项都不配置的情况下,OBD 会按照默认规格为 OCP Express 创建 meta 租户。但是由于用户的租户数量是无法预知的,所以推荐用户根据需要自行修改 meta 租户的规格,推荐的计算公式如下:</p>
<ul>
<li>
<p>data_disk=3.5 + 节点数*0.5,单位为 GB。</p>
</li>
<li>
<p>log_disk=4.5 + 节点数*0.5 + 租户数*0.5,单位为 GB。</p>
</li>
</ul>
</main>
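For example, for a 3-node cluster that is expected to host 5 tenants, the formulas above give data_disk = 3.5 + 3 * 0.5 = 5 GB and log_disk = 4.5 + 3 * 0.5 + 5 * 0.5 = 8.5 GB.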
In the bootstrap stage of the oceanbase-ce component, if OBD finds that you have configured the above fields or that the current cluster contains the ocp-express component, it automatically creates the corresponding meta tenant and user.

The configuration of the ocp-express component is as follows:
```yaml
ocp-express:
depends:
- oceanbase-ce
- obproxy-ce
- obagent
servers:
- name: server2
ip: 172.xx.xxx.5
global:
# The working directory for ocp-express. ocp-express is started under this directory. This is a required field.
home_path: /home/oceanbase/ocp-server
# log_dir: /home/oceanbase/ocp-server/log # The log directory of ocp express server. The default value is {home_path}/log.
    memory_size: 1G # The memory size of ocp-express server. The recommended value is 512MB + (expected node num * expected tenant num) * 60MB.
logging_file_total_size_cap: 10GB # The total log file size of ocp-express server
```
The following table describes the configuration items:

| Configuration item | Required | Description |
|--------------|----------|--------------------------------------|
| home_path | Required | The working directory of OCP Express. OCP Express starts under this directory. |
| memory_size | Required | The memory size of the OCP Express server. Recommended formula: memory_size = 512 MB + (number of expected nodes * number of expected tenants) * 60 MB. </br>The number of expected tenants must include the sys tenant and the ocp meta tenant. |
| logging_file_total_size_cap | Optional | The total size of log files. The default value is 1 GB. <blockquote>**Notice**</br>The unit of this parameter must be GB or MB. If G or M is used, an error is reported and OCP Express cannot be deployed.</blockquote> |
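For example, with 3 expected nodes and 4 expected tenants (including the sys and ocp meta tenants), the formula gives memory_size = 512 MB + (3 * 4) * 60 MB = 1232 MB, so a setting such as 1.5G leaves a reasonable margin.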
After you modify the configuration file, run the following commands to deploy and start the cluster:

```shell
# Deploy the cluster
obd cluster deploy <deploy name> -c config.yaml
# Start the cluster
obd cluster start <deploy name>
```

After the cluster starts, log on to OCP Express with the logon address, account, and password shown in the output. You are prompted to change the password at the first logon; after that, log on with the new account and password to use OCP Express.

## Scenario 2: Add OCP Express to an existing OceanBase cluster

If you have already deployed an OceanBase cluster and want to add OCP Express separately, follow this section.

1. (Optional) If the OBAgent version in the cluster is earlier than V1.3.0, upgrade OBAgent by referring to the following commands.
   ```shell
   # Query the hash value of OBAgent
   obd mirror list oceanbase.community.stable | grep -e " obagent "
   # Upgrade OBAgent
   obd cluster upgrade test -c obagent -V 1.3.0 --usable=<obagent_hash>
   ```

   Replace `obagent_hash` in the command with the hash value of the corresponding OBAgent version.

2. Create a meta tenant, a user, and a database for OCP Express, and grant the required privileges to the user.

   1. Create the meta tenant

      ```shell
      obd cluster tenant create <deploy name> -n <tenant_name> --max-cpu=2 --memory-size=4G --log-disk-size=3G --max-iops=10000 --iops-weight=2 --unit-num=1 --charset=utf8
      ```

      For details about the `obd cluster tenant create` command, see its introduction in [Cluster commands](../3.obd-command/1.cluster-command-groups.md).

      You can also log on to OceanBase Database and create a tenant for OCP Express there. For details, see [Create a user tenant](https://www.oceanbase.com/docs/community-observer-cn-0000000000965467).

   2. Create the user and grant privileges

      ```sql
      create user <ocp_user> identified by '<ocp_password>';
      grant all on *.* to <ocp_user>;
      ```

   3. Create the database

      ```sql
      create database <database_name>;
      ```

3. Modify the configuration file

   A sample configuration file:
```yaml
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
# port: your ssh port, default 22
# timeout: ssh connection timeout (second), default 30
ocp-express:
servers:
- name: server1
ip: xxx.xxx.xxx.xxx
global:
# The working directory for ocp-express. ocp-express is started under this directory. This is a required field.
home_path: /home/oceanbase/ocp-server
log_dir: /home/oceanbase/ocp-server/log # The log directory of ocp express server. The default value is {home_path}/log.
memory_size: 1G # The memory size of ocp-express server. The recommend value is 512MB * (expect node num + expect tenant num) * 60MB.
jdbc_url: jdbc:oceanbase://xxx.xxx.xxx.xxx:2881/meta_db # jdbc connection string to connect to the meta db
jdbc_username: username # username to connect to meta db
jdbc_password: '<meta db password>' # password to connect to meta db
port: 8180 # The http port to use.
cluster_name: obcluster # the cluster name of oceanbase cluster. Refer to the configuration item appname of oceanbase
ob_cluster_id: 1 # the cluster id of oceanbase cluster. Refer to the configuration item cluster_id of oceanbase
root_sys_password: <password for root@sys>
agent_username: <obagent> # The username of obagent
agent_password: <password> # The password of obagent
logging_file_total_size_cap: 10GB # The total log file size of ocp-express server
server_addresses: # The cluster info for oceanbase cluster
- address: 127.0.0.1 # The address of oceanbase server
svrPort: 2882 # The rpc port of oceanbase server
sqlPort: 2881 # The mysql port of oceanbase server
agentMgrPort: 8089 # The port of obagent manager process
agentMonPort: 8088 # The port of obagent monitor process
```
| Configuration item | Required | Description |
|--------------------|----------|--------------------------------------|
| home_path | Required | The working directory of OCP Express. OCP Express starts under this directory. |
| log_dir | Optional | The log directory of the OCP Express server. The default value is the log directory under `home_path`. |
| memory_size | Required | The memory size of the OCP Express server. Recommended formula: memory_size = 512 MB + (number of expected nodes * number of expected tenants) * 60 MB. </br>The number of expected tenants must include the sys tenant and the ocp meta tenant. |
| jdbc_url | Required | The JDBC connection string for connecting to the meta tenant. Make sure that the database used in the connection string has been created. |
| jdbc_username | Required | The username for connecting to the meta tenant. Make sure that this user has been created. <blockquote>**Note**</br>The username format is username@tenant_name. If you specify only username and omit the tenant name, username@sys is used by default. The sys tenant cannot be used as the meta tenant.</blockquote> |
| jdbc_password | Required | The password of the user for connecting to the meta tenant. |
| port | Optional | The HTTP port for accessing OCP Express. |
| cluster_name | Required | The name of the OceanBase cluster. It must be the same as the `appname` configuration item of the oceanbase-ce component. |
| ob_cluster_id | Required | The ID of the OceanBase cluster. It must be the same as the `cluster_id` configuration item of the oceanbase-ce component. |
| root_sys_password | Required | The password of the root@sys user of the OceanBase cluster. |
| agent_username | Required | The username of OBAgent. |
| agent_password | Required | The password of OBAgent. |
| logging_file_total_size_cap | Optional | The total size of log files. The default value is 1 GB. <blockquote>**Notice**</br>The unit of this parameter must be GB or MB. If G or M is used, an error is reported and OCP Express cannot be deployed.</blockquote> |
| server_addresses->address | Required | The IP address of the OBServer node. |
| server_addresses->svrPort | Required | The RPC port of the OBServer node. It must be the same as the `rpc_port` configuration item of the corresponding node in the oceanbase-ce component. |
| server_addresses->sqlPort | Required | The MySQL port of the OBServer node. It must be the same as the `mysql_port` configuration item of the corresponding node in the oceanbase-ce component. |
| server_addresses->agentMgrPort | Required | The port of the OBAgent management process. Set it to the port actually configured in OBAgent. |
| server_addresses->agentMonPort | Required | The port of the OBAgent monitoring process. Set it to the port actually configured in OBAgent. |

4. After you modify the configuration file, run the following commands to deploy and start the cluster.

   ```shell
   # Deploy the cluster
   obd cluster deploy <deploy name> -c config.yaml
   # Start the cluster
   obd cluster start <deploy name>
   ```

5. After the cluster starts, log on to OCP Express with the logon address, account, and password shown in the output. You are prompted to change the password at the first logon; after that, log on with the new account and password to use OCP Express.
# Use OCP to take over a cluster deployed by OBD

This topic takes a deployment named test, started with the configuration file distributed-example.yaml, as an example to describe how to use OCP to take over a cluster deployed by OBD.

## Prerequisites

- Make sure that the installed OBD version is V1.3.0 or later.

- Make sure that the installed OCP version is V3.1.1-ce or later.

## Modify the OceanBase cluster

### Check whether the conditions are met

Before you use OCP to take over a cluster deployed by OBD, run the following command to check whether the takeover conditions are met. If not, you can make modifications based on the prompts by referring to the sections below.

```shell
obd cluster check4ocp <deploy-name>
# Example
obd cluster check4ocp test
```

For details about the `obd cluster check4ocp` command, see [obd cluster check4ocp](../3.obd-command/1.cluster-command-groups.md).

### Set the IDC information

Configuration files in the default style do not support IDC information, so you need to use a feature introduced in OBD V1.3.0 to convert the configuration file to the cluster style.

You can run the following command for the conversion:

```shell
obd cluster chst <deploy name> --style <STYLE> [-c/--components]
# Example
obd cluster chst test -c oceanbase-ce --style cluster
```

For details about the `obd cluster chst` command, see [obd cluster chst](../3.obd-command/1.cluster-command-groups.md).

After the configuration style is converted, run the following command to enter the editing mode and add IDC information for the zones.

```shell
obd cluster edit-config <deploy name>
# Example
obd cluster edit-config test
```

For details about the `obd cluster edit-config` command, see [obd cluster edit-config](../3.obd-command/1.cluster-command-groups.md).

Reference configuration:
```yaml
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
# port: your ssh port, default 22
# timeout: ssh connection timeout (second), default 30
oceanbase-ce:
style: cluster
config:
devname: eth0
memory_limit: 64G
system_memory: 30G
datafile_disk_percentage: 20
syslog_level: INFO
enable_syslog_wf: false
enable_syslog_recycle: true
max_syslog_file_count: 4
skip_proxy_sys_private_check: true
enable_strict_kernel_release: false
mysql_port: 2881
rpc_port: 2882
home_path: /root/observer
root_password: ********
zones:
zone1:
idc: idc1
servers:
- name: server1
ip: xxx.xxx.xxx.xxx
zone2:
idc: idc2
servers:
- name: server2
ip: xxx.xxx.xxx.xxx
zone3:
idc: idc3
servers:
- name: server3
ip: xxx.xxx.xxx.xxx
```
After you modify the configuration file, run the following command to make the changes take effect.

```shell
obd cluster reload <deploy name>
# Example
obd cluster reload test
```

For details about the `obd cluster reload` command, see [obd cluster reload](../3.obd-command/1.cluster-command-groups.md).

### Configure the password

When OCP takes over a cluster, you must provide the password that the root user of the sys tenant uses to connect to the cluster. You can run the following command to edit the configuration file and configure the password with `root_password`.

```shell
obd cluster edit-config <deploy name>
# Example
obd cluster edit-config test
```

A partial sample of the configuration file:
```yaml
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
# port: your ssh port, default 22
# timeout: ssh connection timeout (second), default 30
oceanbase-ce:
servers:
- name: server1
# Please don't use hostname, only IP can be supported
ip: xxx.xxx.xxx.xxx
- name: server2
ip: xxx.xxx.xxx.xxx
- name: server3
ip: xxx.xxx.xxx.xxx
global:
# The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
home_path: /root/observer
# External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
mysql_port: 2881
# Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
rpc_port: 2882
# The maximum running memory for an observer. When ignored, autodeploy calculates this value based on the current server available resource.
memory_limit: 64G
# The reserved system memory. system_memory is reserved for general tenants. The default value is 30G. Autodeploy calculates this value based on the current server available resource.
system_memory: 30G
# Password for root. The default value is empty.
root_password: ********
# Password for proxyro. proxyro_password must be the same as observer_sys_password. The default value is empty.
# proxyro_password:
server1:
zone: zone1
server2:
zone: zone2
server3:
zone: zone3
```
The above is a sample configuration file in the default style. For a cluster-style configuration file, see the sample in **Set the IDC information** above.

After you modify the configuration file, run the following command to make the changes take effect.

```shell
obd cluster reload <deploy name>
# Example
obd cluster reload test
```

### Modify the user

OCP requires that processes be started by the admin user and that the admin user have passwordless sudo privileges, so you need to prepare an admin user with passwordless sudo. If this condition is already met, go directly to **Change the user** below.

#### Create the user

As the root user, you can create the admin user on the machines where OceanBase Database is deployed by referring to the following commands.

```shell
# Create the user group
groupadd admin
# Create the user
useradd admin -g admin
```

After you create the admin user, configure passwordless logon for it. For details about how to configure passwordless SSH logon, see [Set up passwordless SSH logon](https://www.oceanbase.com/docs/community-observer-cn-0000000000160095).
<main id="notice" type='notice'>
<h4>注意</h4>
<ol>
<li>
<p>您需要为 admin 用户配置 SSH 免密登录。</p>
</li>
<li>
<p>这里需要配置的为私钥,即 <code>id_rsa</code></p>
</li>
</ol>
</main>
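A minimal sketch of the passwordless logon setup, run as the admin user on the control machine; the target IP address below is a placeholder:

```shell
# Generate a key pair if one does not exist yet
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
# Copy the public key to the target machine
ssh-copy-id admin@192.168.1.2
# Verify that logon works without a password
ssh admin@192.168.1.2 "echo ok"
```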
#### Passwordless sudo

Perform the following operations as the root user:

```shell
# Add write permission to the sudoers file
chmod u+w /etc/sudoers
# vi /etc/sudoers
echo 'admin ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
# Revoke write permission on the sudoers file
chmod u-w /etc/sudoers
```
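To verify the setup, you can switch to the admin user and run a sudo command; it should succeed without prompting for a password:

```shell
su - admin
sudo -n whoami   # expected output: root
```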
#### Change the user

Run the following command to enter the editing mode and modify the user field.

```shell
obd cluster edit-config <deploy name>
# Example
obd cluster edit-config test
```

Sample configuration after the modification:

```yaml
## Only need to configure when remote login is required
user:
  username: admin
  # password: your password if need
  key_file: your ssh-key file path if need # Set this to the path of the id_rsa file of the admin user
  # port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30
```

After you modify the configuration file, run the following command to make the changes take effect.

```shell
obd cluster restart <deploy name>
# Example
obd cluster restart test --wp
```

For details about the `obd cluster restart` command, see [obd cluster restart](../3.obd-command/1.cluster-command-groups.md).
### Multiple observer processes on a single machine

OCP requires that each machine run only one observer process; the scenario of multiple server processes on a single machine is not supported yet. If you need OCP to take over a cluster with multiple server processes on one machine, manually stop the other observer processes so that only one observer process is running on each machine.

<main id="notice" type='explain'>
  <h4>Note</h4>
  <p>After all the above operations are complete, you can run the <code>obd cluster check4ocp <deploy name></code> command again to check whether the takeover conditions are met. If not, you can make modifications based on the prompts.</p>
</main>

## Use OCP to take over the cluster

### Handle the proxyro password

Before OCP takes over the cluster, confirm the password of the proxyro user in the cluster to be taken over. If this password is not the default value, change the proxyro password in OCP to the password of the proxyro user in the cluster to be taken over.

You can call the OCP API to make the change:

```shell
curl --user user:pass -X POST "http://ocp-site-url:port/api/v2/obproxy/password" -H "Content-Type:application/json" -d '{"username":"proxyro","password":"*****"}'
```

Where:

- `user:pass` are the OCP username and password, and the calling user must have admin privileges.

- The `password` after the `-d` parameter is the password of the proxyro user of the cluster to be taken over.

This operation generates an O&M task that changes the proxyro password of the OceanBase clusters currently managed by OCP and modifies the corresponding configuration of the OBProxy clusters.

You can proceed to the subsequent steps only after the O&M task succeeds. If the task fails, retry it until it succeeds before you perform the subsequent steps.

![Task example](https://obbusiness-private.oss-cn-shanghai.aliyuncs.com/doc/img/obd/V1.3.0/zh-CN/4.configuration-file-description-01.png)

### Take over the OceanBase cluster in OCP

You can take over the OceanBase cluster on the OCP web page. For the detailed steps, see [Take over a cluster](https://www.oceanbase.com/docs/community-ocp-cn-10000000000866526).

After OCP takes over the OceanBase cluster, create an OBProxy cluster and associate it with the taken-over OceanBase cluster. For the detailed steps, see [Create an OBProxy cluster](https://www.oceanbase.com/docs/community-ocp-cn-10000000000866412).

If the original ODP uses a VIP, you can add the ODPs newly created in OCP to the VIP one by one, and then remove the original ODPs from the VIP one by one.

### FAQ

1. Why do I need to change the password of the proxyro account in OCP?

   The ODPs managed by OCP are generally started through configserver and are designed to connect to multiple OceanBase clusters. However, ODP can only change the proxyro user password globally, so this configuration is also global in OCP. The proxyro password is used only by ODP to query metadata; changing it does not affect business tenants.

2. When switching to new ODPs, can the original machines be reused?

   If multiple ODPs were deployed and accessed through a VIP, you can take them offline one by one, deploy ODP on the same machines in OCP, and add them back to the VIP. In this way, the machines are reused.

3. Is it OK not to switch the ODP?

   Yes. The original ODP can still connect to the taken-over OceanBase cluster. However, we still recommend that you create new ODPs in OCP to replace it for easier O&M later.

# Add GUI monitoring for an existing cluster

OBD supports the deployment of the prometheus and grafana components since V1.6.0. If you want to add GUI monitoring for a previously deployed cluster, follow this topic.

This topic covers three scenarios. Choose the one that matches your cluster.

<main id="notice" type='explain'>
  <h4>Note</h4>
  <p>The configuration samples in this topic are for reference only. For detailed configurations and descriptions, go to the <code>/usr/obd/example</code> directory and view the samples of the corresponding components.</p>
</main>

## Scenario 1: OBAgent is not deployed in the existing cluster

If OBAgent is not deployed in your existing cluster, to add GUI monitoring you need to create a new cluster that contains the obagent, prometheus, and grafana components.

OBAgent is configured separately to collect the monitoring information of OceanBase Database. In the configuration file, declare that Prometheus depends on the obagent component and that Grafana depends on the prometheus component.

A sample configuration file:
```yaml
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
# port: your ssh port, default 22
# timeout: ssh connection timeout (second), default 30
obagent:
servers:
# Please don't use hostname, only IP can be supported
- 192.168.1.2
- 192.168.1.3
- 192.168.1.4
global:
# The working directory for obagent. obagent is started under this directory. This is a required field.
home_path: /root/obagent
# The port that pulls and manages the metrics. The default port number is 8088.
server_port: 8088
# Debug port for pprof. The default port number is 8089.
pprof_port: 8089
# Log level. The default value is INFO.
log_level: INFO
# Log path. The default value is log/monagent.log.
log_path: log/monagent.log
# Encryption method. OBD supports aes and plain. The default value is plain.
crypto_method: plain
# Path to store the crypto key. The default value is conf/.config_secret.key.
# crypto_path: conf/.config_secret.key
# Size for a single log file. Log size is measured in Megabytes. The default value is 30M.
log_size: 30
# Expiration time for logs. The default value is 7 days.
log_expire_day: 7
# The maximum number for log files. The default value is 10.
log_file_count: 10
# Whether to use local time for log files. The default value is true.
# log_use_localtime: true
# Whether to enable log compression. The default value is true.
# log_compress: true
# Username for HTTP authentication. The default value is admin.
http_basic_auth_user: ******
# Password for HTTP authentication. The default value is root.
http_basic_auth_password: ******
# Username for debug service. The default value is admin.
pprof_basic_auth_user: ******
# Password for debug service. The default value is root.
pprof_basic_auth_password: ******
# Monitor username for OceanBase Database. The user must have read access to OceanBase Database as a system tenant. The default value is root.
monitor_user: root
# Monitor password for OceanBase Database. The default value is empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the root_password in oceanbase-ce.
monitor_password:
# The SQL port for observer. The default value is 2881. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the mysql_port in oceanbase-ce.
sql_port: 2881
# The RPC port for observer. The default value is 2882. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the rpc_port in oceanbase-ce.
rpc_port: 2882
# Cluster name for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the appname in oceanbase-ce.
cluster_name: obcluster
# Cluster ID for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the cluster_id in oceanbase-ce.
cluster_id: 1
    # Monitor status for OceanBase Database. Active is to enable. Inactive is to disable. The default value is active. When you deploy a cluster automatically, OBD decides whether to enable this parameter based on depends.
ob_monitor_status: active
# Monitor status for your host. Active is to enable. Inactive is to disable. The default value is active.
host_monitor_status: active
# Whether to disable the basic authentication for HTTP service. True is to disable. False is to enable. The default value is false.
disable_http_basic_auth: false
# Whether to disable the basic authentication for the debug interface. True is to disable. False is to enable. The default value is false.
disable_pprof_basic_auth: false
# Synchronize the obagent-related information to the specified path of the remote host, as the targets specified by `file_sd_config` in the Prometheus configuration.
# For prometheus that depends on obagent, it can be specified to $home_path/targets of prometheus.
# For independently deployed prometheus, specify the files to be collected by setting `config` -> `scrape_configs` -> `file_sd_configs` -> `files`.For details, please refer to prometheus-only-example.yaml.
# target_sync_configs:
# - host: 192.168.1.1
# target_dir: /root/prometheus/targets
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
# port: your ssh port, default 22
# timeout: ssh connection timeout (second), default 30
192.168.1.2:
# Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
zone_name: zone1
192.168.1.3:
# Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
zone_name: zone2
192.168.1.4:
# Zone name for your observer. The default value is zone1. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the zone name in oceanbase-ce.
zone_name: zone3
prometheus:
depends:
- obagent
servers:
- 192.168.1.5
global:
home_path: /root/prometheus
grafana:
depends:
- prometheus
servers:
- 192.168.1.5
global:
home_path: /root/grafana
login_password: *********
```
After you modify the configuration file, run the following commands to deploy and start the new cluster:

```shell
obd cluster deploy <new deploy name> -c new_config.yaml
obd cluster start <new deploy name>
```

After the cluster starts, access the Grafana page based on the displayed information to view the monitoring information of the existing cluster.

## Scenario 2: OBAgent is already deployed in the existing cluster

If OBAgent is already deployed in your existing cluster, to add GUI monitoring you need to create a new cluster that contains the prometheus and grafana components.

In this case, a dependency on obagent cannot be declared, so the association must be made manually. View the `conf/prometheus_config/prometheus.yaml` file under the OBAgent installation directory of the existing cluster, and copy the corresponding configuration to `global` -> `config` of the prometheus component in the new cluster configuration. A sample configuration:
```yaml
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
# port: your ssh port, default 22
# timeout: ssh connection timeout (second), default 30
prometheus:
servers:
- 192.168.1.5
global:
# The working directory for prometheus. prometheus is started under this directory. This is a required field.
home_path: /root/prometheus
config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
global:
scrape_interval: 1s
evaluation_interval: 10s
rule_files:
- "rules/*rules.yaml"
scrape_configs:
- job_name: prometheus
metrics_path: /metrics
scheme: http
static_configs:
- targets:
- 'localhost:9090'
- job_name: node
basic_auth:
username: ******
password: ******
metrics_path: /metrics/node/host
scheme: http
static_configs:
- targets:
- 192.168.1.2:8088
- job_name: ob_basic
basic_auth:
username: ******
password: ******
metrics_path: /metrics/ob/basic
scheme: http
static_configs:
- targets:
- 192.168.1.2:8088
- job_name: ob_extra
basic_auth:
username: ******
password: ******
metrics_path: /metrics/ob/extra
scheme: http
static_configs:
- targets:
- 192.168.1.2:8088
- job_name: agent
basic_auth:
username: ******
password: ******
metrics_path: /metrics/stat
scheme: http
static_configs:
- targets:
- 192.168.1.2:8088
grafana:
servers:
- 192.168.1.5
depends:
- prometheus
global:
home_path: /root/grafana
login_password: ********* # Grafana login password. The default value is 'oceanbase'.
```
<main id="notice" type='explain'>
<h4>说明</h4>
<p>上文配置文件示例中 <code>basic_auth</code> 配置项的用户名和密码需和 OBAgent 配置文件中 http_basic_auth_xxx 配置项相对应。</p>
</main>
修改配置文件后,执行如下命令部署新集群:
```shell
obd cluster deploy <new deploy name> -c new_config.yaml
```
部署完成后,将 OBAgent 安装目录下 `conf/prometheus_config/rules` 目录复制到 Prometheus 的安装目录下。
执行如下命令启动新集群:
```shell
obd cluster start <new deploy name>
```
集群启动后,根据展示信息访问 Grafana 页面,即可查看现有集群的监控信息。
<main id="notice" type='notice'>
<h4>注意</h4>
<ol>
<li>
<p><code>scrape_configs</code> 中的 Prometheus 监控项中 <code>'localhost:9090'</code> 需要根据当前 Prometheus 的监听地址进行修改,如果当前 Prometheus 开启了认证,也需要对应的配置 <code>basic_auth</code>。这里提到的监听地址为部署 Prometheus 的地址,即 Prometheus 配置中的 address 和 port 配置项。</p>
</li>
<li>
<p>如果原有集群的 obagent 节点有所变化,需通过 <code>obd cluster edit-config <new deploy name></code> 同步 OBAgent 安装目录下 <code>conf/prometheus_config/prometheus.yaml</code> 的内容。</p>
</li>
</ol>
</main>
## 场景三:监控多个集群并动态同步 OBAgent 变化
如若希望 Prometheus 可以采集多个集群的监控信息,或者动态同步 OBAgent 的变化,可在场景二的基础上做少许改动。
将 Prometheus 配置中的 `static_configs` 替换为 `file_sd_config` 来获取和同步 OBAgent 的节点信息。如下示例中表示收集 Prometheus 安装目录(`home_path`)下 targets 目录下的所有 yaml 文件。
<main id="notice" type='explain'>
<h4>说明</h4>
<p>Prometheus 安装目录下的 targets 目录需在原有集群配置文件中 obagent 组件下进行相关配置才会被创建。具体配置参见 <a href="#修改被监控集群配置">修改被监控集群配置</a></p>
</main>
```yaml
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
# port: your ssh port, default 22
# timeout: ssh connection timeout (second), default 30
prometheus:
servers:
- 192.168.1.5
global:
# The working directory for prometheus. prometheus is started under this directory. This is a required field.
home_path: /root/prometheus
config: # Configuration of the Prometheus service. The format is consistent with the Prometheus config file. Corresponds to the `config.file` parameter.
global:
scrape_interval: 1s
evaluation_interval: 10s
rule_files:
- "rules/*rules.yaml"
scrape_configs:
- job_name: prometheus
metrics_path: /metrics
scheme: http
static_configs:
- targets:
- 'localhost:9090'
- job_name: node
basic_auth:
username: ******
password: ******
metrics_path: /metrics/node/host
scheme: http
file_sd_configs:
- files:
- targets/*.yaml
- job_name: ob_basic
basic_auth:
username: ******
password: ******
metrics_path: /metrics/ob/basic
scheme: http
file_sd_configs:
- files:
- targets/*.yaml
- job_name: ob_extra
basic_auth:
username: ******
password: ******
metrics_path: /metrics/ob/extra
scheme: http
file_sd_configs:
- files:
- targets/*.yaml
- job_name: agent
basic_auth:
username: ******
password: ******
metrics_path: /metrics/stat
scheme: http
file_sd_configs:
- files:
- targets/*.yaml
grafana:
servers:
- 192.168.1.5
depends:
- prometheus
global:
home_path: /root/grafana
login_password: ********* # Grafana login password. The default value is 'oceanbase'.
```
<main id="notice" type='explain'>
<h4>说明</h4>
<p>上文配置文件示例中 <code>basic_auth</code> 配置项的用户名和密码需和 OBAgent 配置文件中 http_basic_auth_xxx 配置项相对应。</p>
</main>
修改配置文件后,执行如下命令部署新集群:
```shell
obd cluster deploy <new deploy name> -c new_config.yaml
```
部署完成后,将 OBAgent 安装目录下的 `conf/prometheus_config/rules` 目录复制到 Prometheus 的安装目录下。
执行如下命令启动新集群:
```shell
obd cluster start <new deploy name>
```
部署好新集群后,根据展示信息访问 Grafana 页面,此时无法查看被监控集群的监控信息,还需修改被监控集群的 obagent 配置。
### 修改被监控集群配置
若要在 Prometheus 安装目录下创建 targets 目录,需执行 `obd cluster edit-config <deploy name>` 命令修改配置文件,在配置文件中增加配置项 `target_sync_configs` 指向 Prometheus 安装目录下的 targets 目录(默认会使用当前集群的 user 配置,如果有差别可以按照示例进行设置)。
```yaml
obagent:
servers:
# Please don't use hostname, only IP can be supported
- 192.168.1.2
- 192.168.1.3
- 192.168.1.4
global:
....
target_sync_configs:
- host: 192.168.1.5
target_dir: /root/prometheus/targets
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
# port: your ssh port, default 22
# timeout: ssh connection timeout (second), default 30
...
```
After you modify the configuration file, restart the cluster as prompted. After the restart, access the Grafana page to view the monitoring information of the existing cluster.

<main id="notice" type='notice'>
  <h4>Notice</h4>
  <ol>
  <li>
  <p>The <code>'localhost:9090'</code> in the Prometheus monitoring item in <code>scrape_configs</code> must be modified based on the current listening address of Prometheus. If authentication is enabled for the current Prometheus, configure <code>basic_auth</code> accordingly. The listening address here is the address where Prometheus is deployed, that is, the address and port configuration items in the Prometheus configuration.</p>
  </li>
  <li>
  <p>The HTTP usernames and passwords of the OBAgents collected by Prometheus must be consistent. If they are not, split the collection items.</p>
  </li>
  </ol>
</main>
## How do I upgrade obproxy to obproxy-ce 3.2.3?

Since the open-source obproxy component was officially renamed obproxy-ce after V3.2.3, you need to [run a script](2.how-to-upgrade-obproxy-to-obproxy-ce-3.2.3.md) as the OBD execution user to modify the meta information, and then run the following command to upgrade.

```shell
obd cluster upgrade <deploy name> -c obproxy-ce -V 3.2.3
```

Since V1.3.0, OBD supports only the obproxy-ce component name for deploying ODP V3.2.3 and later. However, if you upgraded OBD from an earlier version to V1.3.0 or later with the `obd update` command, you can still use the obproxy component name to install ODP versions earlier than V3.2.3 (that is, OBD no longer ships the obproxy plugin library since V1.3.0, but an obproxy plugin library that already exists in the local plugin library is retained).

<main id="notice" type='explain'>
  <h4>Note</h4>
  <ul>
  <li>
  <p>If you find that the old plugins are unusable after OBD is upgraded, you can directly install an earlier version of OBD from the RPM package to overwrite the current one.</p>
  </li>
  <li>
  <p>If you have installed a new version of OBD but want to use obproxy, you can also install a version of OBD earlier than V1.3.0, and after obproxy is deployed, run the <code>obd update</code> command to upgrade OBD or install the new version of OBD to overwrite it.</p>
  </li>
  </ul>
</main>
## What do I do if an exception occurs when I use OBD to upgrade ODP?

If the following problem occurs during the ODP upgrade:

```bash
Stop obproxy ok
obproxy program health check ok
Connect to obproxy x
```

That is, the OBD machine cannot connect to ODP. The possible reasons include the following:

1. proxysys disables access from IP addresses other than 127.0.0.1, so the machine where OBD resides cannot connect to it. In this case, run the following command to connect to proxysys first:

   ```shell
   obclient -h<obproxy_ip> -uroot@proxysys -P<obproxy_post> -p<obproxy_pwd>
   ```

   <main id="notice" type='explain'>
     <h4>Note</h4>
     <p>If you cannot connect to proxysys with the proxysys password you set, try an empty password or <code>proxysys</code> as the password.</p>
   </main>

   Then run the `alter proxyconfig set skip_proxy_sys_private_check = true` command.

If troubleshooting shows that the exception is not caused by the above reasons, you can ask in the official [Q&A forum](https://open.oceanbase.com/answer), where professionals will answer your questions.
## What do I do if the ODP service cannot be started after OBD is upgraded?

OBD initializes the ODP password after an upgrade. If you have set `obproxy_sys_password`, run the following command to connect to proxysys:

```bash
obclient -h<obproxy_ip> -uroot@proxysys -P<obproxy_post> -p<obproxy_pwd>
```

<main id="notice" type='explain'>
  <h4>Note</h4>
  <p>If you cannot connect to proxysys with the proxysys password you set, try an empty password or <code>proxysys</code> as the password.</p>
</main>

Then run the `alter proxyconfig set obproxy_sys_password = ''` command to set the proxysys password to empty, or make it consistent with the `obproxy_sys_password` value in the configuration file.
## How do I configure the Java environment before deploying OCP Express?

You can log on to the machine where OCP Express is to be deployed and install the Java environment in one of the following ways, depending on whether the machine has Internet access.

### Online installation

On CentOS or RedHat, run the following command:

```bash
sudo yum install java-1.8.0-openjdk
```

On Ubuntu or Debian, run the following commands:

```bash
sudo apt-get update
sudo apt-get install openjdk-8-jre
```

### Offline installation

1. Click to download the required installation package: [x86_64 architecture](https://github.com/dragonwell-project/dragonwell8/releases/download/dragonwell-extended-8.14.15_jdk8u362-ga/Alibaba_Dragonwell_Extended_8.14.15_x64_linux.tar.gz) or [arm architecture](https://github.com/alibaba/dragonwell8/releases/download/dragonwell-extended-8.14.15_jdk8u362-ga/Alibaba_Dragonwell_Extended_8.14.15_aarch64_linux.tar.gz).

2. Upload the downloaded package to the target machine, and run the following commands in the directory where the package is located to decompress and install it.
   ```bash
   # Decompress the package
   tar -zxvf Alibaba_Dragonwell_Extended_8*.tar.gz
   # Enter the decompressed directory
   cd dragonwell*
   # Create a symbolic link
   ln -s `pwd`/bin/java /usr/bin/java
   ```
# How to upgrade obproxy to obproxy-ce 3.2.3

Since the open-source obproxy component was officially renamed obproxy-ce, the following upgrade command reports the error `No such package obproxy-3.2.3`.

```shell
obd cluster upgrade <deploy name> -c obproxy -V 3.2.3
```

You need to run the following **script** as the OBD execution user to modify the meta information, and then run the following command to upgrade ODP.

```shell
obd cluster upgrade <deploy name> -c obproxy-ce -V 3.2.3
```

## Script

```shell
OBD_HOME=${OBD_HOME:-${HOME}}/.obd
obproxy_repository=${OBD_HOME}/repository/obproxy
obproxy_ce_repository=${OBD_HOME}/repository/obproxy-ce
...
```
Cause: port conflicts exist in the configuration file.

Solution: you can run the `obd cluster edit-config` command to open the configuration file, check the port configurations, and modify them.

### OBD-1001: x.x.x.x:xxx port is already used

Cause: the port is already occupied.

Solution: check the configuration and change the port. You can use any of the following methods based on your situation.
- Method 1: if you deploy with a configuration file, run the `obd cluster edit-config` command to modify the corresponding port configuration in the configuration file. After the modification, run the `obd cluster start` command to continue the deployment.

  <main id="notice" type='explain'>
    <h4>Note</h4>
    <p>For details about the commands mentioned in Method 1, see <a href='3.obd-command/1.cluster-command-groups.md'>Cluster commands</a>.</p>
  </main>

- Method 2: if you deploy with the `obd demo` command, you can specify ports with the following command. This example specifies the mysql_port of the oceanbase-ce component.

  ```shell
  obd demo --oceanbase-ce.mysql_port=3881
  ```

  <main id="notice" type='explain'>
    <h4>Note</h4>
    <p>For details about the command mentioned in Method 2, see <a href='3.obd-command/0.obd-demo.md'>Quick deployment command</a>.</p>
  </main>

- Method 3: if you deploy through the OBD GUI, you can modify the corresponding ports on the **Cluster Configuration** page.
### OBD-1002: Fail to init x.x.x.x path

Cause: the ulimits configuration does not meet the requirements.

Solution: you can modify the corresponding files in the `/etc/security/limits.d/` directory and the `/etc/security/limits.conf` file to meet the requirements.
### OBD-1008: (x.x.x.x) failed to get fs.aio-max-nr and fs.aio-nr

Cause: OBD cannot obtain the aio configuration on the server.

Solution: check whether the current user has the privilege to view fs.aio-max-nr/fs.aio-nr.

```bash
cat /proc/sys/fs/aio-max-nr /proc/sys/fs/aio-nr
```

### OBD-1009: x.x.x.x xxx need config: xxx

Cause: the component related to the service is missing the corresponding configuration.

Solution: run the following command to open the configuration file and add the prompted configuration items. After the modification, run the corresponding restart command according to the output.

```bash
obd cluster edit-config <deploy_name>
```

### OBD-1010: x.x.x.x No such net interface: xxx

Cause:

1. The devname cannot be obtained on the CLI.

2. The devname cannot be obtained on the GUI.

Solution:

For case 1, run the following command to open the configuration file and add or modify devname. After the modification, run the corresponding restart command according to the output.

```bash
obd cluster edit-config <deploy_name>
```

For case 2, set devname in **Cluster Configuration** -> **More Configurations** on the GUI.

### OBD-1011: (x.x.x.x) Insufficient AIO remaining (Avail: xxx, Need: xxx), The recommended value of fs.aio-max-nr is 1048576

Cause: the number of aio available on the system is less than what the database needs.

Solution: run the following command to increase the Linux aio-max-nr.

```bash
echo 1048576 > /proc/sys/fs/aio-max-nr
```
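The echo command above does not survive a reboot. To make the setting persistent, you can also write it through sysctl; this is common Linux practice rather than an OBD-specific requirement:

```bash
sysctl -w fs.aio-max-nr=1048576
echo 'fs.aio-max-nr = 1048576' >> /etc/sysctl.conf
```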
### OBD-1012: xxx

Cause:

1. Type conversion exception, for example, a string is passed to an int parameter.

2. Parameter value out of range. For example, the valid range of `rpc_port` is 1025 to 65535, and an error is reported if the configured value is outside this range.

3. Missing parameters, for example, a key parameter such as `home_path` is not configured.

Solution:

For case 1, check and correct the parameter type.

For case 2, check and correct the parameter value.

For case 3, check the parameter configuration and add the missing parameters.

### OBD-1013: xxx@x.x.x.x connect failed: xxx

Cause: this error has many possible causes. Two common ones are:

1. Wrong username or password.

2. Connection timeout.

Solution:

For case 1, run the following command to open the configuration file and add or modify the username and password. After the modification, run the corresponding restart command according to the output.

```bash
obd cluster edit-config <deploy_name>
```

For case 2, check the corresponding server configurations, for example, whether the port is correct and whether the firewall is enabled.
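In either case, you can first verify SSH connectivity manually from the OBD machine; the IP address, port, and key path below are placeholders:

```bash
ssh -i ~/.ssh/id_rsa -p 22 admin@192.168.1.2 "echo ok"
```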
If neither of the above is the cause after troubleshooting, you can ask in the official [Q&A forum](https://ask.oceanbase.com/), where professionals will answer your questions.

## OceanBase deployment errors

If memory is still insufficient, adjust `memory_limit` and `system_memory` through `edit-config`. Generally, `memory_limit/3 ≤ system_memory ≤ memory_limit/2`.

<main id="notice" type='notice'>
  <h4>Notice</h4>
  <p><code>memory_limit</code> cannot be lower than 8G; that is, your available memory must be greater than or equal to 8G.</p>
</main>

### OBD-2001: server can not migrate in

Cause: the number of available units is less than `--unit-num`.

Solution: modify the `--unit-num` value that you pass in. You can run the following command to view the number of currently available units.

```sql
select count(*) num from oceanbase.__all_server where status = 'active' and start_service_time > 0
```

- If you deploy manually, without changing the configuration, the disk usage must not exceed 64%.

<main id="notice" type='notice'>
  <h4>Notice</h4>
  <p>When redo_dir and data_dir are on the same disk, the space to be occupied by the datafile is included when the disk usage is calculated.</p>
</main>

### OBD-2004: Invalid: xxx is not a single server configuration item

Solution: you can move the configuration to be modified under global.
### OBD-2005: Failed to register cluster. xxx may have been registered in xxx

Cause: the cluster fails to be registered, or the cluster has already been registered.

Solution: there are three solutions.

- Run the `obd cluster edit-config` command to open the configuration file and set the correct Config Server for the `obconfig_url` configuration item.

- If you confirm that the Config Server is correct and want to forcibly overwrite the registration, add the `-f` option when you run the `obd cluster start` command to overwrite the registered cluster.

- If you confirm that the Config Server is correct, you can also run the `obd cluster edit-config` command to open the configuration file and change the `appname` or `cluster_id` configuration item to deploy under another cluster name.

### OBD-2006: x.x.x.x has more than one network interface. Please set `devname` for x.x.x.x

Cause:

1. The devname cannot be obtained on the CLI.

2. The devname cannot be obtained on the GUI.

Solution:

For case 1, run the following command to open the configuration file and add or modify devname. After the modification, run the corresponding restart command according to the output.

```bash
obd cluster edit-config <deploy_name>
```

For case 2, set devname in **Cluster Configuration** -> **More Configurations** on the GUI.

### OBD-2007: x.x.x.x xxx fail to ping x.x.x.x. Please check configuration `devname`

Cause: the machines cannot ping each other.

Solution:

1. Check whether the NIC configuration matches the actual situation.

2. Check whether the network between the nodes is connected.

### OBD-2008: Cluster clocks are out of sync

Cause: the clocks of the servers in the cluster are out of sync.

Solution: synchronize the clocks of the servers.

### OBD-2009: x.x.x.x: when production_mode is True, xxx can not be less then xxx

Cause: when the production mode is enabled, configuration items such as `__min_full_resource_pool_mem` and `memory_limit` cannot be less than their fixed lower bounds.

Solution:

- When deploying a non-production environment, run the following command to open the configuration file and change the `production_mode` configuration item to `False`. After the modification, run the corresponding restart command according to the output.

  ```bash
  obd cluster edit-config <deploy_name>
  ```

- When deploying a production environment, run the following command to open the configuration file and increase the `__min_full_resource_pool_mem` and `memory_limit` configuration items above the fixed lower bounds. After the modification, run the corresponding restart command according to the output.

  ```bash
  obd cluster edit-config <deploy_name>
  ```

### OBD-2010: x.x.x.x: system_memory too large. system_memory must be less than memory_limit/memory_limit_percentage

Cause: the `system_memory` configuration item is set too large. Its value must be less than `memory_limit`/`memory_limit_percentage` * `total_memory`.

Solution:

1. CLI: run the following command to open the configuration file and modify the `system_memory` configuration item. After the modification, run the corresponding restart command according to the output.

   ```bash
   obd cluster edit-config <deploy_name>
   ```

2. GUI: set `system_memory` in **Cluster Configuration** -> **More Configurations**.

### OBD-2011: x.x.x.x: fail to get memory info.\nPlease configure 'memory_limit' manually in configuration file

Cause: the memory information of the server cannot be obtained.

Solution:

1. CLI: run the following command to open the configuration file and configure `memory_limit`. After the modification, run the corresponding restart command according to the output.

   ```bash
   obd cluster edit-config <deploy_name>
   ```

2. GUI: set `memory_limit` in **Cluster Configuration** -> **More Configurations**.
## Test errors

### OBD-3000: parse cmd failed

If none of the above methods solve the problem, ask in the official [Q&A forum](https://ask.oceanbase.com/), where professionals will answer your questions.

## OBAgent errors

### OBD-4000: Fail to reload x.x.x.x

- Log on to the target machine and grant the current account write permission on the corresponding directory.
## ODP errors

### OBD-4100: x.x.x.x need config "rs_list" or "obproxy_config_server_url"

Cause: the server cannot obtain the rs_list/obproxy_config_server_url information.

Solution: run the following command to open the configuration file and add or modify the rs_list/obproxy_config_server_url configuration item. After the modification, run the corresponding restart command according to the output.

```bash
obd cluster edit-config <deploy name>
```

### OBD-4101: failed to start x.x.x.x obproxy: xxx

Cause: ODP fails to start.

Solution: further analysis based on the prompt is required.

## Grafana errors

### OBD-4200: x.x.x.x grafana admin password should not be 'admin'

Cause: the password of the admin user of the grafana component must not be admin.

Solution: run the following command to open the configuration file and add or modify the password. After the modification, run the corresponding restart command according to the output.

```bash
obd cluster edit-config <deploy name>
```

### OBD-4201: x.x.x.x grafana admin password length should not be less than 5

Cause: the password of the admin user of the grafana component must be at least 5 characters long.

Solution: run the following command to open the configuration file and add or modify the password. After the modification, run the corresponding restart command according to the output.

```bash
obd cluster edit-config <deploy name>
```

## OCP Express errors

### OBD-4300: x.x.x.x: failed to query java version, you may not have java installed

Cause: OBD cannot find Java on the server.

Solution:

1. Install Java. For the detailed steps, see the **How do I configure the Java environment before deploying OCP Express** section in [FAQ](5.faq/1.faq.md).

2. If Java is already installed, you can specify the path of the Java executable by configuring `java_bin`.

### OBD-4301: x.x.x.x: ocp-express need java with version xxx

Cause: the Java version on the server is too low.

Solution: install the Java version indicated in the prompt. If the target version of Java is already installed, you can specify the path of the Java executable by configuring `java_bin`.

### OBD-4302: x.x.x.x not enough memory. (Free: xxx, Need: xxx)

Cause: the server does not have enough memory.

Solution: choose one of the following methods.

- If the machine itself does not have enough memory, run the `obd cluster edit-config` command to open the configuration file and decrease the `memory_limit` configuration, or switch to another machine with enough memory.

- If the remaining memory of the machine is insufficient and there is cache that can be released, you can first try to release it with the following command.

  ```shell
  echo 3 > /proc/sys/vm/drop_caches
  ```

### OBD-4303: x.x.x.x xxx not enough disk space. (Avail: xxx, Need: xxx)

Cause: the server does not have enough disk space.

Solution: check and clean up the disk yourself.

### OBD-4304: OCP express xxx needs to use xxx with version xxx or above

Cause: deploying the ocp-express component requires the corresponding versions of other components.

Solution: run the following command to open the configuration file and modify the versions of the components indicated in the prompt. After the modification, run the corresponding restart command according to the output.

```bash
obd cluster edit-config <deploy_name>
```

### OBD-4305: There is not enough xxx for ocp meta tenant

Cause: there is not enough log disk space or memory to create the OCP meta tenant.

Solution:

- If you deployed in **Maximum Occupancy** mode on the GUI or with the `obd cluster autodeploy` command on the CLI, we recommend that you clean up the disk and memory and retry.

- If you configured the cluster specifications yourself, increase the corresponding configuration items of the `oceanbase-ce` component according to the prompt, for example, the memory-related configuration items `memory_limit`/`memory_limit_percentage` and the log-disk-related configuration items `log_disk_size`/`log_disk_percentage`.

## SQL errors

### OBD-5000: sql execute failed

Cause: the SQL statement fails to execute.

Solution: determine the solution based on the specific situation.