# remove spec
rm -f ob-deploy.spec
# push to oceanbase-ce-publish/obdeploy
rm -rf .git && git init
git remote add origin git@gitlab.alibaba-inc.com:oceanbase-ce-publish/obdeploy
git add -f . && git commit -m "init push"
git push -f origin master
nohup.out
*.pyc
*.pyo
build
dist
.vscode
.git
__pycache__
.idea/workspace.xml
# OceanBase Deploy
<!--
#
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
#
-->
<!-- TODO: some badges here -->
**OceanBase Deploy** (OBD for short) is the installation and deployment tool for OceanBase open-source software. OBD is also a package manager and can be used to manage all of OceanBase's open-source software. This document describes how to install OBD, how to use it, and its commands.
## Install OBD
You can install OBD in either of the following ways:
### Option 1: Install with RPM packages (CentOS 7 or later)
```shell
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://yum.tbsite.net/mirrors/oceanbase/OceanBase.repo
sudo yum install -y ob-deploy
source /etc/profile.d/obd.sh
```
### Option 2: Install from source
Before installing OBD from source, make sure you have the following dependencies installed:

- gcc
- python-devel
- openssl-devel
- xz-devel
- mysql-devel

For Python2, install with:
```shell
pip install -r requirements.txt
sh build.sh
source /etc/profile.d/obd.sh
```
For Python3, install with:
```shell
pip install -r requirements3.txt
sh build.sh
source /etc/profile.d/obd.sh
```
## Quick start with OceanBase Database
After installing OBD, you can run the following commands as the root user to quickly start a local single-node OceanBase database.
Before doing so, confirm the following:

- The current user is root.
- Ports `2882` and `2883` are not in use.
- Your machine has at least 8 GB of memory.
- Your machine has at least 2 CPU cores.

> **Note:** If these conditions are not met, see [Start an OceanBase cluster with OBD](#start-an-oceanbase-cluster-with-obd).

```shell
obd cluster deploy c1 -c ./example/mini-local-example.yaml
obd cluster start c1
# Connect to the OceanBase database with the mysql client.
mysql -h127.1 -uroot -P2883
```

## Start an OceanBase cluster with OBD
Follow these steps to start an OceanBase cluster:
### Step 1. Choose a configuration file
Pick the configuration file that matches your resources:
#### Small-scale development mode
For personal devices (at least 8 GB of memory).

- [Local single-node sample configuration](./example/mini-local-example.yaml)
- [Single-node sample configuration](./example/mini-single-example.yaml)
- [Three-node sample configuration](./example/mini-distributed-example.yaml)
- [Single-node + ODP sample configuration](./example/mini-single-with-obproxy-example.yaml)
- [Three-node + ODP sample configuration](./example/mini-distributed-with-obproxy-example.yaml)

#### Professional development mode
For high-spec ECS instances or physical servers (at least 16 cores and 64 GB of memory).

- [Local single-node sample configuration](./example/local-example.yaml)
- [Single-node sample configuration](./example/single-example.yaml)
- [Three-node sample configuration](./example/distributed-example.yaml)
- [Single-node + ODP sample configuration](./example/single-with-obproxy-example.yaml)
- [Three-node + ODP sample configuration](./example/distributed-with-obproxy-example.yaml)

This document uses the [small-scale local single-node sample](./example/mini-local-example.yaml) as an example to start a local single-node OceanBase database.

```shell
# Set home_path; this is the working directory of the OceanBase database.
# Set mysql_port; this is the SQL protocol port of the OceanBase database and will be used to connect later.
# Set rpc_port; this is the port for internal communication within the OceanBase cluster.
vi ./example/mini-local-example.yaml
```

If the target machine (the one the OceanBase database will run on) is not the current machine, do not use the `local single-node sample`; use one of the other samples instead.
You also need to fill in the user credentials at the top of the configuration file.
```yaml
user:
  username: <your username>
  password: <your login password>
  key_file: <path to your private key>
```

`username` is the user account used to log in to the target machine. Make sure it has write permission on `home_path`. `password` and `key_file` are both ways to authenticate that user; normally you only need to provide one.

> **Note:** After configuring a key path, if your key does not require a passphrase, comment out or delete `password`. Otherwise `password` will be treated as the key's passphrase during login and authentication will fail.
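For example, a configuration that authenticates with a passphrase-free key only might look like this (the username and key path below are placeholders, not values from this repository):

```yaml
user:
  username: admin
  # password is omitted so it cannot be mistaken for a key passphrase
  key_file: /home/admin/.ssh/id_rsa
```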
### Step 2. Deploy and start the database
```shell
# This command checks whether the directories pointed to by home_path and data_dir are empty.
# If they are not, it raises an error; add the -f option to force-clear them.
obd cluster deploy lo -c ./example/mini-local-example.yaml
# This command checks whether the kernel parameter fs.aio-max-nr is at least 1048576.
# Running one node per machine normally does not require changing fs.aio-max-nr.
# If you run 4 or more nodes on a single machine, be sure to raise fs.aio-max-nr.
obd cluster start lo
```
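The fs.aio-max-nr precheck mentioned above can be reproduced by hand. This is a minimal illustrative sketch, not OBD's actual implementation; it reads the kernel parameter from `/proc` on Linux:

```python
# Minimal sketch of the fs.aio-max-nr check performed before start.
# Illustrative only; OBD's real check lives in its start plugins.
OBD_AIO_MIN = 1048576  # threshold mentioned in the comments above

def read_aio_max_nr(path="/proc/sys/fs/aio-max-nr"):
    """Return the current fs.aio-max-nr, or None if it cannot be read."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

value = read_aio_max_nr()
if value is not None and value < OBD_AIO_MIN:
    print("fs.aio-max-nr is %d; raise it with: sysctl -w fs.aio-max-nr=%d"
          % (value, OBD_AIO_MIN))
```

On non-Linux systems `/proc` is absent, so the helper simply returns `None` instead of failing.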
### Step 3. Check cluster status
```shell
# List the clusters managed by OBD
obd cluster list
# Check the status of cluster lo
obd cluster display lo
```
### Step 4. Change the configuration
OceanBase has hundreds of configuration items, some of which are coupled. Until you are familiar with OceanBase, we do not recommend changing the settings in the sample configuration files. The example here shows how to change a setting and make it take effect.
```shell
# Use the edit-config command to enter edit mode and modify the cluster configuration
obd cluster edit-config lo
# Change sys_bkgd_migration_retry_num to 5
# Note that the minimum value of sys_bkgd_migration_retry_num is 3
# After you save and exit, obd tells you how to make the change take effect
# This item only requires a reload
obd cluster reload lo
```
### Step 5. Stop the cluster
The `stop` command stops a running cluster. If the `start` command failed but some processes did not exit, use the `destroy` command instead.
```shell
obd cluster stop lo
```
### Step 6. Destroy the cluster
Run the following command to destroy the cluster:
```shell
# If starting the cluster failed, some processes may be left behind.
# In that case, use the -f option to force-stop and destroy the cluster
obd cluster destroy lo
```
## Other OBD commands
**OBD** has multiple levels of commands. At each level, you can use the `-h/--help` option to view the help for that subcommand.
### Mirror and repository commands
#### `obd mirror clone`
Add a local RPM package as a mirror. Afterwards, you can start it with the relevant **OBD cluster** commands.
```shell
obd mirror clone <path> [-f]
```
The `path` parameter is the path of the RPM package.
The `-f` option is `--force`. It is optional and disabled by default. When enabled, an existing mirror is forcibly overwritten.
#### `obd mirror create`
Create a mirror from a local directory. This command is mainly used to start self-compiled OceanBase open-source software with OBD. It adds your build artifacts to the local repository, after which you can start them with the `obd cluster` commands.
```shell
obd mirror create -n <component name> -p <your compile dir> -V <component version> [-t <tag>] [-f]
```
For example, after a successful build following the [OceanBase build guide](https://open.oceanbase.com/docs/community/oceanbase-database/V3.1.0/get-the-oceanbase-database-by-using-source-code), you can run `make DESTDIR=./ install && obd mirror create -n oceanbase-ce -V 3.1.0 -p ./usr/local` to add the build artifacts to OBD's local repository.
The options are described below:

Option | Required | Type | Description
--- | --- | --- | ---
-n/--name | Yes | string | Component name. Use oceanbase-ce if you built the OceanBase database, or obproxy if you built ODP.
-p/--path | Yes | string | Build directory, i.e. the directory where the build command was run. OBD automatically collects the required files from it based on the component.
-V/--version | Yes | string | Version number.
-t/--tag | No | string | Mirror tags. You can define multiple tags for the mirror you create, separated by commas (,).
-f/--force | No | bool | Forcibly overwrite an existing mirror or tag. Disabled by default.

#### `obd mirror list`
Show the list of mirror repositories or mirrors.
```shell
obd mirror list [mirror repo name]
```
The `mirror repo name` parameter is the name of a mirror repository. It is optional. When omitted, the list of mirror repositories is shown; otherwise, the mirrors in the given repository are listed.
#### `obd mirror update`
Synchronize the information of all remote mirror repositories.
```shell
obd mirror update
```
### Cluster commands
The smallest unit an OBD cluster command operates on is a deploy configuration. A deploy configuration is a `yaml` file that contains all the information for a deployment, including server login information, components, component configuration, and the server list for each component.
Before starting a cluster with OBD, you need to register its deploy configuration with OBD. You can create an empty deploy configuration with `obd cluster edit-config`, or import one with `obd cluster deploy -c config`.
#### `obd cluster edit-config`
Modify a deploy configuration, creating it if it does not exist.
```shell
obd cluster edit-config <deploy name>
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
#### `obd cluster deploy`
Deploy a cluster according to a configuration. This command finds suitable mirrors based on the components in the deploy configuration and installs them into the local repository; this step is called local install.
Then the components of suitable versions in the local repository are distributed to the target servers; this step is called remote install.
Both local install and remote install check whether the servers have the dependencies the components need to run.
This command can deploy directly with a `deploy name` already registered in OBD, or take its configuration from a `yaml` file.
```shell
obd cluster deploy <deploy name> [-c <yaml path>] [-f] [-U]
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
The options are described below:

Option | Required | Type | Default | Description
--- | --- | --- | --- | ---
-c/--config | No | string | None | Deploy with the given yaml file and register the deploy configuration with OBD.<br>If `deploy name` already exists, its configuration is overwritten.<br>Without this option, OBD looks up the configuration already registered under `deploy name`.
-f/--force | No | bool | false | When enabled, forcibly clear the working directory.<br>If a component requires an empty working directory and this option is not used, a non-empty working directory causes an error.
-U/--ulp/--unuselibrepo | No | bool | false | Use this option to stop OBD from handling dependencies automatically. Without it, when OBD detects missing dependencies it searches for the matching libs mirrors and installs them. Using this option writes **unuse_lib_repository: true** into the corresponding configuration file. You can also enable it by setting **unuse_lib_repository: true** in the configuration file.
#### `obd cluster start`
Start a deployed cluster; on success, the cluster status is printed.
```shell
obd cluster start <deploy name> [-s]
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
The `-s` option is `--strict-check`. Some components run checks before starting; when a check fails, they issue a warning without aborting the process. Use this option to make a failed check raise an error and exit immediately. Enabling it is recommended, as it can avoid startup failures caused by insufficient resources. It is optional, of type `bool`, and disabled by default.
#### `obd cluster list`
Show the status of all clusters (deploy names) registered with OBD.
```shell
obd cluster list
```
#### `obd cluster display`
Show the status of the given cluster.
```shell
obd cluster display <deploy name>
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
#### `obd cluster reload`
Reload a running cluster. After you modify the configuration of a running cluster with edit-config, you can apply the changes with the `reload` command.
Note that not all configuration items can be applied via `reload`. Some require restarting the cluster, or even redeploying it, to take effect.
Follow the instructions returned after edit-config.
```shell
obd cluster reload <deploy name>
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
#### `obd cluster restart`
Restart a running cluster. After you modify the configuration of a running cluster with edit-config, you can apply the changes with the `restart` command.
> **Note:** Not all configuration items can be applied via `restart`. Some require redeploying the cluster to take effect.

Follow the instructions returned after edit-config.
```shell
obd cluster restart <deploy name>
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
#### `obd cluster redeploy`
Redeploy a running cluster. After you modify the configuration of a running cluster with edit-config, you can apply the changes with the `redeploy` command.
> **Note:** This command destroys the cluster and deploys it again. Data in the cluster will be lost, so back it up first.

```shell
obd cluster redeploy <deploy name>
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
#### `obd cluster stop`
Stop a running cluster.
```shell
obd cluster stop <deploy name>
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
#### `obd cluster destroy`
Destroy a deployed cluster. If the cluster is running, this command first tries to run `stop`, and runs `destroy` only after that succeeds.
```shell
obd cluster destroy <deploy name> [-f]
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
The `-f` option is `--force-kill`. Before destroying, OBD checks whether any processes are still running in the working directory. These may be processes left behind by a failed **start**, or processes belonging to another cluster whose configuration overlaps with this one. Either way, if processes remain in the working directory, **destroy** aborts. With this option, those running processes are forcibly killed and **destroy** is forced through. It is optional, of type `bool`, and disabled by default.
### Test commands
#### `obd test mysqltest`
Run mysqltest against the specified node of an OceanBase database or ODP component. mysqltest requires OBClient, so install OBClient first.
```shell
obd test mysqltest <deploy name> [--test-set <test-set>] [flags]
```
The `deploy name` parameter is the name of the deploy configuration; you can think of it as the configuration file name.
The options are described below:

Option | Required | Type | Default | Description
--- | --- | --- | --- | ---
-c/--component | No | string | Empty | Name of the component to test. Candidates are oceanbase-ce and obproxy. When empty, OBD checks for obproxy and then oceanbase-ce; the first component found is used for the test.
--test-server | No | string | The first server node under the specified component | Must be the name of a server node under the specified component.
--mode | No | string | both | Test mode. Candidates are mysql and both.
--user | No | string | root | Username for running the test.
--password | No | string | Empty | Password for running the test.
--mysqltest-bin | No | string | mysqltest | Path to the mysqltest binary. If the given path is not executable, the mysqltest bundled with OBD is used.
--obclient-bin | No | string | obclient | Directory of the OBClient binary.
--test-dir | No | string | ./mysql_test/t | Directory holding the **test-file** needed by mysqltest. If a test file is not found, OBD looks it up among its built-in files.
--result-dir | No | string | ./mysql_test/r | Directory holding the **result-file** needed by mysqltest. If a result file is not found, OBD looks it up among its built-in files.
--tmp-dir | No | string | ./tmp | Passed to mysqltest as the tmpdir option.
--var-dir | No | string | ./var | A log directory is created under this directory and passed to mysqltest as the logdir option.
--test-set | No | string | None | Array of test cases, separated by commas (,).
--test-pattern | No | string | None | Regular expression matched against test file names. Cases matching the expression override the test-set option.
--suite | No | string | None | Array of suites; a suite contains multiple tests. Separate with commas (,). When this option is used, --test-pattern and --test-set are ignored.
--suite-dir | No | string | ./mysql_test/test_suite | Directory holding the suite directories. If a suite directory is not found, OBD looks it up among its built-in files.
--all | No | bool | false | Run all cases under --suite-dir.
--need-init | No | bool | false | Run the init sql files. A fresh cluster may need some initialization before running mysqltest, such as creating the accounts and tenants the cases need. Disabled by default.
--init-sql-dir | No | string | ../ | Directory of the init sql files. If a sql file is not found, OBD looks it up among its built-in files.
--init-sql-files | No | string | | Array of init sql files to run when init is needed, separated by commas (,). When omitted and init is needed, OBD runs the built-in init according to the cluster configuration.
--auto-retry | No | bool | false | On failure, automatically redeploy the cluster and retry.
## Q&A
### Q: How do I specify the version of a component?
A: Declare the version in the deploy configuration file. For example, if you are using OceanBase-CE 3.1.0, specify:
```yaml
oceanbase-ce:
  version: 3.1.0
```
### Q: How do I use a specific build of a component?
A: Declare package_hash or tag in the deploy configuration file.
If you set a tag on the OceanBase-CE you compiled yourself, you can select it by tag. For example:
```yaml
oceanbase-ce:
  tag: my-oceanbase
```
You can also select a specific build with package_hash. The `obd mirror` commands print the md5 of each component; that value is the package_hash.
```yaml
oceanbase-ce:
  package_hash: 929df53459404d9b0c1f945e7e23ea4b89972069
```
### Q: I modified the OceanBase-CE code and need to change the startup flow. What do I do?
A: You can modify the start-related plugins under `~/.obd/plugins/oceanbase-ce/`. For example, if you added a new startup option to OceanBase-CE 3.1.0, you can modify `~/.obd/plugins/oceanbase-ce/3.1.0/start.py`.
## License
OBD is released under the [GPL-3.0](./LICENSE) license.
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
import os
import ctypes
import struct

_ppc64_native_is_best = True

# dict mapping arch -> ( multicompat, best personality, biarch personality )
multilibArches = {
    "x86_64": ("athlon", "x86_64", "athlon"),
    "sparc64v": ("sparcv9v", "sparcv9v", "sparc64v"),
    "sparc64": ("sparcv9", "sparcv9", "sparc64"),
    "ppc64": ("ppc", "ppc", "ppc64"),
    "s390x": ("s390", "s390x", "s390"),
}
if _ppc64_native_is_best:
    multilibArches["ppc64"] = ("ppc", "ppc64", "ppc64")

arches = {
    # ia32
    "athlon": "i686",
    "i686": "i586",
    "geode": "i686",
    "i586": "i486",
    "i486": "i386",
    "i386": "noarch",
    # amd64
    "x86_64": "athlon",
    "amd64": "x86_64",
    "ia32e": "x86_64",
    # ppc64le
    "ppc64le": "noarch",
    # ppc
    "ppc64p7": "ppc64",
    "ppc64pseries": "ppc64",
    "ppc64iseries": "ppc64",
    "ppc64": "ppc",
    "ppc": "noarch",
    # s390{,x}
    "s390x": "s390",
    "s390": "noarch",
    # sparc
    "sparc64v": "sparcv9v",
    "sparc64": "sparcv9",
    "sparcv9v": "sparcv9",
    "sparcv9": "sparcv8",
    "sparcv8": "sparc",
    "sparc": "noarch",
    # alpha
    "alphaev7": "alphaev68",
    "alphaev68": "alphaev67",
    "alphaev67": "alphaev6",
    "alphaev6": "alphapca56",
    "alphapca56": "alphaev56",
    "alphaev56": "alphaev5",
    "alphaev5": "alphaev45",
    "alphaev45": "alphaev4",
    "alphaev4": "alpha",
    "alpha": "noarch",
    # arm
    "armv7l": "armv6l",
    "armv6l": "armv5tejl",
    "armv5tejl": "armv5tel",
    "armv5tel": "noarch",
    # arm hardware floating point
    "armv7hnl": "armv7hl",
    "armv7hl": "noarch",
    # arm64
    "arm64": "noarch",
    # aarch64
    "aarch64": "noarch",
    # super-h
    "sh4a": "sh4",
    "sh4": "noarch",
    "sh3": "noarch",
    # itanium
    "ia64": "noarch",
}

# Will contain information parsed from /proc/self/auxv via _parse_auxv().
# Should move into rpm really.
_aux_vector = {
    "platform": "",
    "hwcap": 0,
}
def _try_read_cpuinfo():
    """ Try to read /proc/cpuinfo ... if we can't ignore errors (ie. proc not
        mounted). """
    try:
        return open("/proc/cpuinfo", "r")
    except:
        return []

def _parse_auxv():
    """ Read /proc/self/auxv and parse it into global dict for easier access
        later on, very similar to what rpm does. """
    # In case we can't open and read /proc/self/auxv, just return
    try:
        data = open("/proc/self/auxv", "rb").read()
    except:
        return
    # Define values from /usr/include/elf.h
    AT_PLATFORM = 15
    AT_HWCAP = 16
    fmtlen = struct.calcsize("LL")
    offset = 0
    platform = ctypes.c_char_p()
    # Parse the data and fill in _aux_vector dict
    while offset <= len(data) - fmtlen:
        at_type, at_val = struct.unpack_from("LL", data, offset)
        if at_type == AT_PLATFORM:
            platform.value = at_val
            _aux_vector["platform"] = platform.value
        if at_type == AT_HWCAP:
            _aux_vector["hwcap"] = at_val
        offset = offset + fmtlen
def getCanonX86Arch(arch):
    #
    if arch == "i586":
        for line in _try_read_cpuinfo():
            if line.startswith("model name"):
                if line.find("Geode(TM)") != -1:
                    return "geode"
                break
        return arch
    # only athlon vs i686 isn't handled with uname currently
    if arch != "i686":
        return arch
    # if we're i686 and AuthenticAMD, then we should be an athlon
    for line in _try_read_cpuinfo():
        if line.startswith("vendor") and line.find("AuthenticAMD") != -1:
            return "athlon"
        # i686 doesn't guarantee cmov, but we depend on it
        elif line.startswith("flags"):
            if line.find("cmov") == -1:
                return "i586"
            break
    return arch
def getCanonARMArch(arch):
    # the %{_target_arch} macro in rpm will let us know the abi we are using
    try:
        import rpm
        target = rpm.expandMacro('%{_target_cpu}')
        if target.startswith('armv7h'):
            return target
    except:
        pass
    return arch

def getCanonPPCArch(arch):
    # FIXME: should I do better handling for mac, etc?
    if arch != "ppc64":
        return arch
    machine = None
    for line in _try_read_cpuinfo():
        if line.find("machine") != -1:
            machine = line.split(':')[1]
            break
    platform = _aux_vector["platform"]
    if machine is None and not platform:
        return arch
    try:
        if platform.startswith("power") and int(platform[5:].rstrip('+')) >= 7:
            return "ppc64p7"
    except:
        pass
    if machine is None:
        return arch
    if machine.find("CHRP IBM") != -1:
        return "ppc64pseries"
    if machine.find("iSeries") != -1:
        return "ppc64iseries"
    return arch

def getCanonSPARCArch(arch):
    # Deal with sun4v, sun4u, sun4m cases
    SPARCtype = None
    for line in _try_read_cpuinfo():
        if line.startswith("type"):
            SPARCtype = line.split(':')[1]
            break
    if SPARCtype is None:
        return arch
    if SPARCtype.find("sun4v") != -1:
        if arch.startswith("sparc64"):
            return "sparc64v"
        else:
            return "sparcv9v"
    if SPARCtype.find("sun4u") != -1:
        if arch.startswith("sparc64"):
            return "sparc64"
        else:
            return "sparcv9"
    if SPARCtype.find("sun4m") != -1:
        return "sparcv8"
    return arch

def getCanonX86_64Arch(arch):
    if arch != "x86_64":
        return arch
    vendor = None
    for line in _try_read_cpuinfo():
        if line.startswith("vendor_id"):
            vendor = line.split(':')[1]
            break
    if vendor is None:
        return arch
    if vendor.find("Authentic AMD") != -1 or vendor.find("AuthenticAMD") != -1:
        return "amd64"
    if vendor.find("GenuineIntel") != -1:
        return "ia32e"
    return arch
def getCanonArch(skipRpmPlatform=0):
    if not skipRpmPlatform and os.access("/etc/rpm/platform", os.R_OK):
        try:
            f = open("/etc/rpm/platform", "r")
            line = f.readline()
            f.close()
            (arch, vendor, opersys) = line.split("-", 2)
            return arch
        except:
            pass
    arch = os.uname()[4]
    _parse_auxv()
    if (len(arch) == 4 and arch[0] == "i" and arch[2:4] == "86"):
        return getCanonX86Arch(arch)
    if arch.startswith("arm"):
        return getCanonARMArch(arch)
    if arch.startswith("ppc"):
        return getCanonPPCArch(arch)
    if arch.startswith("sparc"):
        return getCanonSPARCArch(arch)
    if arch == "x86_64":
        return getCanonX86_64Arch(arch)
    # fall through for arches with no special handling (e.g. aarch64)
    return arch
canonArch = getCanonArch()

def isMultiLibArch(arch=None):
    """returns true if arch is a multilib arch, false if not"""
    if arch is None:
        arch = canonArch
    if arch not in arches:  # or we could check if it is noarch
        return 0
    if arch in multilibArches:
        return 1
    if arches[arch] in multilibArches:
        return 1
    return 0

def getBaseArch(myarch=None):
    """returns 'base' arch for myarch, if specified, or canonArch if not.
    base arch is the arch before noarch in the arches dict if myarch is not
    a key in the multilibArches."""
    if not myarch:
        myarch = canonArch
    if myarch not in arches:  # this is dumb, but <shrug>
        return myarch
    if myarch.startswith("sparc64"):
        return "sparc"
    elif myarch == "ppc64le":
        return "ppc64le"
    elif myarch.startswith("ppc64") and not _ppc64_native_is_best:
        return "ppc"
    elif myarch.startswith("arm64"):
        return "arm64"
    elif myarch.startswith("armv7h"):
        return "armhfp"
    elif myarch.startswith("arm"):
        return "arm"
    if isMultiLibArch(arch=myarch):
        if myarch in multilibArches:
            return myarch
        else:
            return arches[myarch]
    if myarch in arches:
        basearch = myarch
        value = arches[basearch]
        while value != 'noarch':
            basearch = value
            value = arches[basearch]
        return basearch

def getArchList(thisarch=None):
    # this returns a list of archs that are compatible with arch given
    if not thisarch:
        thisarch = canonArch
    archlist = [thisarch]
    while thisarch in arches:
        thisarch = arches[thisarch]
        archlist.append(thisarch)
    # hack hack hack
    # sparc64v is also sparc64 compat
    if archlist[0] == "sparc64v":
        archlist.insert(1, "sparc64")
    # if we're a weirdo arch - add noarch on there.
    if len(archlist) == 1 and archlist[0] == thisarch:
        archlist.append('noarch')
    return archlist
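The compatibility list computed by getArchList above boils down to walking the `arches` parent map until it reaches "noarch". A standalone sketch with a trimmed copy of that map:

```python
# Standalone sketch of getArchList's core loop, using a trimmed copy of
# the arches parent map defined in the module above.
arches = {
    "x86_64": "athlon",
    "athlon": "i686",
    "i686": "i586",
    "i586": "i486",
    "i486": "i386",
    "i386": "noarch",
}

def arch_compat_list(thisarch):
    # Follow parent links; the chain always terminates because
    # "noarch" has no entry in the map.
    archlist = [thisarch]
    while thisarch in arches:
        thisarch = arches[thisarch]
        archlist.append(thisarch)
    return archlist

print(arch_compat_list("x86_64"))
# -> ['x86_64', 'athlon', 'i686', 'i586', 'i486', 'i386', 'noarch']
```

This is why an x86_64 host can install i686 or noarch packages: every ancestor in the chain is considered compatible.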
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import re
import getpass
from copy import deepcopy
from enum import Enum
from tool import ConfigUtil, FileUtil, YamlLoader
from _manager import Manager
from _repository import Repository
yaml = YamlLoader()
class UserConfig(object):
DEFAULT = {
'username': getpass.getuser(),
'password': None,
'key_file': None,
'port': 22,
'timeout': 30
}
def __init__(self, username=None, password=None, key_file=None, port=None, timeout=None):
self.username = username if username else self.DEFAULT['username']
self.password = password
self.key_file = key_file if key_file else self.DEFAULT['key_file']
self.port = port if port else self.DEFAULT['port']
self.timeout = timeout if timeout else self.DEFAULT['timeout']
class ServerConfig(object):
def __init__(self, ip, name=None):
self.ip = ip
self._name = name
@property
def name(self):
return self._name if self._name else self.ip
def __str__(self):
return '%s(%s)' % (self._name, self.ip) if self._name else self.ip
def __hash__(self):
return hash(self.__str__())
def __eq__(self, other):
if isinstance(other, self.__class__):
return self.ip == other.ip and self.name == other.name
if isinstance(other, dict):
return self.ip == other['ip'] and self.name == other['name']
class ServerConfigFlyweightFactory(object):
_CACHE = {}
@staticmethod
def get_instance(ip, name=None):
server = ServerConfig(ip, name)
_key = server.__str__()
if _key not in ServerConfigFlyweightFactory._CACHE:
ServerConfigFlyweightFactory._CACHE[_key] = server
return ServerConfigFlyweightFactory._CACHE[_key]
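# ServerConfigFlyweightFactory demo (values are illustrative): repeated requests
# for the same ip/name pair return the same cached instance, so ServerConfig
# objects can safely be compared and used as dict keys across components.
# server_a = ServerConfigFlyweightFactory.get_instance('192.168.1.1', 'server1')
# server_b = ServerConfigFlyweightFactory.get_instance('192.168.1.1', 'server1')
# assert server_a is server_b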
class ClusterConfig(object):
def __init__(self, servers, name, version, tag, package_hash):
self.version = version
self.tag = tag
self.name = name
self.package_hash = package_hash
self._temp_conf = {}
self._default_conf = {}
self._global_conf = {}
self._server_conf = {}
self._cache_server = {}
self.servers = servers
for server in servers:
self._server_conf[server] = {}
self._cache_server[server] = None
self._deploy_config = None
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self._global_conf == other._global_conf and self._server_conf == other._server_conf
def set_deploy_config(self, _deploy_config):
if self._deploy_config is None:
self._deploy_config = _deploy_config
return True
return False
def update_server_conf(self, server, key, value, save=True):
if self._deploy_config is None:
return False
if not self._deploy_config.update_component_server_conf(self.name, server, key, value, save):
return False
self._server_conf[server][key] = value
if self._cache_server[server] is not None:
self._cache_server[server][key] = value
return True
def update_global_conf(self, key, value, save=True):
if self._deploy_config is None:
return False
if not self._deploy_config.update_component_global_conf(self.name, key, value, save):
return False
self._global_conf[key] = value
for server in self._cache_server:
if self._cache_server[server] is not None:
self._cache_server[server][key] = value
return True
def get_unconfigured_require_item(self, server):
items = []
config = self.get_server_conf(server)
for key in self._default_conf:
if key in config:
continue
items.append(key)
return items
def get_server_conf_with_default(self, server):
config = {}
for key in self._temp_conf:
if self._temp_conf[key].default is not None:
config[key] = self._temp_conf[key].default
config.update(self.get_server_conf(server))
return config
def get_need_redeploy_items(self, server):
items = {}
config = self.get_server_conf(server)
for key in config:
if key in self._temp_conf and self._temp_conf[key].need_redeploy:
items[key] = config[key]
return items
def get_need_restart_items(self, server):
items = {}
config = self.get_server_conf(server)
for key in config:
if key in self._temp_conf and self._temp_conf[key].need_restart:
items[key] = config[key]
return items
def update_temp_conf(self, temp_conf):
self._default_conf = {}
self._temp_conf = temp_conf
for key in self._temp_conf:
if self._temp_conf[key].require:
self._default_conf[key] = self._temp_conf[key].default
        self.set_global_conf(self._global_conf)  # refresh the merged global config
def set_global_conf(self, conf):
self._global_conf = deepcopy(self._default_conf)
self._global_conf.update(conf)
for server in self._cache_server:
self._cache_server[server] = None
def add_server_conf(self, server, conf):
if server not in self.servers:
self.servers.append(server)
self._server_conf[server] = conf
self._cache_server[server] = None
def get_global_conf(self):
return self._global_conf
def get_server_conf(self, server):
if server not in self._server_conf:
return None
if self._cache_server[server] is None:
conf = deepcopy(self._global_conf)
conf.update(self._server_conf[server])
self._cache_server[server] = conf
return self._cache_server[server]
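# ClusterConfig merge demo (keys are illustrative): get_server_conf returns the
# global config overlaid with the per-server config, so server-level values win.
# cluster_config.set_global_conf({'mysql_port': 2881, 'home_path': '/root/ob'})
# cluster_config.add_server_conf(server, {'mysql_port': 2882})
# cluster_config.get_server_conf(server)  # -> {'mysql_port': 2882, 'home_path': '/root/ob'}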
class DeployStatus(Enum):
STATUS_CONFIGUREING = 'configuring'
STATUS_CONFIGURED = 'configured'
    STATUS_DEPLOYING = 'deploying'
STATUS_DEPLOYED = 'deployed'
STATUS_RUNNING = 'running'
    STATUS_STOPING = 'stopping'
STATUS_STOPPED = 'stopped'
STATUS_DESTROYING = 'destroying'
STATUS_DESTROYED = 'destroyed'
class DeployConfigStatus(Enum):
    UNCHNAGE = 'unchanged'
NEED_RELOAD = 'need reload'
NEED_RESTART = 'need restart'
NEED_REDEPLOY = 'need redeploy'
class DeployInfo(object):
    def __init__(self, name, status, components=None, config_status=DeployConfigStatus.UNCHNAGE):
        self.status = status
        self.name = name
        # use None as the default to avoid sharing one mutable dict between instances
        self.components = components if components is not None else {}
        self.config_status = config_status
def __str__(self):
info = ['%s (%s)' % (self.name, self.status.value)]
for name in self.components:
info.append('%s-%s' % (name, self.components[name]))
return '\n'.join(info)
class DeployConfig(object):
def __init__(self, yaml_path, yaml_loader=yaml):
self._user = None
self.unuse_lib_repository = False
self.components = {}
self._src_data = None
self.yaml_path = yaml_path
self.yaml_loader = yaml_loader
self._load()
@property
def user(self):
return self._user
def set_unuse_lib_repository(self, status):
if self.unuse_lib_repository != status:
self.unuse_lib_repository = status
self._src_data['unuse_lib_repository'] = status
return self._dump()
return True
def _load(self):
try:
with open(self.yaml_path, 'rb') as f:
self._src_data = self.yaml_loader.load(f)
for key in self._src_data:
if key == 'user':
self.set_user_conf(UserConfig(
ConfigUtil.get_value_from_dict(self._src_data[key], 'username'),
ConfigUtil.get_value_from_dict(self._src_data[key], 'password'),
ConfigUtil.get_value_from_dict(self._src_data[key], 'key_file'),
ConfigUtil.get_value_from_dict(self._src_data[key], 'port', 0, int),
ConfigUtil.get_value_from_dict(self._src_data[key], 'timeout', 0, int),
))
elif key == 'unuse_lib_repository':
self.unuse_lib_repository = self._src_data['unuse_lib_repository']
else:
self._add_component(key, self._src_data[key])
except:
pass
if not self.user:
self.set_user_conf(UserConfig())
def _dump(self):
try:
with open(self.yaml_path, 'w') as f:
self.yaml_loader.dump(self._src_data, f)
return True
except:
pass
return False
def dump(self):
return self._dump()
def set_user_conf(self, conf):
self._user = conf
def update_component_server_conf(self, component_name, server, key, value, save=True):
if component_name not in self.components:
return False
cluster_config = self.components[component_name]
if server not in cluster_config.servers:
return False
component_config = self._src_data[component_name]
if server.name not in component_config:
component_config[server.name] = {key: value}
else:
component_config[server.name][key] = value
return self.dump() if save else True
def update_component_global_conf(self, component_name, key, value, save=True):
if component_name not in self.components:
return False
component_config = self._src_data[component_name]
if 'global' not in component_config:
component_config['global'] = {key: value}
else:
component_config['global'][key] = value
return self.dump() if save else True
def _add_component(self, component_name, conf):
if 'servers' in conf and isinstance(conf['servers'], list):
servers = []
for server in conf['servers']:
if isinstance(server, dict):
ip = ConfigUtil.get_value_from_dict(server, 'ip', transform_func=str)
name = ConfigUtil.get_value_from_dict(server, 'name', transform_func=str)
else:
ip = server
name = None
                if not re.match(r'^\d{1,3}(\.\d{1,3}){3}$', ip):
continue
server = ServerConfigFlyweightFactory.get_instance(ip, name)
if server not in servers:
servers.append(server)
else:
servers = []
cluster_conf = ClusterConfig(
servers,
component_name,
ConfigUtil.get_value_from_dict(conf, 'version', None, str),
ConfigUtil.get_value_from_dict(conf, 'tag', None, str),
ConfigUtil.get_value_from_dict(conf, 'package_hash', None, str)
)
if 'global' in conf:
cluster_conf.set_global_conf(conf['global'])
for server in servers:
if server.name in conf:
cluster_conf.add_server_conf(server, conf[server.name])
cluster_conf.set_deploy_config(self)
self.components[component_name] = cluster_conf
class Deploy(object):
DEPLOY_STATUS_FILE = '.data'
DEPLOY_YAML_NAME = 'config.yaml'
def __init__(self, config_dir, stdio=None):
self.config_dir = config_dir
self.name = os.path.split(config_dir)[1]
self._info = None
self._config = None
self.stdio = stdio
def use_model(self, name, repository, dump=True):
self.deploy_info.components[name] = {
'hash': repository.hash,
'version': repository.version,
}
return self._dump_deploy_info() if dump else True
@staticmethod
def get_deploy_file_path(path):
return os.path.join(path, Deploy.DEPLOY_STATUS_FILE)
@staticmethod
def get_deploy_yaml_path(path):
return os.path.join(path, Deploy.DEPLOY_YAML_NAME)
@staticmethod
def get_temp_deploy_yaml_path(path):
return os.path.join(path, 'tmp_%s' % Deploy.DEPLOY_YAML_NAME)
@property
def deploy_info(self):
if self._info is None:
try:
path = self.get_deploy_file_path(self.config_dir)
with open(path, 'rb') as f:
data = yaml.load(f)
self._info = DeployInfo(
data['name'],
getattr(DeployStatus, data['status'], DeployStatus.STATUS_CONFIGURED),
ConfigUtil.get_value_from_dict(data, 'components', {}),
getattr(DeployConfigStatus, ConfigUtil.get_value_from_dict(data, 'config_status', '_'), DeployConfigStatus.UNCHNAGE),
)
except:
self._info = DeployInfo(self.name, DeployStatus.STATUS_CONFIGURED)
return self._info
@property
def deploy_config(self):
if self._config is None:
try:
path = self.get_deploy_yaml_path(self.config_dir)
self._config = DeployConfig(path, YamlLoader(stdio=self.stdio))
deploy_info = self.deploy_info
for component_name in deploy_info.components:
if component_name not in self._config.components:
continue
config = deploy_info.components[component_name]
cluster_config = self._config.components[component_name]
if 'version' in config and config['version']:
cluster_config.version = config['version']
if 'hash' in config and config['hash']:
cluster_config.package_hash = config['hash']
except:
pass
return self._config
def apply_temp_deploy_config(self):
src_yaml_path = self.get_temp_deploy_yaml_path(self.config_dir)
target_src_path = self.get_deploy_yaml_path(self.config_dir)
try:
FileUtil.move(src_yaml_path, target_src_path)
self._config = None
self.update_deploy_config_status(DeployConfigStatus.UNCHNAGE)
return True
except Exception as e:
self.stdio and getattr(self.stdio, 'exception', print)('mv %s to %s failed, error: \n%s' % (src_yaml_path, target_src_path, e))
return False
def _dump_deploy_info(self):
path = self.get_deploy_file_path(self.config_dir)
self.stdio and getattr(self.stdio, 'verbose', print)('dump deploy info to %s' % path)
try:
with open(path, 'w') as f:
data = {
'name': self.deploy_info.name,
'components': self.deploy_info.components,
'status': self.deploy_info.status.name,
'config_status': self.deploy_info.config_status.name,
}
yaml.dump(data, f)
return True
except:
self.stdio and getattr(self.stdio, 'exception', print)('dump deploy info to %s failed' % path)
return False
def update_deploy_status(self, status):
if isinstance(status, DeployStatus):
self.deploy_info.status = status
if DeployStatus.STATUS_DESTROYED == status:
self.deploy_info.components = {}
return self._dump_deploy_info()
return False
def update_deploy_config_status(self, status):
if isinstance(status, DeployConfigStatus):
self.deploy_info.config_status = status
return self._dump_deploy_info()
return False
class DeployManager(Manager):
RELATIVE_PATH = 'cluster/'
def __init__(self, home_path, stdio=None):
super(DeployManager, self).__init__(home_path, stdio)
def get_deploy_configs(self):
configs = []
for file_name in os.listdir(self.path):
path = os.path.join(self.path, file_name)
if os.path.isdir(path):
configs.append(Deploy(path, self.stdio))
return configs
def get_deploy_config(self, name):
path = os.path.join(self.path, name)
if os.path.isdir(path):
return Deploy(path, self.stdio)
return None
def create_deploy_config(self, name, src_yaml_path):
config_dir = os.path.join(self.path, name)
target_src_path = Deploy.get_deploy_yaml_path(config_dir)
self._mkdir(config_dir)
if FileUtil.copy(src_yaml_path, target_src_path, self.stdio):
return Deploy(config_dir, self.stdio)
else:
self._rm(config_dir)
return None
def remove_deploy_config(self, name):
config_dir = os.path.join(self.path, name)
self._rm(config_dir)
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
from tool import DirectoryUtil
class Manager(object):
RELATIVE_PATH = ''
def __init__(self, home_path, stdio=None):
self.stdio = stdio
self.path = os.path.join(home_path, self.RELATIVE_PATH)
self.is_init = self._mkdir(self.path)
def _mkdir(self, path):
return DirectoryUtil.mkdir(path, stdio=self.stdio)
def _rm(self, path):
return DirectoryUtil.rm(path, self.stdio)
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import sys
from enum import Enum
from glob import glob
from copy import deepcopy
from _manager import Manager
from tool import ConfigUtil, DynamicLoading, YamlLoader
yaml = YamlLoader()
class PluginType(Enum):
START = 'StartPlugin'
PARAM = 'ParamPlugin'
INSTALL = 'InstallPlugin'
PY_SCRIPT = 'PyScriptPlugin'
class Plugin(object):
PLUGIN_TYPE = None
FLAG_FILE = None
def __init__(self, component_name, plugin_path, version):
if not self.PLUGIN_TYPE or not self.FLAG_FILE:
raise NotImplementedError
self.component_name = component_name
self.plugin_path = plugin_path
self.version = version.split('.')
def __str__(self):
return '%s-%s-%s' % (self.component_name, self.PLUGIN_TYPE.name.lower(), '.'.join(self.version))
@property
def mirror_type(self):
return self.PLUGIN_TYPE
class PluginReturn(object):
def __init__(self, value=False, *arg, **kwargs):
self._return_value = value
self._return_args = arg
self._return_kwargs = kwargs
def __nonzero__(self):
return self.__bool__()
def __bool__(self):
return True if self._return_value else False
@property
def value(self):
return self._return_value
@property
def args(self):
return self._return_args
@property
def kwargs(self):
return self._return_kwargs
def get_return(self, key):
if key in self.kwargs:
return self.kwargs[key]
return None
def set_args(self, *args):
self._return_args = args
def set_kwargs(self, **kwargs):
self._return_kwargs = kwargs
def set_return(self, value):
self._return_value = value
def return_true(self, *args, **kwargs):
self.set_return(True)
self.set_args(*args)
self.set_kwargs(**kwargs)
def return_false(self, *args, **kwargs):
self.set_return(False)
self.set_args(*args)
self.set_kwargs(**kwargs)
class PluginContext(object):
def __init__(self, components, clients, cluster_config, cmd, options, stdio):
self.components = components
self.clients = clients
self.cluster_config = cluster_config
self.cmd = cmd
self.options = options
self.stdio = stdio
self._return = PluginReturn()
def get_return(self):
return self._return
def return_true(self, *args, **kwargs):
self._return.return_true(*args, **kwargs)
def return_false(self, *args, **kwargs):
self._return.return_false(*args, **kwargs)
class SubIO(object):
def __init__(self, stdio):
self.stdio = getattr(stdio, 'sub_io', lambda: None)()
self._func = {}
def __del__(self):
self.before_close()
def _temp_function(self, *arg, **kwargs):
pass
def __getattr__(self, name):
if name not in self._func:
self._func[name] = getattr(self.stdio, name, self._temp_function)
return self._func[name]
class ScriptPlugin(Plugin):
class ClientForScriptPlugin(object):
def __init__(self, client, stdio):
self.client = client
self.stdio = stdio
def __getattr__(self, key):
def new_method(*args, **kwargs):
kwargs['stdio'] = self.stdio
return attr(*args, **kwargs)
attr = getattr(self.client, key)
if hasattr(attr, '__call__'):
return new_method
return attr
def __init__(self, component_name, plugin_path, version):
super(ScriptPlugin, self).__init__(component_name, plugin_path, version)
self.context = None
def __call__(self):
raise NotImplementedError
def _import(self, stdio=None):
raise NotImplementedError
def _export(self):
raise NotImplementedError
def __del__(self):
self._export()
def before_do(self, components, clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
self._import(stdio)
sub_stdio = SubIO(stdio)
sub_clients = {}
for server in clients:
sub_clients[server] = ScriptPlugin.ClientForScriptPlugin(clients[server], sub_stdio)
self.context = PluginContext(components, sub_clients, cluster_config, cmd, options, sub_stdio)
def after_do(self, stdio, *arg, **kwargs):
self._export(stdio)
self.context = None
def pyScriptPluginExec(func):
def _new_func(self, components, clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
self.before_do(components, clients, cluster_config, cmd, options, stdio, *arg, **kwargs)
if self.module:
method_name = func.__name__
method = getattr(self.module, method_name, False)
if method:
try:
method(self.context, *arg, **kwargs)
except Exception as e:
stdio and getattr(stdio, 'exception', print)('%s RuntimeError: %s' % (self, e))
pass
ret = self.context.get_return() if self.context else PluginReturn()
self.after_do(stdio, *arg, **kwargs)
return ret
return _new_func
class PyScriptPlugin(ScriptPlugin):
LIBS_PATH = []
PLUGIN_COMPONENT_NAME = None
def __init__(self, component_name, plugin_path, version):
if not self.PLUGIN_COMPONENT_NAME:
raise NotImplementedError
super(PyScriptPlugin, self).__init__(component_name, plugin_path, version)
self.module = None
self.libs_path = deepcopy(self.LIBS_PATH)
self.libs_path.append(self.plugin_path)
def __call__(self, clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
method = getattr(self, self.PLUGIN_COMPONENT_NAME, False)
if method:
return method(clients, cluster_config, cmd, options, stdio, *arg, **kwargs)
else:
raise NotImplementedError
def _import(self, stdio=None):
if self.module is None:
DynamicLoading.add_libs_path(self.libs_path)
self.module = DynamicLoading.import_module(self.PLUGIN_COMPONENT_NAME, stdio)
def _export(self, stdio=None):
if self.module:
DynamicLoading.remove_libs_path(self.libs_path)
DynamicLoading.export_module(self.PLUGIN_COMPONENT_NAME, stdio)
# this is PyScriptPlugin demo
# class InitPlugin(PyScriptPlugin):
# FLAG_FILE = 'init.py'
# PLUGIN_COMPONENT_NAME = 'init'
# PLUGIN_TYPE = PluginType.INIT
# def __init__(self, component_name, plugin_path, version):
# super(InitPlugin, self).__init__(component_name, plugin_path, version)
# @pyScriptPluginExec
# def init(self, components, ssh_clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
# pass
class ParamPlugin(Plugin):
class ConfigItem(object):
def __init__(self, name, default=None, require=False, need_restart=False, need_redeploy=False):
self.name = name
self.default = default
self.require = require
self.need_restart = need_restart
self.need_redeploy = need_redeploy
PLUGIN_TYPE = PluginType.PARAM
DEF_PARAM_YAML = 'parameter.yaml'
FLAG_FILE = DEF_PARAM_YAML
def __init__(self, component_name, plugin_path, version):
super(ParamPlugin, self).__init__(component_name, plugin_path, version)
self.def_param_yaml_path = os.path.join(self.plugin_path, self.DEF_PARAM_YAML)
self._src_data = None
@property
def params(self):
if self._src_data is None:
try:
self._src_data = {}
with open(self.def_param_yaml_path, 'rb') as f:
configs = yaml.load(f)
for conf in configs:
self._src_data[conf['name']] = ParamPlugin.ConfigItem(
conf['name'],
ConfigUtil.get_value_from_dict(conf, 'default', None),
ConfigUtil.get_value_from_dict(conf, 'require', False),
ConfigUtil.get_value_from_dict(conf, 'need_restart', False),
ConfigUtil.get_value_from_dict(conf, 'need_redeploy', False),
)
except:
pass
return self._src_data
def get_need_redeploy_items(self):
items = []
params = self.params
for name in params:
conf = params[name]
if conf.need_redeploy:
items.append(name)
return items
def get_need_restart_items(self):
items = []
params = self.params
for name in params:
conf = params[name]
if conf.need_restart:
items.append(name)
return items
def get_params_default(self):
temp = {}
params = self.params
for name in params:
conf = params[name]
temp[conf.name] = conf.default
return temp
class InstallPlugin(Plugin):
class FileItem(object):
def __init__(self, src_path, target_path, _type):
self.src_path = src_path
self.target_path = target_path
self.type = _type if _type else 'file'
PLUGIN_TYPE = PluginType.INSTALL
FILES_MAP_YAML = 'file_map.yaml'
FLAG_FILE = FILES_MAP_YAML
def __init__(self, component_name, plugin_path, version):
super(InstallPlugin, self).__init__(component_name, plugin_path, version)
self.file_map_path = os.path.join(self.plugin_path, self.FILES_MAP_YAML)
self._file_map = None
@property
def file_map(self):
if self._file_map is None:
try:
self._file_map = {}
with open(self.file_map_path, 'rb') as f:
file_map = yaml.load(f)
for data in file_map:
k = data['src_path']
if k[0] != '.':
k = '.%s' % os.path.join('/', k)
self._file_map[k] = InstallPlugin.FileItem(
k,
ConfigUtil.get_value_from_dict(data, 'target_path', k),
ConfigUtil.get_value_from_dict(data, 'type', None)
)
except:
pass
return self._file_map
def file_list(self):
file_map = self.file_map
return [file_map[k] for k in file_map]
class ComponentPluginLoader(object):
PLUGIN_TYPE = None
def __init__(self, home_path, plugin_type=PLUGIN_TYPE, stdio=None):
if plugin_type:
self.PLUGIN_TYPE = plugin_type
if not self.PLUGIN_TYPE:
raise NotImplementedError
        self.plugin_cls = getattr(sys.modules[__name__], self.PLUGIN_TYPE.value, False)
        if not self.plugin_cls:
            raise ImportError(self.PLUGIN_TYPE.value)
        self.stdio = stdio
        self.path = home_path
        self.component_name = os.path.split(self.path)[1]
        self._plugins = {}
    def get_plugins(self):
        plugins = []
        for flag_path in glob('%s/*/%s' % (self.path, self.plugin_cls.FLAG_FILE)):
            if flag_path in self._plugins:
                plugins.append(self._plugins[flag_path])
            else:
                path, _ = os.path.split(flag_path)
                _, version = os.path.split(path)
                plugin = self.plugin_cls(self.component_name, path, version)
self._plugins[flag_path] = plugin
plugins.append(plugin)
return plugins
def get_best_plugin(self, version):
version = version.split('.')
plugins = []
for plugin in self.get_plugins():
if plugin.version == version:
return plugin
if plugin.version < version:
plugins.append(plugin)
if plugins:
plugin = max(plugins, key=lambda x: x.version)
self.stdio and getattr(self.stdio, 'warn', print)(
'%s %s plugin version %s not found, use the best suitable version %s\n. Use `obd update` to update local plugin repository' %
(self.component_name, self.PLUGIN_TYPE.name.lower(), '.'.join(version), '.'.join(plugin.version))
)
return plugin
return None
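# get_best_plugin demo (path and version are illustrative): an exact version
# match is preferred; otherwise the newest plugin older than the requested
# version is returned, with a warning suggesting `obd update`.
# loader = ComponentPluginLoader('/usr/obd/plugins/oceanbase-ce', PluginType.START)
# plugin = loader.get_best_plugin('3.1.0')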
class PyScriptPluginLoader(ComponentPluginLoader):
class PyScriptPluginType(object):
def __init__(self, name, value):
self.name = name
self.value = value
PLUGIN_TYPE = PluginType.PY_SCRIPT
def __init__(self, home_path, script_name=None, stdio=None):
if not script_name:
raise NotImplementedError
type_name = 'PY_SCRIPT_%s' % script_name.upper()
type_value = 'PyScript%sPlugin' % ''.join([word.capitalize() for word in script_name.split('_')])
self.PLUGIN_TYPE = PyScriptPluginLoader.PyScriptPluginType(type_name, type_value)
if not getattr(sys.modules[__name__], type_value, False):
self._create_(script_name)
super(PyScriptPluginLoader, self).__init__(home_path, stdio=stdio)
def _create_(self, script_name):
exec('''
class %s(PyScriptPlugin):
FLAG_FILE = '%s.py'
PLUGIN_COMPONENT_NAME = '%s'
def __init__(self, component_name, plugin_path, version):
super(%s, self).__init__(component_name, plugin_path, version)
@staticmethod
def set_plugin_type(plugin_type):
%s.PLUGIN_TYPE = plugin_type
@pyScriptPluginExec
def %s(self, components, ssh_clients, cluster_config, cmd, options, stdio, *arg, **kwargs):
pass
''' % (self.PLUGIN_TYPE.value, script_name, script_name, self.PLUGIN_TYPE.value, self.PLUGIN_TYPE.value, script_name))
clz = locals()[self.PLUGIN_TYPE.value]
setattr(sys.modules[__name__], self.PLUGIN_TYPE.value, clz)
clz.set_plugin_type(self.PLUGIN_TYPE)
return clz
class PluginManager(Manager):
RELATIVE_PATH = 'plugins'
# The directory structure for plugin is ./plugins/{component_name}/{version}
def __init__(self, home_path, stdio=None):
super(PluginManager, self).__init__(home_path, stdio=stdio)
self.component_plugin_loaders = {}
self.py_script_plugin_loaders = {}
for plugin_type in PluginType:
self.component_plugin_loaders[plugin_type] = {}
        # PyScriptPluginLoader is a customized script loader that needs special handling,
        # so drop PluginType.PY_SCRIPT from component_plugin_loaders.
        del self.component_plugin_loaders[PluginType.PY_SCRIPT]
def get_best_plugin(self, plugin_type, component_name, version):
if plugin_type not in self.component_plugin_loaders:
return None
loaders = self.component_plugin_loaders[plugin_type]
if component_name not in loaders:
loaders[component_name] = ComponentPluginLoader(os.path.join(self.path, component_name), plugin_type, self.stdio)
loader = loaders[component_name]
return loader.get_best_plugin(version)
    # Mainly used to fetch custom Python script plugins.
    # Unlike get_best_plugin, this method can load Python script plugins that are
    # not registered in PluginType. This makes it easy to add a custom plugin:
    # just create the matching Python file in the plugin repository and expose a
    # method of the same name. It also makes it easier to describe the whole
    # workflow through plugins later on.
def get_best_py_script_plugin(self, script_name, component_name, version):
if script_name not in self.py_script_plugin_loaders:
self.py_script_plugin_loaders[script_name] = {}
loaders = self.py_script_plugin_loaders[script_name]
if component_name not in loaders:
loaders[component_name] = PyScriptPluginLoader(os.path.join(self.path, component_name), script_name, self.stdio)
loader = loaders[component_name]
return loader.get_best_plugin(version)
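# PluginManager demo (component names and versions are illustrative):
# manager = PluginManager('/usr/obd')
# start_plugin = manager.get_best_plugin(PluginType.START, 'oceanbase-ce', '3.1.0')
# init_plugin = manager.get_best_py_script_plugin('init', 'oceanbase-ce', '3.1.0')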
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import sys
import hashlib
from glob import glob
from _rpm import Package
from _arch import getBaseArch
from tool import DirectoryUtil, FileUtil, YamlLoader
from _manager import Manager
class LocalPackage(Package):
class RpmObject(object):
def __init__(self, headers, files):
self.files = files
self.opens = {}
self.headers = headers
def __exit__(self, *arg, **kwargs):
for path in self.opens:
self.opens[path].close()
def __enter__(self):
self.__exit__()
self.opens = {}
return self
def extractfile(self, name):
if name not in self.files:
raise KeyError("member %s could not be found" % name)
path = self.files[name]
if path not in self.opens:
self.opens[path] = open(path, 'rb')
return self.opens[path]
def __init__(self, path, name, version, files, release=None, arch=None):
self.name = name
self.version = version
self.md5 = None
self.release = release if release else version
self.arch = arch if arch else getBaseArch()
self.headers = {}
self.files = files
self.path = path
self.package()
def package(self):
count = 0
dirnames = []
filemd5s = []
filemodes = []
basenames = []
dirindexes = []
filelinktos = []
dirnames_map = {}
m_sum = hashlib.md5()
for src_path in self.files:
target_path = self.files[src_path]
dirname, basename = os.path.split(src_path)
if dirname not in dirnames_map:
dirnames.append(dirname)
dirnames_map[dirname] = count
count += 1
basenames.append(basename)
dirindexes.append(dirnames_map[dirname])
if os.path.islink(target_path):
filemd5s.append('')
filelinktos.append(os.readlink(target_path))
                filemodes.append(-24065)  # 0xa1ff as a signed 16-bit int: symlink mode (lrwxrwxrwx)
else:
m = hashlib.md5()
with open(target_path, 'rb') as f:
m.update(f.read())
m_value = m.hexdigest().encode(sys.getdefaultencoding())
m_sum.update(m_value)
filemd5s.append(m_value)
filelinktos.append('')
filemodes.append(os.stat(target_path).st_mode)
self.headers = {
'dirnames': dirnames,
'filemd5s': filemd5s,
'filemodes': filemodes,
'basenames': basenames,
'dirindexes': dirindexes,
'filelinktos': filelinktos,
}
self.md5 = m_sum.hexdigest()
def open(self):
return self.RpmObject(self.headers, self.files)
class Repository(object):
_DATA_FILE = '.data'
def __init__(self, name, repository_dir, stdio=None):
self.repository_dir = repository_dir
self.name = name
self.version = None
self.hash = None
self.stdio = stdio
self._load()
def __str__(self):
return '%s-%s-%s' % (self.name, self.version, self.hash)
def __hash__(self):
return hash(self.repository_dir)
def is_shadow_repository(self):
if os.path.exists(self.repository_dir):
return os.path.islink(self.repository_dir)
return False
@property
def data_file_path(self):
path = os.readlink(self.repository_dir) if os.path.islink(self.repository_dir) else self.repository_dir
return os.path.join(path, Repository._DATA_FILE)
    # It is unclear whether the requirename list of the open-source RPM contains only mandatory dependencies.
def require_list(self):
return []
    # Since it is unclear whether the open-source RPM requirename list covers only mandatory
    # dependencies, dependencies are checked by running ldd against the bin files for now.
def bin_list(self, plugin):
files = []
if self.version and self.hash:
for file_item in plugin.file_list():
if file_item.type == 'bin':
files.append(os.path.join(self.repository_dir, file_item.target_path))
return files
def file_list(self, plugin):
files = []
if self.version and self.hash:
for file_item in plugin.file_list():
files.append(os.path.join(self.repository_dir, file_item.target_path))
return files
def file_check(self, plugin):
for file_path in self.file_list(plugin):
if not os.path.exists(file_path):
return False
return True
def __eq__(self, other):
if isinstance(other, self.__class__):
return self.version == other.version and self.hash == other.hash
if isinstance(other, dict):
return self.version == other['version'] and self.hash == other['hash']
def _load(self):
try:
with open(self.data_file_path, 'r') as f:
data = YamlLoader().load(f)
self.version = data['version']
self.hash = data['hash']
except:
pass
def _parse_path(self):
if self.is_shadow_repository():
path = os.readlink(self.repository_dir)
else:
path = self.repository_dir
path = path.strip('/')
path, _hash = os.path.split(path)
path, version = os.path.split(path)
if not self.version:
self.version = version
def _dump(self):
data = {'version': self.version, 'hash': self.hash}
try:
with open(self.data_file_path, 'w') as f:
YamlLoader().dump(data, f)
return True
except:
self.stdio and getattr(self.stdio, 'exception', print)('dump %s to %s failed' % (data, self.data_file_path))
return False
def load_pkg(self, pkg, plugin):
if self.is_shadow_repository():
            self.stdio and getattr(self.stdio, 'print', print)('%s is a shadow repository' % self)
return False
hash_path = os.path.join(self.repository_dir, '.hash')
if self.hash == pkg.md5 and self.file_check(plugin):
return True
self.clear()
try:
file_map = plugin.file_map
with pkg.open() as rpm:
files = {}
links = {}
dirnames = rpm.headers.get("dirnames")
basenames = rpm.headers.get("basenames")
dirindexes = rpm.headers.get("dirindexes")
filelinktos = rpm.headers.get("filelinktos")
filemd5s = rpm.headers.get("filemd5s")
filemodes = rpm.headers.get("filemodes")
for i in range(len(basenames)):
path = os.path.join(dirnames[dirindexes[i]], basenames[i])
if isinstance(path, bytes):
path = path.decode()
if not path.startswith('./'):
path = '.%s' % path
files[path] = i
for src_path in file_map:
if src_path not in files:
                        raise Exception('%s not found in package' % src_path)
idx = files[src_path]
file_item = file_map[src_path]
target_path = os.path.join(self.repository_dir, file_item.target_path)
if filemd5s[idx]:
fd = rpm.extractfile(src_path)
self.stdio and getattr(self.stdio, 'verbose', print)('extract %s to %s' % (src_path, target_path))
with FileUtil.open(target_path, 'wb', self.stdio) as f:
FileUtil.copy_fileobj(fd, f)
mode = filemodes[idx] & 0x1ff
if mode != 0o744:
os.chmod(target_path, mode)
elif filelinktos[idx]:
links[target_path] = filelinktos[idx]
else:
raise Exception('%s is directory' % src_path)
                for link in links:
                    self.stdio and getattr(self.stdio, 'verbose', print)('link %s to %s' % (links[link], link))
                    os.symlink(links[link], link)
self.version = pkg.version
self.hash = pkg.md5
if self._dump():
return True
else:
self.clear()
except:
self.stdio and getattr(self.stdio, 'exception', print)('failed to extract file from %s' % pkg.path)
self.clear()
return False
def clear(self):
return DirectoryUtil.rm(self.repository_dir, self.stdio) and DirectoryUtil.mkdir(self.repository_dir, stdio=self.stdio)
class ComponentRepository(object):
def __init__(self, name, repository_dir, stdio=None):
self.repository_dir = repository_dir
self.stdio = stdio
self.name = name
DirectoryUtil.mkdir(self.repository_dir, stdio=stdio)
def get_instance_repositories(self, version):
repositories = {}
for tag in os.listdir(self.repository_dir):
path = os.path.join(self.repository_dir, tag)
if os.path.islink(path):
continue
repository = Repository(self.name, path, self.stdio)
if repository.hash:
repositories[repository.hash] = repository
return repositories
def get_shadow_repositories(self, version, instance_repositories={}):
repositories = {}
for tag in os.listdir(self.repository_dir):
path = os.path.join(self.repository_dir, tag)
if not os.path.islink(path):
continue
_, md5 = os.path.split(os.readlink(path))
if md5 in instance_repositories:
repositories[tag] = instance_repositories[md5]
else:
repository = Repository(self.name, path, self.stdio)
if repository.hash:
repositories[repository.hash] = repository
return repositories
def get_repository_by_version(self, version, tag=None):
        path_pattern = os.path.join(self.repository_dir, version, tag if tag else '*')
        for path in glob(path_pattern):
repository = Repository(self.name, path, self.stdio)
if repository.hash:
return repository
return None
def get_repository_by_tag(self, tag, version=None):
        path_pattern = os.path.join(self.repository_dir, version if version else '*', tag)
        for path in glob(path_pattern):
repository = Repository(self.name, path, self.stdio)
if repository.hash:
return repository
return None
def get_repository(self, version=None, tag=None):
if version:
return self.get_repository_by_version(version, tag)
        version = []
        for rep_version in os.listdir(self.repository_dir):
            # compare version components numerically so that e.g. 3.10 ranks above 3.9
            rep_version = [int(n) for n in rep_version.split('.')]
            if rep_version > version:
                version = rep_version
        if version:
            return self.get_repository_by_version('.'.join(str(n) for n in version), tag)
return None
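The version-selection loop in `get_repository` above picks the highest dotted version directory; comparing the components as integers matters, because plain string comparison would rank "3.9.1" above "3.10.0". A minimal standalone sketch of that comparison, using a hypothetical helper name `max_version`:

```python
def max_version(versions):
    """Return the highest dotted version string, comparing components as integers."""
    best = []
    for v in versions:
        parts = [int(n) for n in v.split('.')]
        # list comparison is lexicographic over the integer components,
        # so [3, 10, 0] correctly ranks above [3, 9, 1]
        if parts > best:
            best = parts
    return '.'.join(str(n) for n in best) if best else None


print(max_version(['3.9.1', '3.10.0', '3.1.0']))  # 3.10.0
```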
class RepositoryManager(Manager):
RELATIVE_PATH = 'repository'
    # repository directory layout: ./repository/{component_name}/{version}/{tag or hash}
def __init__(self, home_path, stdio=None):
super(RepositoryManager, self).__init__(home_path, stdio=stdio)
self.repositories = {}
self.component_repositoies = {}
    def get_repositoryies(self, name):
        repositories = {}
        path_pattern = os.path.join(self.path, name, '*', '*')
        for path in glob(path_pattern):
            repository = Repository(name, path, self.stdio)
            if repository.hash:
                repositories[repository.hash] = repository
        return repositories
def get_repository_by_version(self, name, version, tag=None, instance=True):
if not tag:
tag = name
path = os.path.join(self.path, name, version, tag)
if path not in self.repositories:
if name not in self.component_repositoies:
self.component_repositoies[name] = ComponentRepository(name, os.path.join(self.path, name), self.stdio)
repository = self.component_repositoies[name].get_repository(version, tag)
if repository:
self.repositories[repository.repository_dir] = repository
self.repositories[path] = repository
else:
repository = self.repositories[path]
return self.get_instance_repository_from_shadow(repository) if instance else repository
def get_repository(self, name, version=None, tag=None, instance=True):
if version:
return self.get_repository_by_version(name, version, tag)
if not tag:
tag = name
if name not in self.component_repositoies:
path = os.path.join(self.path, name)
self.component_repositoies[name] = ComponentRepository(name, path, self.stdio)
repository = self.component_repositoies[name].get_repository(version, tag)
if repository:
self.repositories[repository.repository_dir] = repository
return self.get_instance_repository_from_shadow(repository) if repository and instance else repository
def create_instance_repository(self, name, version, _hash):
path = os.path.join(self.path, name, version, _hash)
if path not in self.repositories:
self._mkdir(path)
repository = Repository(name, path, self.stdio)
self.repositories[path] = repository
return self.repositories[path]
def get_repository_allow_shadow(self, name, version, tag=None):
path = os.path.join(self.path, name, version, tag if tag else name)
if os.path.exists(path):
if path not in self.repositories:
self.repositories[path] = Repository(name, path, self.stdio)
return self.repositories[path]
repository = Repository(name, path, self.stdio)
repository.version = version
return repository
def create_tag_for_repository(self, repository, tag, force=False):
if repository.is_shadow_repository():
return False
path = os.path.join(self.path, repository.name, repository.version, tag)
if os.path.exists(path):
if not os.path.islink(path):
return False
src_path = os.readlink(path)
if os.path.normcase(src_path) == os.path.normcase(repository.repository_dir):
return True
if not force:
return False
DirectoryUtil.rm(path)
try:
os.symlink(repository.repository_dir, path)
return True
except:
pass
return False
def get_instance_repository_from_shadow(self, repository):
if not isinstance(repository, Repository) or not repository.is_shadow_repository():
return repository
try:
path = os.readlink(repository.repository_dir)
if path not in self.repositories:
self.repositories[path] = Repository(repository.name, path, self.stdio)
return self.repositories[path]
except:
pass
return None
# coding: utf-8
# OceanBase Deploy.
# Copyright (C) 2021 OceanBase
#
# This file is part of OceanBase Deploy.
#
# OceanBase Deploy is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OceanBase Deploy is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with OceanBase Deploy. If not, see <https://www.gnu.org/licenses/>.
from __future__ import absolute_import, division, print_function
import os
import sys
import rpmfile
# The lzma modules for py3 and py2 differ.
# python3 ships lzma in the standard library.
# python2 has no pure-python lzma; it relies on a dynamic library. pip provides the
# third-party pyliblzma, also a dynamic library, the same one CentOS's bundled python uses.
# python2's lzma api differs from python3's.
# The third-party backports.lzma has the same api as python3's lzma.
# But rpmfile only tries `import lzma`, which fails under python2.
# So import rpmfile first, then inject the correct lzma dependency into rpmfile.
if sys.version_info.major == 2:
from backports import lzma
setattr(sys.modules['rpmfile'], 'lzma', getattr(sys.modules[__name__], 'lzma'))
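A small self-contained sketch of the injection pattern used above: a stub module references a global `lzma` it never imported itself, and the caller patches the dependency in afterwards via `sys.modules`. The stub module name `stub_rpm_reader` is made up for illustration; it stands in for rpmfile.

```python
import sys
import types

# Create a stand-in module that, like rpmfile under python2, uses a global
# `lzma` without having imported it successfully itself.
stub = types.ModuleType('stub_rpm_reader')
exec("def backend_name():\n    return lzma.__name__\n", stub.__dict__)
sys.modules['stub_rpm_reader'] = stub

import lzma  # python3 standard library

# Inject the dependency into the stub module's namespace, as done for rpmfile.
setattr(sys.modules['stub_rpm_reader'], 'lzma', lzma)
print(stub.backend_name())  # lzma
```

The function defined by `exec` resolves its globals from the stub module's `__dict__`, so the `setattr` call makes the name visible to it at call time.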
class Package(object):
def __init__(self, path):
self.path = path
with self.open() as rpm:
self.name = rpm.headers.get('name').decode()
self.version = rpm.headers.get('version').decode()
self.release = rpm.headers.get('release').decode()
self.arch = rpm.headers.get('arch').decode()
self.md5 = rpm.headers.get('md5').decode()
def __str__(self):
return 'name: %s\nversion: %s\nrelease:%s\narch: %s\nmd5: %s' % (self.name, self.version, self.release, self.arch, self.md5)
@property
def file_name(self):
return '%s-%s-%s.%s.rpm' % (self.name, self.version, self.release, self.arch)
def open(self):
return rpmfile.open(self.path)
#!/bin/bash
if [ `id -u` != 0 ] ; then
    echo "Please use root to run"
    exit 1
fi
obd_dir=`dirname $0`
python_bin='/usr/bin/python'
python_path=`whereis python`
for bin in ${python_path[@]}; do
if [ -x $bin ]; then
python_bin=$bin
break 1
fi
done
read -p "Enter python path [default $python_bin]:"
if [ "x$REPLY" != "x" ]; then
python_bin=$REPLY
fi
rm -fr /usr/obd && mkdir -p /usr/obd
rm -fr $obd_dir/mirror/remote && mkdir -p $obd_dir/mirror/remote && cd $obd_dir/mirror/remote
wget http://yum.tbsite.net/mirrors/oceanbase/OceanBase.repo
cp -r -d $obd_dir/* /usr/obd
cd /usr/obd/plugins && ln -sf oceanbase oceanbase-ce
cp -f /usr/obd/profile/obd.sh /etc/profile.d/obd.sh
rm -fr /usr/bin/obd
echo -e "#!/bin/bash\n$python_bin /usr/obd/_cmd.py \$*" > /usr/bin/obd
chmod +x /usr/bin/obd
echo -e 'Installation of obd finished successfully\nPlease source /etc/profile.d/obd.sh to enable it'
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
- name: z1
      # Please don't use a hostname; only IPs are supported.
ip: 172.19.33.2
- name: z2
ip: 172.19.33.3
- name: z3
ip: 172.19.33.4
global:
    # Please set devname to the name of the network adapter whose IP is listed under servers.
    # If servers is set to "127.0.0.1", set devname to "lo".
    # If the current IP is 192.168.1.10 and its network adapter is named "eth0", use "eth0".
devname: eth0
    # If the machine has less than 50G of memory, use the settings in "mini-single-example.yaml" and adjust them slightly.
memory_limit: 64G
datafile_disk_percentage: 20
cluster_id: 1
    # To run multiple observer processes on a single node, each process must use different ports.
    # When the cluster is deployed across multiple nodes, the port and path settings can be the same.
z1:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone1
z2:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone2
z3:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone3
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
- name: z1
      # Please don't use a hostname; only IPs are supported.
ip: 172.19.33.2
- name: z2
ip: 172.19.33.3
- name: z3
ip: 172.19.33.4
global:
    # Please set devname to the name of the network adapter whose IP is listed under servers.
    # If servers is set to "127.0.0.1", set devname to "lo".
    # If the current IP is 192.168.1.10 and its network adapter is named "eth0", use "eth0".
devname: eth0
    # If the machine has less than 50G of memory, use the settings in "mini-single-example.yaml" and adjust them slightly.
memory_limit: 64G
datafile_disk_percentage: 20
cluster_id: 1
    # To run multiple observer processes on a single node, each process must use different ports.
    # When the cluster is deployed across multiple nodes, the port and path settings can be the same.
z1:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone1
z2:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone2
z3:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone3
obproxy:
servers:
- 192.168.1.5
global:
listen_port: 2883
home_path: /root/obproxy
# oceanbase root server list
    # format: ip:mysql_port;ip:mysql_port
rs_list: 192.168.1.2:2883;192.168.1.3:2883;192.168.1.4:2883
enable_cluster_checkout: false
oceanbase-ce:
servers:
    # Please don't use a hostname; only IPs are supported.
- 127.0.0.1
global:
home_path: /root/observer
    # Please set devname to the name of the network adapter whose IP is listed under servers.
    # If servers is set to "127.0.0.1", set devname to "lo".
    # If the current IP is 192.168.1.10 and its network adapter is named "eth0", use "eth0".
devname: lo
mysql_port: 2883
rpc_port: 2882
zone: zone1
    # If the machine has less than 50G of memory, use the settings in "mini-single-example.yaml" and adjust them slightly.
memory_limit: 64G
datafile_disk_percentage: 20
cluster_id: 1
## Only need to configure when remote login is required
# user:
# username: your username
# password: your password if need
# key_file: your ssh-key file path if need
oceanbase-ce:
servers:
- name: z1
      # Please don't use a hostname; only IPs are supported.
ip: 172.19.33.2
- name: z2
ip: 172.19.33.3
- name: z3
ip: 172.19.33.4
global:
    # Please set devname to the name of the network adapter whose IP is listed under servers.
    # If servers is set to "127.0.0.1", set devname to "lo".
    # If the current IP is 192.168.1.10 and its network adapter is named "eth0", use "eth0".
devname: eth0
cluster_id: 1
datafile_size: 8G
    # Please set memory_limit to a value that matches the machine's available resources.
memory_limit: 8G
system_memory: 4G
stack_size: 512K
cpu_count: 16
cache_wash_threshold: 1G
__min_full_resource_pool_memory: 268435456
workers_per_cpu_quota: 10
schema_history_expire_time: 1d
    # The value of net_thread_count should preferably equal the number of CPU cores.
net_thread_count: 4
major_freeze_duty_time: Disable
minor_freeze_times: 10
enable_separate_sys_clog: 0
enable_merge_by_turn: FALSE
datafile_disk_percentage: 20
z1:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone1
z2:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone2
z3:
mysql_port: 2883
rpc_port: 2882
home_path: /root/observer
zone: zone3
oceanbase-ce:
servers:
    # Please don't use a hostname; only IPs are supported.
- 127.0.0.1
global:
home_path: /root/observer
    # Please set devname to the name of the network adapter whose IP is listed under servers.
    # If servers is set to "127.0.0.1", set devname to "lo".
    # If the current IP is 192.168.1.10 and its network adapter is named "eth0", use "eth0".
devname: lo
mysql_port: 2883
rpc_port: 2882
zone: zone1
cluster_id: 1
datafile_size: 8G
    # Please set memory_limit to a value that matches the machine's available resources.
memory_limit: 8G
system_memory: 4G
stack_size: 512K
cpu_count: 16
cache_wash_threshold: 1G
__min_full_resource_pool_memory: 268435456
workers_per_cpu_quota: 10
schema_history_expire_time: 1d
    # The value of net_thread_count should preferably equal the number of CPU cores.
net_thread_count: 4
sys_bkgd_migration_retry_num: 3
minor_freeze_times: 10
enable_separate_sys_clog: 0
enable_merge_by_turn: FALSE
datafile_disk_percentage: 20
MySQL-python==1.2.5
PyMySQL==1.0.2
use oceanbase;
create user 'admin' IDENTIFIED BY 'admin';
grant all on *.* to 'admin' WITH GRANT OPTION;
create database obproxy;
alter system set _enable_split_partition = true;
- src_path: ./home/admin/obproxy-3.1.0/bin/obproxy
target_path: bin/obproxy
type: bin
mode: 755