Commit 002a120d authored by bao liang, committed by easyscheduler

refactor zk client (#687)

* update english documents

* refactor zk client

* update documents

* update zkclient

* update zkclient

* update documents

* add architecture-design

* change i18n

* update i18n
Parent: 8cf5911f
Easy Scheduler
============
[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
[![Total Lines](https://tokei.rs/b1/github/analysys/EasyScheduler?category=lines)](https://github.com/analysys/EasyScheduler)
> Easy Scheduler for Big Data
[![Stargazers over time](https://starchart.cc/analysys/EasyScheduler.svg)](https://starchart.cc/analysys/EasyScheduler)
[![EN doc](https://img.shields.io/badge/document-English-blue.svg)](README.md)
[![CN doc](https://img.shields.io/badge/文档-中文版-blue.svg)](README_zh_CN.md)
### Design features:
Easy Scheduler is a distributed, easily extensible, visual DAG workflow scheduling system, dedicated to solving the complex dependencies in data processing and making the scheduling system work `out of the box`.
Its main objectives are as follows:
- Associate tasks according to their dependencies in a DAG graph, and visualize the running state of each task in real time.
- Support many task types: Shell, MR, Spark, SQL (mysql, postgresql, hive, sparksql), Python, Sub_Process, Procedure, etc.
- Support process scheduling, dependency scheduling, manual scheduling, manual pause/stop/recovery, failed retry/alarm, recovery from specified nodes, killing tasks, etc.
- Support process priority, task priority, task failover, and task timeout alarm/failure
- Support process-level global parameters and node-level custom parameter settings
- Support online upload/download and management of resource files, and online file creation and editing
- Support online viewing and tailing of task logs, online log download, etc.
- Implement cluster HA: decentralized Master and Worker clusters coordinated through ZooKeeper (see the registration sketch after this list)
- Support online viewing of `Master/Worker` CPU load and memory usage
- Support tree/Gantt chart display of process run history, and statistics on task status and process status
- Support backfilling data
- Support multi-tenancy
- Support internationalization
- More features are waiting for partners to explore
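The HA bullet above rests on ZooKeeper ephemeral nodes: each Master/Worker registers an `EPHEMERAL_SEQUENTIAL` znode, so a crashed server's node disappears with its session and peers can fail over. Below is a minimal Curator sketch of that registration pattern — the connection string and paths are illustrative assumptions, not the project's actual values; the same `create().withMode(CreateMode.EPHEMERAL_SEQUENTIAL)` call is visible in the `ZKWorkerClient` diff further down this page:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class RegisterSketch {
    public static void main(String[] args) throws Exception {
        CuratorFramework zkClient = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        zkClient.start();
        // Ephemeral node: it vanishes when the session dies, so peers observe the failure.
        String znode = zkClient.create()
                .creatingParentsIfNeeded()
                .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
                .forPath("/easyscheduler/masters/192.168.0.1_", "heartbeat".getBytes());
        System.out.println("registered at " + znode);
    }
}
```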
### What's in Easy Scheduler
Stability | Easy to use | Features | Scalability |
-- | -- | -- | --
Decentralized multi-master and multi-worker | Visual process definition: key information such as task status, task type, retry times, task running machine, and visual variables is available at a glance. | Support pause and recover operations | Support custom task types
HA is supported by itself | All process definition operations are visualized: drag tasks to draw the DAG, and configure data sources and resources; an API is also provided for third-party systems. | Users on EasyScheduler can achieve many-to-one or one-to-one mapping to Hadoop users through tenants, which is very important for scheduling big data jobs. | The scheduler uses distributed scheduling, and the overall scheduling capability increases linearly with the scale of the cluster. Master and Worker support dynamic online and offline.
Overload processing: a task queue mechanism in which the number of schedulable tasks on a single machine can be flexibly configured; excess tasks are cached in the task queue and do not jam the machine. | One-click deployment | Supports traditional shell tasks as well as big data platform task scheduling: MR, Spark, SQL (mysql, postgresql, hive, sparksql), Python, Procedure, Sub_Process | |
### Partial system screenshots
![image](https://user-images.githubusercontent.com/48329107/61368744-1f5f3b00-a8c1-11e9-9cf1-10f8557a6b3b.png)
![image](https://user-images.githubusercontent.com/48329107/61368966-9dbbdd00-a8c1-11e9-8dcc-a9469d33583e.png)
![image](https://user-images.githubusercontent.com/48329107/61372146-f347b800-a8c8-11e9-8882-66e8934ada23.png)
### Document
- <a href="https://analysys.github.io/easyscheduler_docs_cn/后端部署文档.html" target="_blank">Backend deployment documentation</a>
- <a href="https://analysys.github.io/easyscheduler_docs_cn/前端部署文档.html" target="_blank">Front-end deployment documentation</a>
- [**User manual**](https://analysys.github.io/easyscheduler_docs_cn/系统使用手册.html?_blank "User manual")
- [**Upgrade document**](https://analysys.github.io/easyscheduler_docs_cn/升级文档.html?_blank "Upgrade document")
- <a href="http://52.82.13.76:8888" target="_blank">Online Demo</a>
For more documentation, please refer to the <a href="https://analysys.github.io/easyscheduler_docs_cn/" target="_blank">EasyScheduler online documentation</a>.
### Recent R&D plan
Work plan of Easy Scheduler: [R&D plan](https://github.com/analysys/EasyScheduler/projects/1). The `In Develop` card holds the features for version 1.1.0; the `TODO` card holds work to be done, including feature ideas.
### How to contribute code
Contributions are very welcome. Please refer to the code submission process:
[How to contribute code](https://github.com/analysys/EasyScheduler/issues/310)
### Thanks
Easy Scheduler uses many excellent open source projects, such as google guava, guice, grpc, netty, ali bonecp, quartz, and many open source projects of Apache.
It is because we stand on the shoulders of these open source projects that Easy Scheduler could be born. We are very grateful to all the open source software we use! We also hope to be not only beneficiaries of open source but contributors to it, so we decided to open source Easy Scheduler and committed to maintaining it long term. We also hope that partners who share the same passion and conviction for open source will join in and contribute!
### Get Help
The fastest way to get a response from our developers is to submit an issue, or add our WeChat: 510570367.
### License
Please refer to [LICENSE](https://github.com/analysys/EasyScheduler/blob/dev/LICENSE) file.
......@@ -20,8 +20,7 @@
- Task State Statistics: It refers to the statistics of the number of tasks to be run, failed, running, completed and succeeded in a given time frame.
- Process State Statistics: It refers to the statistics of the number of waiting, failing, running, completing and succeeding process instances in a specified time range.
- Process Definition Statistics: The process definition created by the user and the process definition granted by the administrator to the user are counted.
- Queue Statistics: statistics of the worker execution queues, i.e. the number of tasks waiting to be executed and the number of tasks waiting to be killed
- Command Status Statistics: statistics of the number of executed commands
### Creating Process definitions
- Go to the project home page and click "Process Definitions" to enter the process definition list page.
......@@ -30,7 +29,7 @@
- Fill in the Node Name, Description, and Script fields.
- Selecting "task priority" will give priority to high-level tasks in the execution queue. Tasks with the same priority will be executed in the first-in-first-out order.
- Timeout alarm. Fill in "Overtime Time". When the task execution time exceeds the overtime, it can alarm and fail over time.
- Fill in "Custom Parameters" and refer to [Custom Parameters](#用户自定义参数)
- Fill in "Custom Parameters" and refer to [Custom Parameters](#Custom Parameters)
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61778402-42459e00-ae31-11e9-96c6-8fd7fed8fed2.png" width="60%" />
</p>
......@@ -57,15 +56,13 @@
- **A process definition in the offline state can be edited but not run**, so bringing the workflow online is the first step.
> Click the process definition to return to the process definition list, then click the "Online" icon to bring the process definition online.
> Before taking a workflow offline, the timed tasks in timed management must be taken offline first; only then can the workflow definition be taken offline successfully.
- Click "Run" to execute the process. Description of operation parameters:
* Failure strategy: **the strategy applied to other parallel task nodes when a task node fails**. "Continue" means the other task nodes execute normally; "End" means terminating all running tasks and the entire process.
* Notification strategy: when the process ends, send a process execution notification email according to the process status.
* Process priority: process runs are prioritized in five levels: highest, high, medium, low, and lowest. Higher-priority processes execute first in the execution queue, and processes with the same priority execute in first-in-first-out order.
* Worker group: the process can only be executed in the specified machine group. By default, it can be executed on any worker.
* Notification group: When the process ends or fault tolerance occurs, process information is sent to all members of the notification group by mail.
* Recipient: enter an email address and press Enter to save. When the process ends or fault tolerance occurs, an alert message is sent to the recipient list.
* Cc: enter an email address and press Enter to save. When the process ends or fault tolerance occurs, alarm messages are copied to the Cc list.
......@@ -78,7 +75,7 @@
<img src="https://user-images.githubusercontent.com/53217792/61780083-6a82cc00-ae34-11e9-9839-fda9153f693b.png" width="60%" />
</p>
> Complement execution mode includes serial execution and parallel execution. In serial mode, the complement executes sequentially from May 1 to May 10; in parallel mode, the tasks from May 1 to May 10 execute simultaneously.
### Timing Process Definition
- Create timing: "Process Definition" -> "Timing"
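Timed processes are scheduled via Quartz (listed in the Thanks section), and the timing dialog previews the next five execution times of a crontab expression. Here is a minimal sketch of that preview computation, assuming the standard Quartz `CronExpression` API; the expression value below ("fire daily at 01:00") is just an example:

```java
import org.quartz.CronExpression;
import java.util.Date;

public class CronPreview {
    public static void main(String[] args) throws Exception {
        // Quartz cron fields: sec min hour day-of-month month day-of-week (year)
        CronExpression cron = new CronExpression("0 0 1 * * ? *");
        Date next = new Date();
        for (int i = 0; i < 5; i++) {            // "next five execution times"
            next = cron.getNextValidTimeAfter(next);
            System.out.println(next);
        }
    }
}
```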
......@@ -340,28 +337,28 @@ conf/common/hadoop.properties
### Create queues
- Queues are used when executing spark, mapreduce, and other programs that require a "queue" parameter.
- "Security" -> "Queue Manage" -> "Create Queue"
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61841945-078f4480-aec9-11e9-92fb-05b6f42f07d6.png" width="60%" />
</p>
### Create Tenants
- The tenant corresponds to a Linux account, which the worker server uses to submit jobs. If the account does not exist on Linux, the worker creates it when executing the task (a sketch of this check-then-create behavior follows the screenshot below).
- Tenant code: **the tenant code is the Linux account and must be unique.**
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61842372-8042d080-aeca-11e9-8c54-e3dee583eeff.png" width="60%" />
</p>
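Since the tenant maps to a Linux account that the worker may have to create on the fly, the logic amounts to a check-then-create on the worker host. Below is a hedged illustration of the behavior described above — not the project's actual code, and the tenant code value is an example:

```java
import java.io.IOException;

public class TenantAccountCheck {
    // Returns true if the Linux account already exists on this worker host.
    static boolean userExists(String user) throws IOException, InterruptedException {
        return new ProcessBuilder("id", "-u", user).start().waitFor() == 0;
    }

    public static void main(String[] args) throws Exception {
        String tenantCode = "etl_tenant"; // example tenant code (must be unique)
        if (!userExists(tenantCode)) {
            // Creating the account requires root privileges on the worker host.
            new ProcessBuilder("sudo", "useradd", tenantCode).inheritIO().start().waitFor();
        }
    }
}
```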
### Create Ordinary Users
- Users are divided into **administrator users** and **ordinary users**.
* Administrators have **authorization and user management** privileges, but no privileges for **creating projects and process definitions**.
* Ordinary users can **create projects and create, edit, and execute process definitions**.
* Note: **if a user switches tenants, all resources under the user's previous tenant will be copied to the new tenant.**
<p align="center">
......@@ -376,8 +373,8 @@ Create queues
</p>
### Create Worker Group
- Worker groups provide a mechanism for tasks to run on specified workers. Administrators create worker groups, which can be specified in task nodes and run parameters. If the specified group is deleted, or no group is specified, the task runs on any worker.
- A worker group contains multiple IP addresses (**aliases cannot be used**), separated by **English commas**, e.g. `192.168.220.188,192.168.220.189`.
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61842630-6b1a7180-aecb-11e9-8988-b4444de16b36.png" width="60%" />
......@@ -454,8 +451,6 @@ Create queues
#### Worker monitor
- Mainly displays worker-related information.
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61843277-ae75df80-aecd-11e9-9667-b9f1615b6f3b.png" width="60%" />
</p>
......@@ -495,7 +490,7 @@ Create queues
- Custom parameters: user-defined parameters local to the SHELL task; they replace `${variable}` placeholders in the script content.
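To make the substitution concrete, here is a minimal sketch of `${variable}` replacement as described above — an illustration under assumed semantics (unknown keys are left untouched), not the scheduler's actual implementation:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParamReplace {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{(\\w+)}");

    public static String replace(String script, Map<String, String> params) {
        Matcher m = PLACEHOLDER.matcher(script);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Substitute the parameter value; leave unknown keys as-is.
            String value = params.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(replace("echo ${bizdate}", Map.of("bizdate", "2019-07-01")));
        // prints: echo 2019-07-01
    }
}
```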
### SUB_PROCESS
- The sub-process node executes an external workflow definition as a task node.
> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png) task node from the toolbar onto the canvas and double-click it, as shown below:
<p align="center">
......
This diff is collapsed.
......@@ -15,13 +15,12 @@
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/project.png" width="60%" />
</p>
> The project home page contains task status statistics, process status statistics, process definition statistics, queue statistics, and command statistics
> The project home page contains task status statistics, process status statistics, and process definition statistics
- Task status statistics: the number of task instances waiting to run, failed, running, completed, and succeeded within a specified time range
- Process status statistics: the number of process instances waiting to run, failed, running, completed, and succeeded within a specified time range
- Process definition statistics: statistics of the process definitions created by the user and those granted to the user by the administrator
- Queue statistics: statistics of the worker execution queues, i.e. the number of tasks waiting to be executed and the number of tasks waiting to be killed
- Command statistics: statistics of the number of executed commands
### Creating a workflow definition
- Go to the project home page and click "Workflow Definition" to enter the workflow definition list page.
......@@ -46,7 +45,7 @@
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag3.png" width="60%" />
</p>
- Click "Save", enter the workflow definition name and description, and set global parameters.
- Click "Save", enter the workflow definition name and description, set global parameters, and refer to [Custom Parameters](#用户自定义参数)
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag4.png" width="60%" />
......@@ -58,7 +57,7 @@
- **A workflow definition that is not online can be edited but not run**, so bring the workflow online first
> Click the workflow definition to return to the workflow definition list, then click the "Online" icon to bring the workflow definition online.
> Before taking a workflow "offline", the timers in timed management must be taken offline first; only then can the workflow definition be taken offline successfully
> When taking a workflow definition offline, the timed tasks in timed management must be taken offline first; only then can the workflow definition be taken offline successfully
- Click "Run" to execute the workflow. Description of the run parameters:
* Failure strategy: **the strategy applied to other parallel task nodes when a task node fails**. "Continue" means the other task nodes execute normally; "End" means terminating all running tasks and the entire process.
......@@ -344,7 +343,7 @@ conf/common/hadoop.properties
### Creating ordinary users
- Users are divided into **administrator users** and **ordinary users**
* Administrators have **authorization and user management** privileges, but no privileges for **creating projects and workflow definitions**
* Ordinary users can **create projects and create, edit, and execute workflow definitions**
* Note: **if a user switches tenants, all resources under the user's previous tenant will be copied to the new tenant**
<p align="center">
......@@ -360,7 +359,7 @@ conf/common/hadoop.properties
</p>
### Creating worker groups
- Worker groups provide a mechanism for tasks to run on specified workers. Administrators set up worker groups, and each task node can set the worker group on which the task runs; if the group specified by the task is deleted, or no group is specified, the task runs on the worker specified by the process instance.
- Worker groups provide a mechanism for tasks to run on specified workers. Administrators create worker groups, which can be specified in task nodes and run parameters; if the specified group is deleted, or no group is specified, the task runs on any worker.
- A worker group contains multiple IP addresses (**aliases cannot be used**), separated by **English commas**
<p align="center">
......@@ -476,7 +475,7 @@ conf/common/hadoop.properties
- Custom parameters: user-defined parameters local to the SHELL task; they replace `${variable}` placeholders in the script content
### Sub-process node
- The sub-process node executes an external workflow definition as one of its own task nodes.
- The sub-process node executes an external workflow definition as a task node.
> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png) task node from the toolbar onto the canvas and double-click the task node, as shown below:
<p align="center">
......
......@@ -246,8 +246,9 @@ public abstract class AbstractZKClient {
return registerPath;
}
registerPath = createZNodePath(ZKNodeType.MASTER);
logger.info("register {} node {} success", zkNodeType.toString(), registerPath);
// handle dead server
handleDeadServer(registerPath, zkNodeType, Constants.DELETE_ZK_OP);
return registerPath;
......
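`handleDeadServer` itself is outside this hunk; below is a speculative sketch of what its `Constants.DELETE_ZK_OP` branch might look like, with the dead-server path layout and host extraction assumed rather than taken from the project:

```java
// Speculative sketch -- the real method lives elsewhere in AbstractZKClient.
// On (re-)registration, clear any stale "dead server" record left for this host.
protected void handleDeadServer(String zNode, ZKNodeType zkNodeType, String opType) throws Exception {
    String host = zNode.substring(zNode.lastIndexOf('/') + 1);          // assumed: node name encodes the host
    String deadServerPath = "/easyscheduler/dead-servers/" + zkNodeType + "_" + host; // assumed layout
    if (Constants.DELETE_ZK_OP.equals(opType)
            && zkClient.checkExists().forPath(deadServerPath) != null) {
        zkClient.delete().forPath(deadServerPath);                      // remove the stale record
    }
}
```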
......@@ -215,7 +215,7 @@ public class MasterServer implements CommandLineRunner, IStoppable {
if(Stopper.isRunning()) {
// send heartbeat to zk
if (StringUtils.isBlank(zkMasterClient.getMasterZNode())) {
logger.error("master send heartbeat to zk failed: can't find zookeeper regist path of master server");
logger.error("master send heartbeat to zk failed: can't find zookeeper path of master server");
return;
}
......
......@@ -170,6 +170,7 @@ public class ZKMasterClient extends AbstractZKClient {
if(StringUtils.isEmpty(serverPath)){
System.exit(-1);
}
masterZNode = serverPath;
} catch (Exception e) {
logger.error("register master failure : " + e.getMessage(),e);
System.exit(-1);
......
......@@ -100,7 +100,6 @@ public class ZKWorkerClient extends AbstractZKClient {
if(zkWorkerClient == null){
zkWorkerClient = new ZKWorkerClient();
}
return zkWorkerClient;
}
......@@ -112,19 +111,6 @@ public class ZKWorkerClient extends AbstractZKClient {
return serverDao;
}
public String initWorkZNode() throws Exception {
String heartbeatZKInfo = ResInfo.getHeartBeatInfo(new Date());
workerZNode = getZNodeParentPath(ZKNodeType.WORKER) + "/" + OSUtils.getHost() + "_";
workerZNode = zkClient.create().withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(workerZNode,
heartbeatZKInfo.getBytes());
logger.info("register worker node {} success", workerZNode);
return workerZNode;
}
/**
* register worker
*/
......@@ -134,6 +120,7 @@ public class ZKWorkerClient extends AbstractZKClient {
if(StringUtils.isEmpty(serverPath)){
System.exit(-1);
}
workerZNode = serverPath;
} catch (Exception e) {
logger.error("register worker failure : " + e.getMessage(),e);
System.exit(-1);
......
......@@ -51,7 +51,7 @@
</div>
<div class="clearfix list">
<div class="text">
Worker分组
{{$t('Worker group')}}
</div>
<div class="cont">
<m-worker-groups v-model="workerGroupId"></m-worker-groups>
......
......@@ -21,7 +21,7 @@
</div>
</div>
<div class="clearfix list">
<x-button type="info" style="margin-left:20px" shape="circle" :loading="spinnerLoading" @click="preview()">执行时间</x-button>
<x-button type="info" style="margin-left:20px" shape="circle" :loading="spinnerLoading" @click="preview()">{{$t('Execute time')}}</x-button>
<div class="text">
{{$t('Timing')}}
</div>
......@@ -46,7 +46,7 @@
</div>
</div>
<div class="clearfix list">
<div style = "padding-left: 150px;">未来五次执行时间</div>
<div style = "padding-left: 150px;">{{$t('Next five execution times')}}</div>
<ul style = "padding-left: 150px;">
<li v-for="time in previewTimes">{{time}}</li>
</ul>
......@@ -90,7 +90,7 @@
</div>
<div class="clearfix list">
<div class="text">
Worker分组
{{$t('Worker group')}}
</div>
<div class="cont">
<m-worker-groups v-model="workerGroupId"></m-worker-groups>
......
......@@ -471,5 +471,7 @@ export default {
'Startup parameter': 'Startup parameter',
'Startup type': 'Startup type',
'warning of timeout': 'warning of timeout',
'Next five execution times': 'Next five execution times',
'Execute time': 'Execute time',
'Complement range': 'Complement range'
}
......@@ -472,5 +472,7 @@ export default {
'Startup parameter': '启动参数',
'Startup type': '启动类型',
'warning of timeout': '超时告警',
'Next five execution times': '接下来五次执行时间',
'Execute time': '执行时间',
'Complement range': '补数范围'
}