Unverified commit 7e8b5c69, authored by bao liang, committed by GitHub

add en-us documents (#672)

* add quick start document

* update test

* add backend deployment document

* add frontend deployment document

* add system manual

* Supplementary translation

Translated the previously untranslated sections

* 1.0.1-release.md

add 1.0.1-release document

* 1.0.2-release.md

add 1.0.2-release document

* 1.0.3-release.md

add 1.0.3-release document

* 1.1.0-release.md

add 1.1.0-release document

* EasyScheduler-FAQ.md

add FAQ document

* Backend development documentation.md

add backend development documentation

* Upgrade documentation.md

add Upgrade documentation

* Frontend development documentation.md

add frontend development documentation
Easy Scheduler Release 1.0.1
===
Easy Scheduler 1.0.1 is the second version in the 1.x series. The updates are as follows:
- 1. Outlook TLS email support
- 2. Resolved the servlet and protobuf jar conflict
- 3. Creating a tenant now also creates the corresponding Linux user
- 4. Fixed the negative re-run time
- 5. Both standalone and cluster modes can be deployed with one click via install.sh
- 6. Added a queue support interface
- 7. Added create_time and update_time fields to escheduler.t_escheduler_queue
Easy Scheduler Release 1.0.2
===
Easy Scheduler 1.0.2 is the third version in the 1.x series. This version adds a scheduling open interface, worker grouping (specifying the group of machines on which a task runs), task flow and service monitoring, and support for Oracle, ClickHouse, and more, as follows:
New features:
===
- [[EasyScheduler-79](https://github.com/analysys/EasyScheduler/issues/79)] Scheduling open interface via token mode; operations can be performed through the API.
- [[EasyScheduler-138](https://github.com/analysys/EasyScheduler/issues/138)] The machine (group) on which a task runs can be specified.
- [[EasyScheduler-139](https://github.com/analysys/EasyScheduler/issues/139)] Task process monitoring and Master, Worker, and Zookeeper operation status monitoring
- [[EasyScheduler-140](https://github.com/analysys/EasyScheduler/issues/140)] Workflow definition: added process timeout alarm
- [[EasyScheduler-134](https://github.com/analysys/EasyScheduler/issues/134)] Task types support Oracle, ClickHouse, SQL Server, and Impala
- [[EasyScheduler-136](https://github.com/analysys/EasyScheduler/issues/136)] The SQL task node can independently select CC mail users
- [[EasyScheduler-141](https://github.com/analysys/EasyScheduler/issues/141)] User management: users can bind queues. The user queue takes precedence over the tenant queue; if the user queue is empty, the tenant queue is used.
Enhanced:
===
- [[EasyScheduler-154](https://github.com/analysys/EasyScheduler/issues/154)] Tenant codes may consist of pure numbers or contain underscores
Fixes:
===
- [[EasyScheduler-135](https://github.com/analysys/EasyScheduler/issues/135)] The Python task can now specify the Python version
- [[EasyScheduler-125](https://github.com/analysys/EasyScheduler/issues/125)] The mobile phone number field in user accounts did not recognize China Unicom's newest 166 number prefix
- [[EasyScheduler-178](https://github.com/analysys/EasyScheduler/issues/178)] Fixed subtle spelling mistakes in ProcessDao
- [[EasyScheduler-129](https://github.com/analysys/EasyScheduler/issues/129)] Tenant codes containing underscores and other special characters could not pass validation
Thanks:
===
Last but not least, this version would not have been born without the contributions of the following partners:
Baoqi , chubbyjiang , coreychen , chgxtony, cmdares , datuzi , dingchao, fanguanqun , 风清扬, gaojun416 , googlechorme, hyperknob , hujiang75277381 , huanzui , kinssun, ivivi727 ,jimmy, jiangzhx , kevin5210 , lidongdai , lshmouse , lenboo, lyf198972 , lgcareer , lzy305 , moranrr , millionfor , mazhong8808, programlief, qiaozhanwei , roy110 , swxchappy , sherlock111 , samz406 , swxchappy, qq389401879 , lzy305, vkingnew, William-GuoWei , woniulinux, yyl861, zhangxin1988, yangjiajun2014, yangqinlong, yangjiajun2014, zhzhenqin, zhangluck, zhanghaicheng1, zhuyizhizhi
And many enthusiastic partners in the WeChat group! Thank you very much!
Easy Scheduler Release 1.0.3
===
Easy Scheduler 1.0.3 is the fourth version in the 1.x series.
Enhanced:
===
- [[EasyScheduler-482]](https://github.com/analysys/EasyScheduler/issues/482) The SQL task mail header now supports custom variables
- [[EasyScheduler-483]](https://github.com/analysys/EasyScheduler/issues/483) If a SQL task fails to send mail, the SQL task is marked as failed
- [[EasyScheduler-484]](https://github.com/analysys/EasyScheduler/issues/484) Modified the replacement rule for custom variables in SQL tasks; replacement of multiple single quotes and double quotes is now supported.
- [[EasyScheduler-485]](https://github.com/analysys/EasyScheduler/issues/485) When creating a resource file, added a check for whether the file already exists on HDFS
Fixes:
===
- [[EasyScheduler-198]](https://github.com/analysys/EasyScheduler/issues/198) The process definition list is sorted by scheduling status and update time
- [[EasyScheduler-419]](https://github.com/analysys/EasyScheduler/issues/419) Fixed online file creation returning success even though the HDFS file was not created
- [[EasyScheduler-481]](https://github.com/analysys/EasyScheduler/issues/481) Fixed the "job does not exist" problem
- [[EasyScheduler-425]](https://github.com/analysys/EasyScheduler/issues/425) When killing a task, its child processes are killed as well
- [[EasyScheduler-422]](https://github.com/analysys/EasyScheduler/issues/422) Fixed an issue where the update time and size were not updated when updating resource files
- [[EasyScheduler-431]](https://github.com/analysys/EasyScheduler/issues/431) Fixed an issue where deleting a tenant failed if HDFS was not started
- [[EasyScheduler-485]](https://github.com/analysys/EasyScheduler/issues/486) When the shell process exits, wait until the YARN application state is final before judging the task state
Thanks:
===
Last but not least, this version would not have been born without the contributions of the following partners:
Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879,
feloxx, coding-now, hymzcn, nysyxxg, chgxtony
And many enthusiastic partners in the WeChat group! Thank you very much!
Easy Scheduler Release 1.1.0
===
Easy Scheduler 1.1.0 is the first release in the 1.1.x series.
New features:
===
- [[EasyScheduler-391](https://github.com/analysys/EasyScheduler/issues/391)] Run a process under a specified tenant user
- [[EasyScheduler-288](https://github.com/analysys/EasyScheduler/issues/288)] Enterprise WeChat alert support (feature/qiye_weixin)
- [[EasyScheduler-189](https://github.com/analysys/EasyScheduler/issues/189)] Security support such as Kerberos
- [[EasyScheduler-398](https://github.com/analysys/EasyScheduler/issues/398)] The administrator, with a tenant bound (install.sh sets a default tenant), can create resources, projects and data sources (limited to one administrator)
- [[EasyScheduler-293](https://github.com/analysys/EasyScheduler/issues/293)] The parameters selected when running a process can now be viewed and are saved
- [[EasyScheduler-401](https://github.com/analysys/EasyScheduler/issues/401)] Scheduling is easily set to every second; after a schedule is created, the next trigger time is displayed on the page.
- [[EasyScheduler-493](https://github.com/analysys/EasyScheduler/pull/493)] Added data source Kerberos authentication, FAQ updates, and S3 resource upload
Enhanced:
===
- [[EasyScheduler-227](https://github.com/analysys/EasyScheduler/issues/227)] Upgraded spring-boot to 2.1.x and spring to 5.x
- [[EasyScheduler-434](https://github.com/analysys/EasyScheduler/issues/434)] The number of worker nodes was inconsistent between zk and mysql
- [[EasyScheduler-435](https://github.com/analysys/EasyScheduler/issues/435)] Validation of the email address format
- [[EasyScheduler-441](https://github.com/analysys/EasyScheduler/issues/441)] Nodes prohibited from running are excluded from completed-node detection
- [[EasyScheduler-400](https://github.com/analysys/EasyScheduler/issues/400)] Home page: queue statistics were inconsistent and command statistics had no data
- [[EasyScheduler-395](https://github.com/analysys/EasyScheduler/issues/395)] For fault-tolerant recovery processes, the status must not be **running**
- [[EasyScheduler-529](https://github.com/analysys/EasyScheduler/issues/529)] Optimized task polling from zookeeper
- [[EasyScheduler-242](https://github.com/analysys/EasyScheduler/issues/242)] Performance problem when the worker-server node fetches tasks
- [[EasyScheduler-352](https://github.com/analysys/EasyScheduler/issues/352)] Worker grouping: queue consumption problem
- [[EasyScheduler-461](https://github.com/analysys/EasyScheduler/issues/461)] When viewing data source parameters, account and password information needs to be encrypted
- [[EasyScheduler-396](https://github.com/analysys/EasyScheduler/issues/396)] Dockerfile optimization, and linked the Dockerfile with GitHub for automatic image builds
- [[EasyScheduler-389](https://github.com/analysys/EasyScheduler/issues/389)] The service monitor could not detect master/worker changes
- [[EasyScheduler-511](https://github.com/analysys/EasyScheduler/issues/511)] Support resuming a process from stopped/killed nodes.
- [[EasyScheduler-399](https://github.com/analysys/EasyScheduler/issues/399)] HadoopUtils performs operations as the specified user instead of the deployment user
Fixes:
===
- [[EasyScheduler-394](https://github.com/analysys/EasyScheduler/issues/394)] When the master & worker are deployed on the same machine and the master & worker services are restarted, previously scheduled tasks could not be scheduled.
- [[EasyScheduler-469](https://github.com/analysys/EasyScheduler/issues/469)] Fixed naming errors on the monitor page
- [[EasyScheduler-392](https://github.com/analysys/EasyScheduler/issues/392)] Fixed the email regex check
- [[EasyScheduler-405](https://github.com/analysys/EasyScheduler/issues/405)] On the schedule add/edit page, the start time and end time can no longer be the same
- [[EasyScheduler-517](https://github.com/analysys/EasyScheduler/issues/517)] Complement (backfill) - sub-workflow - time parameter
- [[EasyScheduler-532](https://github.com/analysys/EasyScheduler/issues/532)] Fixed the problem that the Python node did not execute
- [[EasyScheduler-543](https://github.com/analysys/EasyScheduler/issues/543)] Improved the safety of data source connection parameters
- [[EasyScheduler-569](https://github.com/analysys/EasyScheduler/issues/569)] Scheduled tasks could not really be stopped
- [[EasyScheduler-463](https://github.com/analysys/EasyScheduler/issues/463)] Email validation did not support addresses with longer suffixes
Thanks:
===
Last but not least, this version would not have been born without the contributions of the following partners:
Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879, chgxtony, Stanfan, lfyee, thisnew, hujiang75277381, sunnyingit, lgbo-ustc, ivivi, lzy305, JackIllkid, telltime, lipengbo2018, wuchunfu, telltime
And many enthusiastic partners in the WeChat group! Thank you very much!
# Backend Deployment Document
There are two deployment modes for the backend:
- 1. Automated deployment
- 2. Compile the source code, then deploy
## 1、Preparations
Download the latest installation packages from [gitee download](https://gitee.com/easyscheduler/EasyScheduler/attach_files/): escheduler-backend-x.x.x.tar.gz (back end, referred to as escheduler-backend) and escheduler-ui-x.x.x.tar.gz (front end, referred to as escheduler-ui)
#### Preparations 1: Installation of basic software (self-installation of required items)
* [Mysql](http://geek.analysys.cn/topic/124) (5.5+) : Mandatory
* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : Mandatory
* [ZooKeeper](https://www.jianshu.com/p/de90172ea680) (3.4.6+) : Mandatory
* [Hadoop](https://blog.csdn.net/Evankaka/article/details/51612437) (2.6+) : Optional; required if you use the resource upload function or MapReduce task submission (uploaded resource files are currently stored on HDFS)
* [Hive](https://staroon.pro/2017/12/09/HiveInstall/) (1.2.1) : Optional; required for Hive task submission
* Spark(1.x,2.x) : Optional; required for Spark task submission
* PostgreSQL(8.2.15+) : Optional; required if you use PostgreSQL stored procedures
```
Note: Easy Scheduler itself does not rely on Hadoop, Hive, Spark, PostgreSQL, but only calls their Client to run the corresponding tasks.
```
#### Preparations 2: Create deployment users
- Create the deployment user on every machine that requires scheduling deployment. Because the worker service executes jobs via sudo -u {linux-user}, the deployment user needs sudo privileges and must be password-free.
```Deployment account
vi /etc/sudoers
# For example, the deployment user is an escheduler account
escheduler  ALL=(ALL)  NOPASSWD: ALL
# And you need to comment out the Defaults requiretty line
#Defaults    requiretty
```
#### Preparations 3: SSH Secret-Free Configuration
Configure password-free SSH login from the deployment machine to the other installation machines. If you install easyscheduler on the deployment machine itself, you also need to configure password-free SSH login to localhost.
- [Connect the host and other machines SSH](http://geek.analysys.cn/topic/113)
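A minimal sketch of one way to set this up for the deployment user (the host address is a placeholder; repeat `ssh-copy-id` for every machine, including the deployment machine itself):
```shell
# generate a key pair for the deployment user (skip if one already exists)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# copy the public key to each target machine
ssh-copy-id escheduler@192.168.xx.xx
```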
#### Preparations 4: database initialization
* Create databases and accounts
Enter the MySQL command line with the following command:
> mysql -h {host} -u {user} -p{password}
Then execute the following commands to create the database and account:
```sql
CREATE DATABASE escheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
flush privileges;
```
* For versions 1.0.0 and 1.0.1, create the tables and import the basic data
Note: the scripts are located at escheduler-backend/sql/escheduler.sql and escheduler-backend/sql/quartz.sql
```shell
mysql -h {host} -u {user} -p{password} -D {db} < escheduler.sql
mysql -h {host} -u {user} -p{password} -D {db} < quartz.sql
```
* For version 1.0.2 and later, create the tables and import the basic data
Modify the following attributes in conf/dao/data_source.properties
```
spring.datasource.url
spring.datasource.username
spring.datasource.password
```
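For example, a filled-in configuration might look like the following (host, port, database name, user and password are placeholders; the JDBC URL shown is only illustrative):
```
spring.datasource.url=jdbc:mysql://192.168.xx.xx:3306/escheduler?characterEncoding=UTF-8
spring.datasource.username={user}
spring.datasource.password={password}
```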
Execute scripts for creating tables and importing basic data
```
sh ./script/create_escheduler.sh
```
#### Preparations 5: Modify the deployment directory permissions and operation parameters
Let's first get a general idea of the role of files (folders) in the escheduler-backend directory after decompression.
```directory
bin : Basic service startup script
conf : Project Profile
lib : The project relies on jar packages, including individual module jars and third-party jars
script : Cluster Start, Stop and Service Monitor Start and Stop scripts
sql : The project relies on SQL files
install.sh : One-click deployment script
```
- Modify permissions (please modify the deployUser to the corresponding deployment user) so that the deployment user has operational privileges on the escheduler-backend directory
`sudo chown -R deployUser:deployUser escheduler-backend`
- Modify the `.escheduler_env.sh` environment variable file in the conf/env/ directory (a sample is shown after this list)
- Modify deployment parameters (depending on your server and business situation):
    - Modify the parameters in **install.sh** to the values required by your environment
    - monitorServerState: a switch added in version 1.0.3 that controls whether to start the self-restart monitoring script (it monitors master and worker status and restarts them automatically if they go offline). The default value "false" means the script is not started; change it to "true" if you need it.
    - hdfsStartupSate: a switch that controls whether HDFS is used. The default value "false" means HDFS is not started. If you change it to "true", you need to create the HDFS root path yourself, i.e. hdfsPath in install.sh.
    - If you use HDFS-related functions, you need to copy **hdfs-site.xml** and **core-site.xml** to the conf directory
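For reference, a sketch of what `conf/env/.escheduler_env.sh` might contain. The variable names follow the PATH export shown in the FAQ below; all paths are placeholders for your own installation:
```shell
export HADOOP_HOME=/opt/soft/hadoop
export SPARK_HOME1=/opt/soft/spark1
export SPARK_HOME2=/opt/soft/spark2
export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/opt/soft/java
export HIVE_HOME=/opt/soft/hive
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH
```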
## 2、Deployment
Automated deployment is recommended, and experienced partners can use source deployment as well.
### 2.1 Automated Deployment
- Install zookeeper tools
`pip install kazoo`
- Switch to deployment user, one-click deployment
`sh install.sh`
- Use the jps command to see if the service is started (jps comes with Java JDK)
```services
MasterServer ----- Master Service
WorkerServer ----- Worker Service
LoggerServer ----- Logger Service
ApiApplicationServer ----- API Service
AlertServer ----- Alert Service
```
If all five services are present, the automatic deployment was successful.
After successful deployment, the logs are stored in the logs folder and can be viewed there.
```log path
logs/
├── escheduler-alert-server.log
├── escheduler-master-server.log
├── escheduler-worker-server.log
├── escheduler-api-server.log
└── escheduler-logger-server.log
```
### 2.2 Compile source code to deploy
After downloading the release version of the source package, unzip it into the root directory
* Execute the compilation command:
```
mvn -U clean package assembly:assembly -Dmaven.test.skip=true
```
* View directory
After normal compilation, target/escheduler-{version}/ is generated in the current directory
### 2.3 Commonly used service start and stop commands (for the purpose of each service, please refer to the System Architecture Design document)
* Stop all cluster services with one command
` sh ./bin/stop_all.sh`
* Start all cluster services with one command
` sh ./bin/start_all.sh`
* start and stop Master
```start master
sh ./bin/escheduler-daemon.sh start master-server
sh ./bin/escheduler-daemon.sh stop master-server
```
* start and stop Worker
```start worker
sh ./bin/escheduler-daemon.sh start worker-server
sh ./bin/escheduler-daemon.sh stop worker-server
```
* start and stop Api
```start Api
sh ./bin/escheduler-daemon.sh start api-server
sh ./bin/escheduler-daemon.sh stop api-server
```
* start and stop Logger
```start Logger
sh ./bin/escheduler-daemon.sh start logger-server
sh ./bin/escheduler-daemon.sh stop logger-server
```
* start and stop Alert
```start Alert
sh ./bin/escheduler-daemon.sh start alert-server
sh ./bin/escheduler-daemon.sh stop alert-server
```
## 3、Database Upgrade
Database upgrade is a function added in version 1.0.2. The database can be upgraded automatically by executing the following commands
```upgrade
sh ./script/upgrade_escheduler.sh
```
# Backend development documentation
## Environmental requirements
* [Mysql](http://geek.analysys.cn/topic/124) (5.5+) : Must be installed
* [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) : Must be installed
* [ZooKeeper](https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper) (3.4.6+) : Must be installed
* [Maven](http://maven.apache.org/download.cgi) (3.3+) : Must be installed
Because the escheduler-rpc module in EasyScheduler uses gRPC, you need to compile with Maven to generate the required classes.
For those who are not familiar with Maven, please refer to: [maven in five minutes](http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html)
Maven installation: http://maven.apache.org/install.html
## Project compilation
After importing the EasyScheduler source code into the development tools such as Idea, first convert to the Maven project (right click and select "Add Framework Support")
* Execute the compile command:
```
mvn -U clean package assembly:assembly -Dmaven.test.skip=true
```
* View directory
After normal compilation, it will generate target/escheduler-{version}/ in the current directory.
```
bin
conf
lib
script
sql
install.sh
```
- Description
```
bin : basic service startup script
conf : project configuration file
lib : the project depends on the jar package, including the various module jars and third-party jars
script : cluster start, stop, and service monitoring start and stop scripts
sql : project depends on sql file
install.sh : one-click deployment script
```
## Q: EasyScheduler service introduction and recommended running memory
A: EasyScheduler consists of 5 services (MasterServer, WorkerServer, ApiServer, AlertServer, LoggerServer) plus the UI.
| Service | Description |
| ------------------------- | ------------------------------------------------------------ |
| MasterServer | Mainly responsible for DAG segmentation and task status monitoring |
| WorkerServer/LoggerServer | Mainly responsible for the submission, execution and update of task status. LoggerServer is used for Rest Api to view logs through RPC |
| ApiServer | Provides the Rest Api service for the UI to call |
| AlertServer | Provide alarm service |
| UI | Front page display |
Note: **Because of the number of services, a single-machine deployment should have at least 4 cores and 16 GB of memory.**
---
## Q: Why can't an administrator create a project?
A: The administrator is currently "**pure management**". There is no tenant, that is, there is no corresponding user on linux, so there is no execution permission, **so there is no project, resource and data source,** so there is no permission to create. **But there are all viewing permissions**. If you need to create a business operation such as a project, **use the administrator to create a tenant and a normal user, and then use the normal user login to operate**. We will release the administrator's creation and execution permissions in version 1.1.0, and the administrator will have all permissions.
---
## Q: Which mailboxes does the system support?
A: Most mailboxes are supported: qq, 163, 126, 139, outlook, aliyun, etc. Both TLS and SSL protocols are supported and can be configured in alert.properties.
---
## Q: What are the common system variable time parameters and how do I use them?
A: Please refer to https://analysys.github.io/easyscheduler_docs_cn/%E7%B3%BB%E7%BB%9F%E4%BD%BF%E7%94%A8%E6%89%8B%E5%86%8C.html#%E7%B3%BB%E7%BB%9F%E5%8F%82%E6%95%B0
---
## Q: pip install kazoo This installation gives an error. Is it necessary to install?
A: kazoo is required by the Python scripts that connect to zookeeper; it must be installed.
---
## Q: How to specify the machine on which a task runs
A: Use **the administrator** to create a Worker group, then **specify the Worker group** when the **process definition is started** or **on the task node**. If none is specified, Default is used; **Default selects any one of the workers in the cluster for task submission and execution.**
---
## Q: Priority of the task
A: We also support **the priority of processes and tasks**. There are five priority levels: **HIGHEST, HIGH, MEDIUM, LOW and LOWEST**. **You can set the priority between different process instances, or set the priority of different task instances within the same process instance.** For details, please refer to the task priority design https://analysys.github.io/easyscheduler_docs_cn/%E7%B3%BB%E7%BB%9F%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1.html#%E7%B3%BB%E7%BB%9F%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1
----
## Q: Escheduler-grpc gives an error
A: Execute in the root directory: mvn -U clean package assembly:assembly -Dmaven.test.skip=true , then refresh the entire project
----
## Q: Does EasyScheduler support running on windows?
A: In theory, **only the Worker needs to run on Linux**. Other services can run normally on Windows. But it is still recommended to deploy on Linux.
-----
## Q: UI compiles node-sass prompt in linux: Error: EACCESS: permission denied, mkdir xxxx
A: Run **npm install node-sass --unsafe-perm** separately, then run **npm install**
---
## Q: UI cannot log in normally.
A: 1, if started with node, check whether the API_BASE configuration in the .env file under escheduler-ui points to the Api Server address.
2, if started with nginx and installed via **install-escheduler-ui.sh**, check whether the proxy_pass configuration in **/etc/nginx/conf.d/escheduler.conf** points to the Api Server address.
3, if the above configuration is correct, check whether the Api Server service is healthy: curl http://192.168.xx.xx:12345/escheduler/users/get-user-info and check the Api Server log. If the log prints cn.escheduler.api.interceptor.LoginHandlerInterceptor:[76] - session info is null, the Api Server service is running normally (see the example after this list).
4, if there is still no problem found above, check whether **server.context-path and server.port** in **application.properties** are configured correctly.
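For example, the check in step 3 can be run from the command line (replace the host and port with your own Api Server address):
```shell
curl http://192.168.xx.xx:12345/escheduler/users/get-user-info
```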
---
## Q: After the process definition is manually started or scheduled, no process instance is generated.
A: 1, first **check whether the MasterServer service exists via jps**, or check from the service monitoring page whether a master service is registered in zk (see the check below).
2, if there is a master service, check **the command status statistics** or whether new records were added to **t_escheduler_error_command**; if so, **please check the message field.**
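For example, a quick check for the Master process (jps ships with the JDK):
```shell
jps | grep MasterServer
```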
---
## Q : The task status stays in the "submitted successfully" state.
A: 1, **first check whether the WorkerServer service exists via jps**, or check from the service monitoring page whether a worker service is registered in zk.
2, if the **WorkerServer** service is normal, **check whether the MasterServer put the task into the zk queue; check the MasterServer log and the zk queue to see whether the task is blocked.**
3, if neither is the problem, check whether a Worker group was specified but **none of the machines in that group is online**.
---
## Q: Is there a Docker image and a Dockerfile?
A: Yes, a Docker image and a Dockerfile are provided.
Docker image address: https://hub.docker.com/r/escheduler/escheduler_images
Dockerfile address: https://github.com/qiaozhanwei/escheduler_dockerfile/tree/master/docker_escheduler
------
## Q : What needs attention in install.sh
A: 1, if a replacement variable contains special characters, **escape them with the \ character** (see the example after this list)
2, installPath="/data1_1T/escheduler": **this directory cannot be the same as the directory from which install.sh is currently being run.**
3, deployUser="escheduler": **the deployment user must have sudo privileges**, because the worker executes jobs via sudo -u tenant sh xxx.command
4, monitorServerState="false": whether the service monitoring script is started; by default it is not. **If it is started, the master and worker services are checked every 5 minutes and restarted automatically if the machine is down.**
5, hdfsStartupSate="false": whether to enable the HDFS resource upload function; disabled by default. **If it is not enabled, the resource center cannot be used.** If enabled, you need to configure fs.defaultFS and the yarn settings in conf/common/hadoop/hadoop.properties. If you use namenode HA, you need to copy core-site.xml and hdfs-site.xml to the conf root directory.
Note: **The 1.0.x versions do not automatically create the hdfs root directory; you need to create it yourself, and the deployment user needs hdfs operation permission.**
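As an illustration of point 1, a value containing a special character such as `$` could be written like this (the variable name and password are made-up examples, not taken from a real install.sh):
```shell
# escape special characters such as $ with a backslash
mysqlPassword="abc\$123"
```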
---
## Q : Exception when taking a process definition or process instance offline
A : For **versions prior to 1.0.4**, modify the code in the cn.escheduler.api.quartz package of escheduler-api as follows.
```
public boolean deleteJob(String jobName, String jobGroupName) {
    lock.writeLock().lock();
    try {
        JobKey jobKey = new JobKey(jobName, jobGroupName);
        if (scheduler.checkExists(jobKey)) {
            logger.info("try to delete job, job name: {}, job group name: {},", jobName, jobGroupName);
            return scheduler.deleteJob(jobKey);
        } else {
            return true;
        }
    } catch (SchedulerException e) {
        logger.error(String.format("delete job : %s failed", jobName), e);
    } finally {
        lock.writeLock().unlock();
    }
    return false;
}
```
---
## Q: Can a tenant created before HDFS is started use the resource center normally?
A: No. If a tenant is created while HDFS is not started, the tenant directory is not registered on HDFS, so later resource operations will report an error.
## Q: With multiple masters and multiple workers, how is fault tolerance handled when a service is lost?
A: **Note:** **The Master monitors both Master and Worker services.**
1, if a Master service is lost, another Master will take over the processes of the failed Master and continue to monitor the Worker task status.
2, if a Worker service is lost, the Master will detect that the Worker is gone; if there are Yarn tasks, they will be killed and retried.
Please see the fault-tolerance design for details: https://analysys.github.io/easyscheduler_docs_cn/%E7%B3%BB%E7%BB%9F%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1.html#%E7%B3%BB%E7%BB%9F%E6%9E%B6%E6%9E%84%E8%AE%BE%E8%AE%A1
---
## Q : Fault tolerance when a machine running both Master and Worker goes down
A: Version 1.0.3 only implements fault tolerance for the Master startup process and does not cover Worker fault tolerance. That is, if a Worker hangs while no Master is alive, that process will have problems. We will add Master and Worker startup fault tolerance in version **1.1.0** to fix this. To work around it manually, set the worker tasks that were running across the restart and have been lost to a failed state, set the running process instance to a failed state, and then resume the process from the failed node.
---
## Q : Scheduling is easily set to execute every second
A : Be careful when setting a schedule. If the first field of the cron expression (* * * * * ? *) is set to *, it means execution every second (see the sketch below). **We will add a list of recently scheduled times in version 1.1.0.** You can check the next 5 run times online at http://cron.qqe2.com/
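For reference, a sketch of a Quartz cron expression that fires once per minute instead of every second (the seven fields are: second, minute, hour, day of month, month, day of week, year):
```
0 * * * * ? *
```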
## Q: Is there a valid time range for timing?
A: Yes. **If the schedule's start time and end time are the same, the schedule is invalid. If the end time is earlier than the current time, the schedule is very likely to be deleted automatically.**
## Q : How many kinds of task dependencies are there?
A: 1, dependencies between tasks within a **DAG**, resolved by splitting the DAG starting **from the nodes with zero in-degree**
2, **dependent task nodes**, which allow cross-process task or process dependencies; please refer to the (DEPENDENT) node: https://analysys.github.io/easyscheduler_docs_cn/%E7%B3%BB%E7%BB%9F%E4%BD%BF%E7%94%A8%E6%89%8B%E5%86%8C.html#%E4%BB%BB%E5%8A%A1%E8%8A%82%E7%82%B9%E7%B1%BB%E5%9E%8B%E5%92%8C%E5%8F%82%E6%95%B0%E8%AE%BE%E7%BD%AE
Note: **Cross-project process or task dependencies are not supported**
## Q: How many ways are there to start a process definition?
A: 1, in **the process definition list**, click the **Start** button.
2, add a schedule to **the process definition list** to start the process definition on a schedule.
3, on the process definition **view or edit** DAG page, right-click any **task node** and start the process definition.
4, you can edit the DAG of a process definition and set the running flag of some tasks to **prohibit running**; when the process definition is started, the connections of those nodes are removed from the DAG.
## Q : How to set the Python version for Python tasks
A: 1, **for versions after 1.0.3**, you only need to modify PYTHON_HOME in conf/env/.escheduler_env.sh
```
export PYTHON_HOME=/bin/python
```
Note: **PYTHON_HOME** here is the absolute path of the python command, not the usual PYTHON_HOME. Also note that when exporting PATH, PYTHON_HOME must be referenced directly:
```
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH
```
2, for versions prior to 1.0.3, the Python task only supports the system's Python version; a specific version cannot be specified.
## Q: A Worker task spawns child processes via sudo -u tenant sh xxx.command; are they killed when the task is killed?
A: Task kill will be added in 1.0.4, and it will kill all child processes spawned by the task.
## Q : How are queues used in EasyScheduler, and what do the user queue and tenant queue mean?
A : Queues in EasyScheduler can be configured on the user or on the tenant. **The queue specified for a user takes precedence over the tenant queue.** For example, for an MR task, the queue is specified via mapreduce.job.queuename.
Note: when specifying the queue this way, the MR job must parse its arguments as follows:
```
Configuration conf = new Configuration();
GenericOptionsParser optionParser = new GenericOptionsParser(conf, args);
String[] remainingArgs = optionParser.getRemainingArgs();
```
For a Spark task, specify the queue with --queue (see the sketch below).
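For example, a Spark submission might carry the queue like this (master, class, jar and queue name are placeholders):
```shell
spark-submit --master yarn --queue my_queue --class com.example.Main example-app.jar
```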
## Q : Master or Worker reports the following alarm
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/master_worker_lack_res.png" width="60%" />
</p>
A : Lower the value of **master.reserved.memory** in conf/master.properties (for example to 0.1), or lower **worker.reserved.memory** in conf/worker.properties (for example to 0.1). An example is shown below.
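For example (0.1 is just the value suggested above):
```
# conf/master.properties
master.reserved.memory=0.1
# conf/worker.properties
worker.reserved.memory=0.1
```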
## Q: The hive version is 1.1.0+cdh5.15.0, and the hive SQL task reports a connection error.
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/cdh_hive_error.png" width="60%" />
</p>
A : In the pom, change the hive-jdbc dependency from
```
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>2.1.0</version>
</dependency>
```
to
```
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.1.0</version>
</dependency>
```
# Front End Deployment Document
The front-end has three deployment modes: automated deployment, manual deployment and compiled source deployment.
## 1、Preparations
#### Download the installation package
Please download the latest version of the installation package, download address: [gitee](https://gitee.com/easyscheduler/EasyScheduler/attach_files/)
After downloading escheduler-ui-x.x.x.tar.gz, decompress it with `tar -zxvf escheduler-ui-x.x.x.tar.gz ./` and enter the `escheduler-ui` directory
## 2、Deployment
Of the following two methods, automated deployment is recommended.
### 2.1 Automated Deployment
Edit the installation file with `vi install-escheduler-ui.sh` in the `escheduler-ui` directory
Change the front-end access port and the back-end proxy interface address
```
# Configure the front-end access port
esc_proxy="8888"
# Configure proxy back-end interface
esc_proxy_port="http://192.168.xx.xx:12345"
```
> Front-end automated deployment relies on the Linux `yum` tool; please install and update `yum` before deployment
Under this directory, execute `./install-escheduler-ui.sh`
### 2.2 Manual Deployment
Install epel source `yum install epel-release -y`
Install Nginx `yum install nginx -y`
> #### Nginx configuration file address
```
/etc/nginx/conf.d/default.conf
```
> #### Configuration information (self-modifying)
```
server {
    listen       8888; # access port
    server_name  localhost;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;
    location / {
        root   /xx/dist; # the dist directory decompressed by the front end above (modify to your own path)
        index  index.html index.htm;
    }
    location /escheduler {
        proxy_pass http://192.168.xx.xx:12345; # interface address (modify to your own)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header x_real_ipP $remote_addr;
        proxy_set_header remote_addr $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_connect_timeout 4s;
        proxy_read_timeout 30s;
        proxy_send_timeout 12s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    #error_page  404              /404.html;
    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
```
> #### Restart the Nginx service
```
systemctl restart nginx
```
#### nginx command
- enable `systemctl enable nginx`
- restart `systemctl restart nginx`
- status `systemctl status nginx`
## Front-end Frequently Asked Questions
#### 1.Upload file size limit
Edit the configuration file `vi /etc/nginx/nginx.conf`
```
# change upload size
client_max_body_size 1024m;
```
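After editing, it is good practice to validate the configuration before restarting nginx:
```shell
nginx -t
systemctl restart nginx
```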
# Front-end development documentation
### Technical selection
```
Vue mvvm framework
Es6 ECMAScript 6.0
Ans-ui Analysys-ui
D3 Visual Library Chart Library
Jsplumb connection plugin library
Lodash high performance JavaScript utility library
```
### Development environment
- #### Node installation
Node package download (note version 8.9.4) `https://nodejs.org/download/release/v8.9.4/`
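After installing, you can verify the version from the command line:
```shell
node -v   # should print v8.9.4 for the version noted above
npm -v
```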
- #### Front-end project construction
Use the command line to `cd` into the `escheduler-ui` project directory and execute `npm install` to pull the project dependency packages.
> If `npm install` is very slow
> you can use the Taobao mirror: run `npm install -g cnpm --registry=https://registry.npm.taobao.org`
> then run `cnpm install`
- Create a new `.env` file to configure the interface that interacts with the backend
Create a new `.env` file in the `escheduler-ui` directory, add the IP address and port of the backend service to the file, and use it to interact with the backend. The contents of the `.env` file are as follows:
```
# Proxy interface address (modified by yourself)
API_BASE = http://192.168.xx.xx:12345
# If you need to access the project with ip, you can remove the "#" (example)
#DEV_HOST = 192.168.xx.xx
```
> ##### ! ! ! Special attention here: if the project reports a "node-sass" error while pulling the dependency packages, run the following command and then run `npm install` again.
```
npm install node-sass --unsafe-perm // install the node-sass dependency separately
```
- #### Development environment operation
- `npm start` project development environment (after startup address http://localhost:8888/#/)
#### Front-end project release
- `npm run build` project packaging (after packaging, the root directory will create a folder called dist for publishing Nginx online)
Running `npm run build` generates the packaged dist folder
Copy it to the corresponding directory of the server (the directory where the front-end service stores static pages)
Visit address `http://localhost:8888/#/`
#### Start with node and the pm2 daemon under Linux
Install pm2 `npm install -g pm2`
In the project's `escheduler-ui` root directory, execute `pm2 start npm -- run dev` to start the project
#### command
- Start `pm2 start npm -- run dev`
- Stop `pm2 stop npm`
- delete `pm2 delete npm`
- Status `pm2 list`
```
[root@localhost escheduler-ui]# pm2 start npm -- run dev
[PM2] Applying action restartProcessId on app [npm](ids: 0)
[PM2] [npm](0) ✓
[PM2] Process successfully started
┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────┼──────────┤
│ npm │ 0 │ N/A │ fork │ 6168 │ online │ 31 │ 0s │ 0% │ 5.6 MB │ root │ disabled │
└──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────┴──────────┘
Use `pm2 show <id|name>` to get more details about an app
```
### Project directory structure
`build` some webpack configurations for packaging and development environment projects
`node_modules` development environment node dependency package
`src` project source files
`src => combo` third-party resource localization for the project; run `npm run combo`, see `build/combo.js` for details
`src => font` font icon library; icons can be added at https://www.iconfont.cn. Note: the font library is maintained through our own secondary development; re-import your own library in `src/sass/common/_font.scss`
`src => images` public image storage
`src => js` js/vue
`src => lib` internal components of the company (the company component library can be deleted after open source)
`src => sass` sass files; one page corresponds to one sass file
`src => view` page files; one page corresponds to one html file
```
> Projects are developed using vue single page application (SPA)
- All page entry files are in the `src/js/conf/${ corresponding page filename => home} index.js` entry file
- The corresponding sass file is in `src/sass/conf/${corresponding page filename => home}/index.scss`
- The corresponding html file is in `src/view/${corresponding page filename => home}/index.html`
```
Public modules and util `src/js/module`
`components` => internal project common components
`download` => download component
`echarts` => chart component
`filter` => filter and vue pipeline
`i18n` => internationalization
`io` => io request encapsulation based on axios
`mixin` => vue mixin public part for disabled operation
`permissions` => permission operation
`util` => tool
### System function module
Home => `http://localhost:8888/#/home`
Project Management => `http://localhost:8888/#/projects/list`
```
| Project Home
| Workflow
- Workflow definition
- Workflow instance
- Task instance
```
Resource Management => `http://localhost:8888/#/resource/file`
```
| File Management
| udf Management
- Resource Management
- Function management
```
Data Source Management => `http://localhost:8888/#/datasource/list`
Security Center => `http://localhost:8888/#/security/tenant`
```
| Tenant Management
| User Management
| Alarm Group Management
- master
- worker
```
User Center => `http://localhost:8888/#/user/account`
## Routing and state management
The project `src/js/conf/home` is divided into
`pages` => route to page directory
```
The page file corresponding to the routing address
```
`router` => route management
```
vue router, the entry file index.js in each page will be registered. Specific operations: https://router.vuejs.org/zh/
```
`store` => status management
```
The page corresponding to each route has a state management file divided into:
actions => mapActions => Details:https://vuex.vuejs.org/zh/guide/actions.html
getters => mapGetters => Details:https://vuex.vuejs.org/zh/guide/getters.html
index => entrance
mutations => mapMutations => Details:https://vuex.vuejs.org/zh/guide/mutations.html
state => mapState => Details:https://vuex.vuejs.org/zh/guide/state.html
Specific action:https://vuex.vuejs.org/zh/
```
## specification
## Vue specification
##### 1.Component name
Component names consist of multiple words connected with a hyphen (-), which avoids conflicts with HTML tags and gives a clearer structure.
```
// positive example
export default {
name: 'page-article-item'
}
```
##### 2.Component files
Common internal components of the project live in `src/js/module/components`; the folder name is the same as the file name. Subcomponents and util tools split out of a common component are placed in the component's internal `_source` folder.
```
└── components
├── header
├── header.vue
└── _source
└── nav.vue
└── util.js
├── conditions
├── conditions.vue
└── _source
└── serach.vue
└── util.js
```
##### 3.Prop
When you define a Prop, always name it in camelCase, and use kebab-case (-) when passing values from the parent component. This follows the conventions of each language: HTML attributes are case-insensitive, so hyphenated names are friendlier there, while camelCase is more natural in JavaScript.
```
// Vue
props: {
articleStatus: Boolean
}
// HTML
<article-item :article-status="true"></article-item>
```
The definition of Prop should specify its type, defaults, and validation as much as possible.
Example:
```
props: {
attrM: Number,
attrA: {
type: String,
required: true
},
attrZ: {
type: Object,
// The default value of the array/object should be returned by a factory function
default: function () {
return {
msg: 'achieve you and me'
}
}
},
attrE: {
type: String,
validator: function (v) {
return !(['success', 'fail'].indexOf(v) === -1)
}
}
}
```
##### 4.v-for
When performing v-for traversal, you should always bring a key value to make rendering more efficient when updating the DOM.
```
<ul>
<li v-for="item in list" :key="item.id">
{{ item.title }}
</li>
</ul>
```
Avoid using v-for on the same element as v-if (`for example: <li>`), because v-for has a higher priority than v-if. To avoid unnecessary computation and rendering, move the v-if up onto the container's parent element.
```
<ul v-if="showList">
<li v-for="item in list" :key="item.id">
{{ item.title }}
</li>
</ul>
```
##### 5.v-if / v-else-if / v-else
If the elements controlled by the same group of v-if logic are of the same type, Vue reuses the identical parts for more efficient element switching, `such as: value`. To avoid unintended reuse, add a key to such elements to identify them.
```
<div v-if="hasData" key="mazey-data">
<span>{{ mazeyData }}</span>
</div>
<div v-else key="mazey-none">
<span>no data</span>
</div>
```
##### 6.Instruction abbreviation
For consistency, directive shorthands are always used. Writing out `v-bind` and `v-on` is not wrong; this is only a unified convention.
```
<input :value="mazeyUser" @click="verifyUser">
```
##### 7.Top-level element order of single file components
Styles are bundled into one file, so styles defined in a single vue file will also affect selectors with the same name in other files; therefore every component should have a unique top-level class name before it is created.
Note: the sass plugin has been added to the project, and sass syntax can be written directly in a single vue file.
For uniformity and ease of reading, they should be placed in the order of `<template>``<script>``<style>`.
```
<template>
<div class="test-model">
test
</div>
</template>
<script>
export default {
name: "test",
data() {
return {}
},
props: {},
methods: {},
watch: {},
beforeCreate() {
},
created() {
},
beforeMount() {
},
mounted() {
},
beforeUpdate() {
},
updated() {
},
beforeDestroy() {
},
destroyed() {
},
computed: {},
components: {},
}
</script>
<style lang="scss" rel="stylesheet/scss">
.test-model {
}
</style>
```
## JavaScript specification
##### 1.var / let / const
It is recommended to no longer use var; use let / const instead, preferring const. Every variable must be declared before it is used, except that functions defined with function are hoisted and can be placed anywhere.
##### 2.quotes
```
const foo = 'after division'
const bar = `${foo}, front-end engineer`
```
##### 3.function
Anonymous functions uniformly use arrow functions. When there are multiple parameters or return values, prefer object destructuring assignment.
```
function getPersonInfo ({name, sex}) {
  // ...
  return {name, sex}
}
```
Function names use camelCase. Names starting with a capital letter are constructors; names starting with a lowercase letter are ordinary functions, and the new operator should not be used on ordinary functions.
##### 4.object
```
const foo = {a: 0, b: 1}
const bar = JSON.parse(JSON.stringify(foo))
const foo = {a: 0, b: 1}
const bar = {...foo, c: 2}
const foo = {a: 3}
Object.assign(foo, {b: 4})
const myMap = new Map([])
for (let [key, value] of myMap.entries()) {
// ...
}
```
##### 5.module
Unified management of project modules using import / export.
```
// lib.js
export default {}
// app.js
import app from './lib'
```
Import is placed at the top of the file.
If the module has only one export, use `export default`; otherwise do not.
## HTML / CSS
##### 1.Label
Do not write the type attribute when referencing external CSS or JavaScript. HTML5 defaults to text/css and text/javascript, so there is no need to specify it.
```
<link rel="stylesheet" href="//www.test.com/css/test.css">
<script src="//www.test.com/js/test.js"></script>
```
##### 2.Naming
Class and ID names should be semantic so that their purpose is clear from the name; multiple words are connected with a hyphen.
```
// positive example
.test-header{
font-size: 20px;
}
```
##### 3.Attribute abbreviation
Use CSS shorthand properties as much as possible to improve efficiency and readability.
```
// counter example
border-width: 1px;
border-style: solid;
border-color: #ccc;
// positive example
border: 1px solid #ccc;
```
##### 4.Document type
The HTML5 standard should always be used.
```
<!DOCTYPE html>
```
##### 5.Notes
Each module file should start with a block comment.
```
/**
* @module mazey/api
* @author Mazey <mazey@mazey.net>
* @description test.
* */
```
## interface
##### All interfaces are returned as Promise
Note that a non-zero code indicates an error and should be handled in catch
```
const test = () => {
return new Promise((resolve, reject) => {
resolve({
a:1
})
})
}
// call
test().then(res => {
console.log(res)
// {a:1}
})
```
Normal return
```
{
code: 0,
data: {},
msg: 'success'
}
```
Error return
```
{
code: 10000,
data: {},
msg: 'failed'
}
```
##### Related interface path
dag related interface `src/js/conf/home/store/dag/actions.js`
Data Source Center Related Interfaces `src/js/conf/home/store/datasource/actions.js`
Project Management Related Interfaces `src/js/conf/home/store/projects/actions.js`
Resource Center Related Interfaces `src/js/conf/home/store/resource/actions.js`
Security Center Related Interfaces `src/js/conf/home/store/security/actions.js`
User Center Related Interfaces `src/js/conf/home/store/user/actions.js`
## Extended development
##### 1.Add node
(1) First place the node's icon in the `src/js/conf/home/pages/dag/img` folder, named `toolbar_${the node type English name defined in the backend, for example: SHELL}.png`
(2) Find the `tasksType` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
```
'DEPENDENT': {  // the node type English name defined in the backend is used as the key
  desc: 'DEPENDENT',  // tooltip desc
  color: '#2FBFD8'  // the color, mainly used for the tree and gantt views
}
```
(3) Add a `${node type (lowercase)}.vue` file under `src/js/conf/home/pages/dag/_source/formModel/tasks`. The contents of the components related to the current node are written here. Every node component must have a `_verification ()` function; after verification succeeds, it emits the current component's data to the parent component.
```
/**
* Verification
*/
_verification () {
// datasource subcomponent verification
if (!this.$refs.refDs._verifDatasource()) {
return false
}
// verification function
if (!this.method) {
this.$message.warning(`${i18n.$t('Please enter method')}`)
return false
}
// localParams subcomponent validation
if (!this.$refs.refLocalParams._verifProp()) {
return false
}
// store
this.$emit('on-params', {
type: this.type,
datasource: this.datasource,
method: this.method,
localParams: this.localParams
})
return true
}
```
(4) Common components used inside the node component are under `_source`, and `commcon.js` is used to configure public data.
##### 2.Increase the status type
(1) Find the `tasksState` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
```
'WAITTING_DEPEND': {  // the status type defined in the backend is used as the key on the front end
id: 11, // front-end definition id is used as a sort
desc: `${i18n.$t('waiting for dependency')}`, // tooltip desc
color: '#5101be', // The color represented is mainly used for tree and gantt
icoUnicode: '&#xe68c;', // font icon
isSpin: false // whether to rotate (requires code judgment)
}
```
##### 3.Add the action bar tool
(1) Find the `toolOper` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
```
{
code: 'pointer', // tool identifier
icon: '&#xe781;', // tool icon
disable: disable, // disable
desc: `${i18n.$t('Drag node and selected item')}` // tooltip desc
}
```
(2) Tool classes are returned as a constructor `src/js/conf/home/pages/dag/_source/plugIn`
`downChart.js` => dag image download processing
`dragZoom.js` => mouse zoom effect processing
`jsPlumbHandle.js` => drag and drop line processing
`util.js` => belongs to the `plugIn` tool class
The operation is handled in the `src/js/conf/home/pages/dag/_source/dag.js` => `toolbarEvent` event.
##### 4.Add a routing page
(1) First add a routing address`src/js/conf/home/router/index.js` in route management
```
{ // routing entry
path: '/test', // routing address
name: 'test', // alias
component: resolve => require(['../pages/test/index'], resolve), // route corresponding component entry file
meta: {
title: `${i18n.$t('test')} - EasyScheduler` // title display
}
},
```
(2) Create a `test` folder in `src/js/conf/home/pages` and create an `index.vue` entry file in the folder.
This will give you direct access to `http://localhost:8888/#/test`
##### 5.Increase the preset mailboxes
Find `src/lib/localData/email.js`; the preset addresses there are used by the start and scheduled email address inputs for automatic dropdown matching.
```
export default ["test@analysys.com.cn","test1@analysys.com.cn","test3@analysys.com.cn"]
```
##### 6.Authority management and disabled state processing
Permissions use the userType (`ADMIN_USER`/`GENERAL_USER`) returned by the backend `getUserInfo` interface to control whether page operation buttons are `disabled`.
specific operation:`src/js/module/permissions/index.js`
disabled processing:`src/js/module/mixin/disabledState.js`
# Quick Start
* Administrator user login
> Address: 192.168.xx.xx:8888  Username/password: admin/escheduler123
<p align="center">
<img src="https://user-images.githubusercontent.com/48329107/61701549-ee738000-ad70-11e9-8d75-87ce04a0152f.png" width="60%" />
</p>
* Create queue
<p align="center">
<img src="https://user-images.githubusercontent.com/48329107/61701943-896c5a00-ad71-11e9-99b8-a279762f1bc8.png" width="60%" />
</p>
* Create tenant
<p align="center">
<img src="https://user-images.githubusercontent.com/48329107/61702051-bb7dbc00-ad71-11e9-86e1-1c328cafe916.png" width="60%" />
</p>
* Create an ordinary user
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61704402-3517a900-ad76-11e9-865a-6325041d97e2.png" width="60%" />
</p>
* Create an alarm group
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61704553-845dd980-ad76-11e9-85f1-05f33111409e.png" width="60%" />
</p>
* Log in as the ordinary user
> Click the user name in the upper right corner to "log out", then log in again as the ordinary user.
* Project Management - > Create Project - > Click on Project Name
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61704688-dd2d7200-ad76-11e9-82ee-0833b16bd88f.png" width="60%" />
</p>
* Click Workflow Definition - > Create Workflow Definition - > Online Process Definition
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61705638-c425c080-ad78-11e9-8619-6c21b61a24c9.png" width="60%" />
</p>
* Running Process Definition - > Click Workflow Instance - > Click Process Instance Name - > Double-click Task Node - > View Task Execution Log
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61705356-34801200-ad78-11e9-8d60-9b7494231028.png" width="60%" />
</p>
# EasyScheduler upgrade documentation
## 1. Back up the previous version of the file and database
## 2. Stop all services of escheduler
`sh ./script/stop_all.sh`
## 3. Download the new version of the installation package
- [gitee](https://gitee.com/easyscheduler/EasyScheduler/attach_files), download the latest version of the front-end and back-end installation packages (the back end is referred to as escheduler-backend, the front end as escheduler-ui)
- The following upgrade operations need to be performed in the new version of the directory
## 4. Database upgrade
- Modify the following properties in conf/dao/data_source.properties
```
spring.datasource.url
spring.datasource.username
spring.datasource.password
```
- Execute database upgrade script
`sh ./script/upgrade_escheduler.sh`
## 5. Backend service upgrade
- Modify the contents of the install.sh configuration and execute the upgrade script
`sh install.sh`
## 6. Frontend service upgrade
- Overwrite the previous version of the dist directory
- Restart the nginx service
`systemctl restart nginx`
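A minimal sketch of the overwrite step, assuming the front end is served from /xx/dist as in the Nginx configuration above:
```shell
# back up the currently deployed static files, then copy in the new build
cp -r /xx/dist /xx/dist.bak
cp -rf escheduler-ui/dist/* /xx/dist/
```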