Commit a5277827 authored by bao liang, committed by qiaozhanwei

merge dev-db to dev (#1426)

* [dolphinscheduler-1345] [newfeature] Add DB2 Datasource (#1391)

* Fix the problem that the 'queueId' is not present when creating a tenant based on the default queue. (#1409)

* [dolphinscheduler-#1403][bug] improve the check rules (#1408)

1. When a check failed we didn't know which parameter was wrong, because username, password, email and phone were checked together. I refactored the check method; it now returns a failure message for each field.
2. The email check regex in createUser.vue supports [_|\-|\.]?, but the backend server did not. I fixed it, so the frontend and backend now use the same check regex (a small sketch of the backend pattern follows).
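
For reference, a throwaway harness against the backend pattern (the regex is copied from Constants.REGEX_MAIL_NAME as changed later in this diff; the class name and sample addresses are only for illustration):

    import java.util.regex.Pattern;

    public class EmailRegexDemo {
        // same pattern as the updated Constants.REGEX_MAIL_NAME (underscore is now accepted)
        private static final Pattern REGEX_MAIL_NAME = Pattern.compile(
                "^([a-z0-9A-Z]+[_|\\-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");

        public static void main(String[] args) {
            System.out.println(REGEX_MAIL_NAME.matcher("test_user@example.com").matches()); // true
            System.out.println(REGEX_MAIL_NAME.matcher("not-an-email").matches());          // false
        }
    }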

* define version information for jcip-annotations and add groupId to maven-assembly-plugin (#1413)

* "v-for" add key (#1419)

* [dolphinscheduler-#1397] [bug] Resources cannot be previewed or updated (#1406)

When a resource is created, the file suffix is appended to its name, but when the resource is renamed no suffix is added. So when a resource name is updated without its suffix, e.g. "test.sh" => "test", the bug is reproduced.

To fix this bug I added the logic below:
when renaming, if the new name has no suffix, append it; otherwise use the name as given (a short sketch follows).
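
A minimal sketch of that rename logic, using the same names as the ResourcesService change further down in this diff (the surrounding service code and null checks are omitted):

    // originResourceName: the alias currently stored for the resource, e.g. "test.sh"
    // name: the new alias supplied by the user, e.g. "test"
    String suffix = originResourceName.substring(originResourceName.lastIndexOf("."));
    String nameWithSuffix = name;
    if (!name.endsWith(suffix)) {
        // the user omitted the suffix, so carry over the original one
        nameWithSuffix = nameWithSuffix + suffix;
    }
    resource.setAlias(nameWithSuffix);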

* simplify server module configs (#1424)

* move updateTaskState into try/catch block in case of exception

* fix NPE

* using conf.getInt instead of getString

* for AbstractZKClient, remove the log, since the same message is already printed in createZNodePath.
for AlertDao, correct the spelling.

* duplicate

* refactor getTaskWorkerGroupId

* add friendly log

* update heartbeat thread num = 1

* fix the bug that occurs when the worker executes a task via the queue, and stop checking the tenant user in TaskScheduleThread

* 1. move verifyTaskInstanceIsNull after taskInstance
2. keep verifyTenantIsNull/verifyTaskInstanceIsNull clean and readable

* fix the message

* delete before check to avoid KeeperException$NoNodeException

* fix the message

* check processInstance state before delete tenant

* check processInstance state before delete worker group

* refactor

* merge api constants into common constants

* update the resource perm

* update the dataSource perm

* fix CheckUtils.checkUserParams method

* update AlertGroupService, extends from BaseService, remove duplicate methods

* refactor

* modify method name

* add hasProjectAndPerm method

* using checkProject instead of getResultStatus

* delete checkAuth method, using hasProjectAndPerm instead.

* correct spelling

* add transactional for deleteWorkerGroupById

* add Transactional for deleteProcessInstanceById method

* change sqlSessionTemplate singleton

* change sqlSessionTemplate singleton and reformat code

* fix unsuitable error message

* update shutdown hook methods

* fix worker log bug

* fix api server debug mode bug

* upgrade zk version

* delete this line, since zkClient.close() already does the whole thing

* fix master server shutdown error

* downgrade zk version and add FourLetterWordMain class

* fix PathChildrenCache not being closed

* add Transactional for createSession method

* add more detail to the javadoc

* delete App, let spring manage connectionFactory

* add license

* add class Application for test support

* refactor masterServer and workerServer

* add args

* fix the bug where the spring transaction did not work

* remove author

* delete @Bean annotation

* delete master/worker properties

* updates

* rename application.properties to application-dao.properties

* delete this class

* delete master/worker properties and refactor master/worker

* delete unused imports

* merge

* delete unused config
Parent 18e2f753
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# master execute thread num
master.exec.threads=100
# master execute task number in parallel
master.exec.task.number=20
# master heartbeat interval
master.heartbeat.interval=10
# master commit task retry times
master.task.commit.retryTimes=5
# master commit task interval
master.task.commit.interval=100
# only less than cpu avg load, master server can work. default value : the number of cpu cores * 2
#master.max.cpuload.avg=100
# only larger than reserved memory, master server can work. default value : physical memory * 1/10, unit is G.
master.reserved.memory=0.1
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# worker execute thread num
worker.exec.threads=100
# worker heartbeat interval
worker.heartbeat.interval=10
# submit the number of tasks at a time
worker.fetch.task.num=3
# only less than cpu avg load, worker server can work. default value : the number of cpu cores * 2
#worker.max.cpuload.avg=10
# only larger than reserved memory, worker server can work. default value : physical memory * 1/6, unit is G.
worker.reserved.memory=1
\ No newline at end of file
......@@ -405,9 +405,14 @@ public class DataSourceService extends BaseService{
datasource = JSONObject.parseObject(parameter, SQLServerDataSource.class);
Class.forName(Constants.COM_SQLSERVER_JDBC_DRIVER);
break;
case DB2:
datasource = JSONObject.parseObject(parameter, DB2ServerDataSource.class);
Class.forName(Constants.COM_DB2_JDBC_DRIVER);
break;
default:
break;
}
if(datasource != null){
connection = DriverManager.getConnection(datasource.getJdbcUrl(), datasource.getUser(), datasource.getPassword());
}
......@@ -487,6 +492,7 @@ public class DataSourceService extends BaseService{
separator = "&";
} else if (Constants.HIVE.equals(type.name())
|| Constants.SPARK.equals(type.name())
|| Constants.DB2.equals(type.name())
|| Constants.SQLSERVER.equals(type.name())) {
separator = ";";
}
......@@ -509,7 +515,9 @@ public class DataSourceService extends BaseService{
for (Map.Entry<String, String> entry: map.entrySet()) {
otherSb.append(String.format("%s=%s%s", entry.getKey(), entry.getValue(), separator));
}
otherSb.deleteCharAt(otherSb.length() - 1);
if (!Constants.DB2.equals(type.name())) {
otherSb.deleteCharAt(otherSb.length() - 1);
}
parameterMap.put(Constants.OTHER, otherSb);
}
......@@ -549,6 +557,9 @@ public class DataSourceService extends BaseService{
} else if (Constants.SQLSERVER.equals(type.name())) {
sb.append(Constants.JDBC_SQLSERVER);
sb.append(host).append(":").append(port);
}else if (Constants.DB2.equals(type.name())) {
sb.append(Constants.JDBC_DB2);
sb.append(host).append(":").append(port);
}
return sb.toString();
......
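
For illustration, the pieces above combine into a DB2 JDBC URL along these lines; the host, port, database and connection properties are hypothetical values, not taken from the patch:

    public class Db2UrlExample {
        public static void main(String[] args) {
            // built by the DB2 branch above: Constants.JDBC_DB2 + host + ":" + port
            String address = "jdbc:db2://192.168.1.100:50000";   // hypothetical host/port
            String database = "sample";                          // hypothetical database name
            String other = "currentSchema=DB2INST1;";            // trailing ';' is deliberately kept for DB2

            // DB2ServerDataSource.getJdbcUrl() (added later in this diff) appends "/" + database and ":" + other
            String jdbcUrl = address + "/" + database + ":" + other;
            System.out.println(jdbcUrl); // jdbc:db2://192.168.1.100:50000/sample:currentSchema=DB2INST1;
        }
    }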
......@@ -235,9 +235,18 @@ public class ResourcesService extends BaseService {
}
}
// updateProcessInstance data
//get the file suffix
String suffix = originResourceName.substring(originResourceName.lastIndexOf("."));
//if the name without suffix then add it ,else use the origin name
String nameWithSuffix = name;
if(!name.endsWith(suffix)){
nameWithSuffix = nameWithSuffix + suffix;
}
// updateResource data
Date now = new Date();
resource.setAlias(name);
resource.setAlias(nameWithSuffix);
resource.setDescription(desc);
resource.setUpdateTime(now);
......
......@@ -96,8 +96,12 @@ public class UsersService extends BaseService {
String queue) throws Exception {
Map<String, Object> result = new HashMap<>(5);
if (!CheckUtils.checkUserParams(userName, userPassword, email, phone)) {
putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR,userName);
//check all user params
String msg = this.checkUserParams(userName, userPassword, email, phone);
if (!StringUtils.isEmpty(msg)) {
putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR,msg);
return result;
}
if (!isAdmin(loginUser)) {
......@@ -687,4 +691,32 @@ public class UsersService extends BaseService {
private boolean checkTenantExists(int tenantId) {
return tenantMapper.queryById(tenantId) != null ? true : false;
}
/**
*
* @param userName
* @param password
* @param email
* @param phone
* @return if check failed return the field, otherwise return null
*/
private String checkUserParams(String userName, String password, String email, String phone) {
String msg = null;
if (!CheckUtils.checkUserName(userName)) {
msg = userName;
} else if (!CheckUtils.checkPassword(password)) {
msg = password;
} else if (!CheckUtils.checkEmail(email)) {
msg = email;
} else if (!CheckUtils.checkPhone(phone)) {
msg = phone;
}
return msg;
}
}
......@@ -31,16 +31,6 @@ public final class Constants {
*/
public static final String ZOOKEEPER_PROPERTIES_PATH = "zookeeper.properties";
/**
* worker properties path
*/
public static final String WORKER_PROPERTIES_PATH = "worker.properties";
/**
* master properties path
*/
public static final String MASTER_PROPERTIES_PATH = "master.properties";
/**
* hadoop properties path
*/
......@@ -330,7 +320,7 @@ public final class Constants {
/**
* email regex
*/
public static final Pattern REGEX_MAIL_NAME = Pattern.compile("^([a-z0-9A-Z]+[-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");
public static final Pattern REGEX_MAIL_NAME = Pattern.compile("^([a-z0-9A-Z]+[_|\\-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");
/**
* read permission
......@@ -652,6 +642,13 @@ public final class Constants {
*/
public static final String JDBC_SQLSERVER_CLASS_NAME = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
/**
* DB2
*/
public static final String JDBC_DB2_CLASS_NAME = "com.ibm.db2.jcc.DB2Driver";
/**
* spark params constant
*/
......@@ -1003,6 +1000,7 @@ public final class Constants {
public static final String COM_CLICKHOUSE_JDBC_DRIVER = "ru.yandex.clickhouse.ClickHouseDriver";
public static final String COM_ORACLE_JDBC_DRIVER = "oracle.jdbc.driver.OracleDriver";
public static final String COM_SQLSERVER_JDBC_DRIVER = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
public static final String COM_DB2_JDBC_DRIVER = "com.ibm.db2.jcc.DB2Driver";
/**
* database type
......@@ -1014,6 +1012,7 @@ public final class Constants {
public static final String CLICKHOUSE = "CLICKHOUSE";
public static final String ORACLE = "ORACLE";
public static final String SQLSERVER = "SQLSERVER";
public static final String DB2 = "DB2";
/**
* jdbc url
......@@ -1024,6 +1023,7 @@ public final class Constants {
public static final String JDBC_CLICKHOUSE = "jdbc:clickhouse://";
public static final String JDBC_ORACLE = "jdbc:oracle:thin:@//";
public static final String JDBC_SQLSERVER = "jdbc:sqlserver://";
public static final String JDBC_DB2 = "jdbc:db2://";
public static final String ADDRESS = "address";
......
......@@ -32,6 +32,7 @@ public enum DbType {
* 4 clickhouse
* 5 oracle
* 6 sqlserver
* 7 db2
*/
MYSQL(0, "mysql"),
POSTGRESQL(1, "postgresql"),
......@@ -39,7 +40,8 @@ public enum DbType {
SPARK(3, "spark"),
CLICKHOUSE(4, "clickhouse"),
ORACLE(5, "oracle"),
SQLSERVER(6, "sqlserver");
SQLSERVER(6, "sqlserver"),
DB2(7, "db2");
DbType(int code, String descp){
this.code = code;
......
......@@ -14,36 +14,62 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master;
package org.apache.dolphinscheduler.common.job.db;
import org.apache.dolphinscheduler.common.IStoppable;
import org.apache.commons.configuration.Configuration;
import org.apache.commons.lang3.StringUtils;
import org.apache.dolphinscheduler.common.Constants;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.ComponentScan;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
/**
* master server
* data source of DB2 Server
*/
public abstract class AbstractServer implements IStoppable {
public class DB2ServerDataSource extends BaseDataSource {
private static final Logger logger = LoggerFactory.getLogger(DB2ServerDataSource.class);
/**
* abstract server configuration
* gets the JDBC url for the data source connection
* @return
*/
protected static Configuration conf;
@Override
public String getJdbcUrl() {
String jdbcUrl = getAddress();
if (jdbcUrl.lastIndexOf("/") != (jdbcUrl.length() - 1)) {
jdbcUrl += "/";
}
/**
* heartbeat interval, unit second
*/
protected int heartBeatInterval;
jdbcUrl += getDatabase();
if (StringUtils.isNotEmpty(getOther())) {
jdbcUrl += ":" + getOther();
}
return jdbcUrl;
}
/**
* gracefully stop
* @param cause why stopping
* test whether the data source can be connected successfully
* @throws Exception
*/
@Override
public abstract void stop(String cause);
}
public void isConnectable() throws Exception {
Connection con = null;
try {
Class.forName(Constants.COM_DB2_JDBC_DRIVER);
con = DriverManager.getConnection(getJdbcUrl(), getUser(), getPassword());
} finally {
if (con != null) {
try {
con.close();
} catch (SQLException e) {
logger.error("DB2 Server datasource try conn close conn error", e);
throw e;
}
}
}
}
}
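
A rough usage sketch of the new data source; the setAddress/setDatabase/setUser/setPassword setters are assumed to come from BaseDataSource, which is not shown in this diff, and all connection values are hypothetical:

    public class Db2ConnectExample {
        public static void main(String[] args) throws Exception {
            DB2ServerDataSource db2 = new DB2ServerDataSource();
            db2.setAddress("jdbc:db2://192.168.1.100:50000"); // assumed setter, hypothetical host/port
            db2.setDatabase("sample");                        // assumed setter
            db2.setUser("db2inst1");                          // assumed setter
            db2.setPassword("******");                        // assumed setter

            System.out.println(db2.getJdbcUrl()); // jdbc:db2://192.168.1.100:50000/sample
            db2.isConnectable();                  // throws if the driver is missing or the connection fails
        }
    }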
......@@ -46,6 +46,8 @@ public class DataSourceFactory {
return JSONUtils.parseObject(parameter, OracleDataSource.class);
case SQLSERVER:
return JSONUtils.parseObject(parameter, SQLServerDataSource.class);
case DB2:
return JSONUtils.parseObject(parameter, DB2ServerDataSource.class);
default:
return null;
}
......@@ -83,6 +85,9 @@ public class DataSourceFactory {
case SQLSERVER:
Class.forName(Constants.JDBC_SQLSERVER_CLASS_NAME);
break;
case DB2:
Class.forName(Constants.JDBC_DB2_CLASS_NAME);
break;
default:
logger.error("not support sql type: {},can't load class", dbType);
throw new IllegalArgumentException("not support sql type,can't load class");
......
......@@ -267,6 +267,23 @@ public class OSUtils {
return os.startsWith("Windows");
}
/**
* check memory and cpu usage
* @param systemCpuLoad maximum allowed cpu load average
* @param systemReservedMemory minimum reserved physical memory, in G
* @return true if the current load and available memory are within the given limits
*/
public static Boolean checkResource(double systemCpuLoad, double systemReservedMemory){
// judging usage
double loadAverage = OSUtils.loadAverage();
//
double availablePhysicalMemorySize = OSUtils.availablePhysicalMemorySize();
if(loadAverage > systemCpuLoad || availablePhysicalMemorySize < systemReservedMemory){
logger.warn("load or availablePhysicalMemorySize(G) is too high, it's availablePhysicalMemorySize(G):{},loadAvg:{}", availablePhysicalMemorySize , loadAverage);
return false;
}else{
return true;
}
}
/**
* check memory and cpu usage
......
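
The new signature is used by the master and worker threads later in this diff, roughly as in the fragment below (masterConfig/workerConfig are the new config beans introduced by this change):

    // master side (MasterExecThread / MasterSchedulerThread)
    boolean masterCanRun = OSUtils.checkResource(
            masterConfig.getMasterMaxCpuloadAvg(),    // default 100
            masterConfig.getMasterReservedMemory());  // default 0.1 G

    // worker side (FetchTaskThread), combined with a thread pool check
    boolean workerCanRun = OSUtils.checkResource(
            workerConfig.getWorkerMaxCpuloadAvg(),
            workerConfig.getWorkerReservedMemory());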
......@@ -16,18 +16,19 @@
*/
package org.apache.dolphinscheduler.server.master;
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.lang3.StringUtils;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.IStoppable;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.common.thread.ThreadPoolExecutors;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.OSUtils;
import org.apache.dolphinscheduler.dao.ProcessDao;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.runner.MasterSchedulerThread;
import org.apache.dolphinscheduler.server.quartz.ProcessScheduleJob;
import org.apache.dolphinscheduler.server.quartz.QuartzExecutors;
import org.apache.dolphinscheduler.server.utils.SpringApplicationContext;
import org.apache.dolphinscheduler.server.zk.ZKMasterClient;
import org.quartz.SchedulerException;
import org.slf4j.Logger;
......@@ -45,7 +46,7 @@ import java.util.concurrent.TimeUnit;
* master server
*/
@ComponentScan("org.apache.dolphinscheduler")
public class MasterServer extends AbstractServer {
public class MasterServer implements IStoppable {
/**
* logger of MasterServer
......@@ -73,6 +74,19 @@ public class MasterServer extends AbstractServer {
*/
private ExecutorService masterSchedulerService;
/**
* spring application context
* only use it for initialization
*/
@Autowired
private SpringApplicationContext springApplicationContext;
/**
* master config
*/
@Autowired
private MasterConfig masterConfig;
/**
* master server startup
......@@ -91,26 +105,10 @@ public class MasterServer extends AbstractServer {
@PostConstruct
public void run(){
try {
conf = new PropertiesConfiguration(Constants.MASTER_PROPERTIES_PATH);
}catch (ConfigurationException e){
logger.error("load configuration failed : " + e.getMessage(),e);
System.exit(1);
}
masterSchedulerService = ThreadUtils.newDaemonSingleThreadExecutor("Master-Scheduler-Thread");
zkMasterClient = ZKMasterClient.getZKMasterClient(processDao);
// heartbeat interval
heartBeatInterval = conf.getInt(Constants.MASTER_HEARTBEAT_INTERVAL,
Constants.defaultMasterHeartbeatInterval);
// master exec thread pool num
int masterExecThreadNum = conf.getInt(Constants.MASTER_EXEC_THREADS,
Constants.defaultMasterExecThreadNum);
heartbeatMasterService = ThreadUtils.newDaemonThreadScheduledExecutor("Master-Main-Thread",Constants.defaulMasterHeartbeatThreadNum);
// heartbeat thread implement
......@@ -121,13 +119,13 @@ public class MasterServer extends AbstractServer {
// regular heartbeat
// delay 5 seconds, send heartbeat every 30 seconds
heartbeatMasterService.
scheduleAtFixedRate(heartBeatThread, 5, heartBeatInterval, TimeUnit.SECONDS);
scheduleAtFixedRate(heartBeatThread, 5, masterConfig.getMasterHeartbeatInterval(), TimeUnit.SECONDS);
// master scheduler thread
MasterSchedulerThread masterSchedulerThread = new MasterSchedulerThread(
zkMasterClient,
processDao,conf,
masterExecThreadNum);
processDao,
masterConfig.getMasterExecThreads());
// submit master scheduler thread
masterSchedulerService.execute(masterSchedulerThread);
......
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.config;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
@Component
public class MasterConfig {
@Value("${master.exec.threads:100}")
private int masterExecThreads;
@Value("${master.exec.task.num:20}")
private int masterExecTaskNum;
@Value("${master.heartbeat.interval:10}")
private int masterHeartbeatInterval;
@Value("${master.task.commit.retryTimes:5}")
private int masterTaskCommitRetryTimes;
@Value("${master.task.commit.interval:100}")
private int masterTaskCommitInterval;
@Value("${master.max.cpuload.avg:100}")
private double masterMaxCpuloadAvg;
@Value("${master.reserved.memory:0.1}")
private double masterReservedMemory;
public int getMasterExecThreads() {
return masterExecThreads;
}
public void setMasterExecThreads(int masterExecThreads) {
this.masterExecThreads = masterExecThreads;
}
public int getMasterExecTaskNum() {
return masterExecTaskNum;
}
public void setMasterExecTaskNum(int masterExecTaskNum) {
this.masterExecTaskNum = masterExecTaskNum;
}
public int getMasterHeartbeatInterval() {
return masterHeartbeatInterval;
}
public void setMasterHeartbeatInterval(int masterHeartbeatInterval) {
this.masterHeartbeatInterval = masterHeartbeatInterval;
}
public int getMasterTaskCommitRetryTimes() {
return masterTaskCommitRetryTimes;
}
public void setMasterTaskCommitRetryTimes(int masterTaskCommitRetryTimes) {
this.masterTaskCommitRetryTimes = masterTaskCommitRetryTimes;
}
public int getMasterTaskCommitInterval() {
return masterTaskCommitInterval;
}
public void setMasterTaskCommitInterval(int masterTaskCommitInterval) {
this.masterTaskCommitInterval = masterTaskCommitInterval;
}
public double getMasterMaxCpuloadAvg() {
return masterMaxCpuloadAvg;
}
public void setMasterMaxCpuloadAvg(double masterMaxCpuloadAvg) {
this.masterMaxCpuloadAvg = masterMaxCpuloadAvg;
}
public double getMasterReservedMemory() {
return masterReservedMemory;
}
public void setMasterReservedMemory(double masterReservedMemory) {
this.masterReservedMemory = masterReservedMemory;
}
}
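
MasterConfig is consumed in two ways later in this diff; a minimal sketch of both, using only calls that appear in the patch itself:

    // inside a Spring-managed bean such as MasterServer
    @Autowired
    private MasterConfig masterConfig;

    // inside plain threads such as MasterExecThread / MasterBaseTaskExecThread,
    // which are created with new and therefore look the bean up manually
    MasterConfig masterConfig = SpringApplicationContext.getBean(MasterConfig.class);
    int execThreads = masterConfig.getMasterExecThreads(); // falls back to 100 when master.exec.threads is unset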
......@@ -16,7 +16,6 @@
*/
package org.apache.dolphinscheduler.server.master.runner;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.queue.ITaskQueue;
import org.apache.dolphinscheduler.common.queue.TaskQueueFactory;
import org.apache.dolphinscheduler.dao.AlertDao;
......@@ -24,9 +23,8 @@ import org.apache.dolphinscheduler.dao.ProcessDao;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.utils.BeanContext;
import org.apache.commons.configuration.Configuration;
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.utils.SpringApplicationContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
......@@ -73,18 +71,9 @@ public class MasterBaseTaskExecThread implements Callable<Boolean> {
protected boolean cancel;
/**
* load configuration file
* master config
*/
private static Configuration conf;
static {
try {
conf = new PropertiesConfiguration(Constants.MASTER_PROPERTIES_PATH);
} catch (ConfigurationException e) {
logger.error(e.getMessage(), e);
System.exit(1);
}
}
private MasterConfig masterConfig;
/**
* constructor of MasterBaseTaskExecThread
......@@ -98,6 +87,7 @@ public class MasterBaseTaskExecThread implements Callable<Boolean> {
this.taskQueue = TaskQueueFactory.getTaskQueueInstance();
this.cancel = false;
this.taskInstance = taskInstance;
this.masterConfig = SpringApplicationContext.getBean(MasterConfig.class);
}
/**
......@@ -120,10 +110,8 @@ public class MasterBaseTaskExecThread implements Callable<Boolean> {
* @return TaskInstance
*/
protected TaskInstance submit(){
Integer commitRetryTimes = conf.getInt(Constants.MASTER_COMMIT_RETRY_TIMES,
Constants.defaultMasterCommitRetryTimes);
Integer commitRetryInterval = conf.getInt(Constants.MASTER_COMMIT_RETRY_INTERVAL,
Constants.defaultMasterCommitRetryInterval);
Integer commitRetryTimes = masterConfig.getMasterTaskCommitRetryTimes();
Integer commitRetryInterval = masterConfig.getMasterTaskCommitInterval();
int retryTimes = 1;
......
......@@ -16,6 +16,9 @@
*/
package org.apache.dolphinscheduler.server.master.runner;
import com.alibaba.fastjson.JSONObject;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.*;
import org.apache.dolphinscheduler.common.graph.DAG;
......@@ -24,22 +27,16 @@ import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.ProcessDag;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.dao.DaoFactory;
import org.apache.dolphinscheduler.common.utils.*;
import org.apache.dolphinscheduler.dao.ProcessDao;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.utils.DagHelper;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.utils.AlertManager;
import com.alibaba.fastjson.JSONObject;
import org.apache.commons.configuration.Configuration;
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.dolphinscheduler.common.utils.*;
import org.apache.dolphinscheduler.server.utils.SpringApplicationContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import java.io.File;
import java.io.IOException;
......@@ -131,9 +128,9 @@ public class MasterExecThread implements Runnable {
private ProcessDao processDao;
/**
* load configuration file
* master config
*/
private static Configuration conf;
private MasterConfig masterConfig;
/**
* constructor of MasterExecThread
......@@ -144,23 +141,13 @@ public class MasterExecThread implements Runnable {
this.processDao = processDao;
this.processInstance = processInstance;
int masterTaskExecNum = conf.getInt(Constants.MASTER_EXEC_TASK_THREADS,
Constants.defaultMasterTaskExecNum);
this.masterConfig = SpringApplicationContext.getBean(MasterConfig.class);
int masterTaskExecNum = masterConfig.getMasterExecTaskNum();
this.taskExecService = ThreadUtils.newDaemonFixedThreadExecutor("Master-Task-Exec-Thread",
masterTaskExecNum);
}
static {
try {
conf = new PropertiesConfiguration(Constants.MASTER_PROPERTIES_PATH);
}catch (ConfigurationException e){
logger.error("load configuration failed : " + e.getMessage(),e);
System.exit(1);
}
}
@Override
public void run() {
......@@ -927,7 +914,7 @@ public class MasterExecThread implements Runnable {
* @return boolean
*/
private boolean canSubmitTaskToQueue() {
return OSUtils.checkResource(conf, true);
return OSUtils.checkResource(masterConfig.getMasterMaxCpuloadAvg(), masterConfig.getMasterReservedMemory());
}
......
......@@ -16,6 +16,8 @@
*/
package org.apache.dolphinscheduler.server.master.runner;
import org.apache.curator.framework.imps.CuratorFrameworkState;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
......@@ -24,10 +26,9 @@ import org.apache.dolphinscheduler.common.zk.AbstractZKClient;
import org.apache.dolphinscheduler.dao.ProcessDao;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.utils.SpringApplicationContext;
import org.apache.dolphinscheduler.server.zk.ZKMasterClient;
import org.apache.commons.configuration.Configuration;
import org.apache.curator.framework.imps.CuratorFrameworkState;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
......@@ -65,23 +66,23 @@ public class MasterSchedulerThread implements Runnable {
private int masterExecThreadNum;
/**
* Configuration of MasterSchedulerThread
* master config
*/
private final Configuration conf;
private MasterConfig masterConfig;
/**
* constructor of MasterSchedulerThread
* @param zkClient zookeeper master client
* @param processDao process dao
* @param conf conf
* @param masterExecThreadNum master exec thread num
*/
public MasterSchedulerThread(ZKMasterClient zkClient, ProcessDao processDao, Configuration conf, int masterExecThreadNum){
public MasterSchedulerThread(ZKMasterClient zkClient, ProcessDao processDao, int masterExecThreadNum){
this.processDao = processDao;
this.zkMasterClient = zkClient;
this.conf = conf;
this.masterExecThreadNum = masterExecThreadNum;
this.masterExecService = ThreadUtils.newDaemonFixedThreadExecutor("Master-Exec-Thread",masterExecThreadNum);
this.masterConfig = SpringApplicationContext.getBean(MasterConfig.class);
}
/**
......@@ -97,7 +98,7 @@ public class MasterSchedulerThread implements Runnable {
InterProcessMutex mutex = null;
try {
if(OSUtils.checkResource(conf, true)){
if(OSUtils.checkResource(masterConfig.getMasterMaxCpuloadAvg(), masterConfig.getMasterReservedMemory())){
if (zkMasterClient.getZkClient().getState() == CuratorFrameworkState.STARTED) {
// create distributed lock with the root node path of the lock space as /dolphinscheduler/lock/failover/master
......
......@@ -16,11 +16,10 @@
*/
package org.apache.dolphinscheduler.server.worker;
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.lang.StringUtils;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.IStoppable;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.TaskType;
import org.apache.dolphinscheduler.common.queue.ITaskQueue;
......@@ -34,9 +33,9 @@ import org.apache.dolphinscheduler.common.zk.AbstractZKClient;
import org.apache.dolphinscheduler.dao.AlertDao;
import org.apache.dolphinscheduler.dao.ProcessDao;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.server.master.AbstractServer;
import org.apache.dolphinscheduler.server.utils.ProcessUtils;
import org.apache.dolphinscheduler.server.utils.SpringApplicationContext;
import org.apache.dolphinscheduler.server.worker.config.WorkerConfig;
import org.apache.dolphinscheduler.server.worker.runner.FetchTaskThread;
import org.apache.dolphinscheduler.server.zk.ZKWorkerClient;
import org.slf4j.Logger;
......@@ -57,7 +56,7 @@ import java.util.concurrent.TimeUnit;
* worker server
*/
@ComponentScan("org.apache.dolphinscheduler")
public class WorkerServer extends AbstractServer {
public class WorkerServer implements IStoppable {
/**
* logger
......@@ -115,12 +114,12 @@ public class WorkerServer extends AbstractServer {
*/
private CountDownLatch latch;
/**
* If inside combined server, WorkerServer no need to await on CountDownLatch
*/
@Value("${server.is-combined-server:false}")
private Boolean isCombinedServer;
@Autowired
private WorkerConfig workerConfig;
/**
* master server startup
*
......@@ -138,13 +137,6 @@ public class WorkerServer extends AbstractServer {
@PostConstruct
public void run(){
try {
conf = new PropertiesConfiguration(Constants.WORKER_PROPERTIES_PATH);
}catch (ConfigurationException e){
logger.error("load configuration failed",e);
System.exit(1);
}
zkWorkerClient = ZKWorkerClient.getZKWorkerClient();
this.taskQueue = TaskQueueFactory.getTaskQueueInstance();
......@@ -153,10 +145,6 @@ public class WorkerServer extends AbstractServer {
this.fetchTaskExecutorService = ThreadUtils.newDaemonSingleThreadExecutor("Worker-Fetch-Thread-Executor");
// heartbeat interval
heartBeatInterval = conf.getInt(Constants.WORKER_HEARTBEAT_INTERVAL,
Constants.defaultWorkerHeartbeatInterval);
heartbeatWorkerService = ThreadUtils.newDaemonThreadScheduledExecutor("Worker-Heartbeat-Thread-Executor", Constants.defaulWorkerHeartbeatThreadNum);
// heartbeat thread implement
......@@ -166,7 +154,7 @@ public class WorkerServer extends AbstractServer {
// regular heartbeat
// delay 5 seconds, send heartbeat every 30 seconds
heartbeatWorkerService.scheduleAtFixedRate(heartBeatThread, 5, heartBeatInterval, TimeUnit.SECONDS);
heartbeatWorkerService.scheduleAtFixedRate(heartBeatThread, 5, workerConfig.getWorkerHeartbeatInterval(), TimeUnit.SECONDS);
// kill process thread implement
Runnable killProcessThread = getKillProcessThread();
......@@ -174,13 +162,8 @@ public class WorkerServer extends AbstractServer {
// submit kill process thread
killExecutorService.execute(killProcessThread);
// get worker number of concurrent tasks
int taskNum = conf.getInt(Constants.WORKER_FETCH_TASK_NUM,Constants.defaultWorkerFetchTaskNum);
// new fetch task thread
FetchTaskThread fetchTaskThread = new FetchTaskThread(taskNum,zkWorkerClient, processDao,conf, taskQueue);
FetchTaskThread fetchTaskThread = new FetchTaskThread(zkWorkerClient, processDao, taskQueue);
// submit fetch task thread
fetchTaskExecutorService.execute(fetchTaskThread);
......
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.worker.config;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
@Component
public class WorkerConfig {
@Value("${worker.exec.threads:100}")
private int workerExecThreads;
@Value("${worker.heartbeat.interval:10}")
private int workerHeartbeatInterval;
@Value("${worker.fetch.task.num:3}")
private int workerFetchTaskNum;
@Value("${worker.max.cpuload.avg:10}")
private int workerMaxCpuloadAvg;
@Value("${master.reserved.memory:1}")
private double workerReservedMemory;
public int getWorkerExecThreads() {
return workerExecThreads;
}
public void setWorkerExecThreads(int workerExecThreads) {
this.workerExecThreads = workerExecThreads;
}
public int getWorkerHeartbeatInterval() {
return workerHeartbeatInterval;
}
public void setWorkerHeartbeatInterval(int workerHeartbeatInterval) {
this.workerHeartbeatInterval = workerHeartbeatInterval;
}
public int getWorkerFetchTaskNum() {
return workerFetchTaskNum;
}
public void setWorkerFetchTaskNum(int workerFetchTaskNum) {
this.workerFetchTaskNum = workerFetchTaskNum;
}
public double getWorkerReservedMemory() {
return workerReservedMemory;
}
public void setWorkerReservedMemory(double workerReservedMemory) {
this.workerReservedMemory = workerReservedMemory;
}
public int getWorkerMaxCpuloadAvg() {
return workerMaxCpuloadAvg;
}
public void setWorkerMaxCpuloadAvg(int workerMaxCpuloadAvg) {
this.workerMaxCpuloadAvg = workerMaxCpuloadAvg;
}
}
......@@ -16,6 +16,8 @@
*/
package org.apache.dolphinscheduler.server.worker.runner;
import org.apache.commons.lang3.StringUtils;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.queue.ITaskQueue;
import org.apache.dolphinscheduler.common.thread.Stopper;
......@@ -28,10 +30,9 @@ import org.apache.dolphinscheduler.dao.ProcessDao;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.WorkerGroup;
import org.apache.dolphinscheduler.server.utils.SpringApplicationContext;
import org.apache.dolphinscheduler.server.worker.config.WorkerConfig;
import org.apache.dolphinscheduler.server.zk.ZKWorkerClient;
import org.apache.commons.configuration.Configuration;
import org.apache.commons.lang3.StringUtils;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
......@@ -77,11 +78,6 @@ public class FetchTaskThread implements Runnable{
*/
private int workerExecNums;
/**
* conf
*/
private Configuration conf;
/**
* task instance
*/
......@@ -92,18 +88,22 @@ public class FetchTaskThread implements Runnable{
*/
Integer taskInstId;
public FetchTaskThread(int taskNum, ZKWorkerClient zkWorkerClient,
ProcessDao processDao, Configuration conf,
/**
* worker config
*/
private WorkerConfig workerConfig;
public FetchTaskThread(ZKWorkerClient zkWorkerClient,
ProcessDao processDao,
ITaskQueue taskQueue){
this.taskNum = taskNum;
this.zkWorkerClient = zkWorkerClient;
this.processDao = processDao;
this.workerExecNums = conf.getInt(Constants.WORKER_EXEC_THREADS,
Constants.defaultWorkerExecThreadNum);
// worker thread pool executor
this.workerExecService = ThreadUtils.newDaemonFixedThreadExecutor("Worker-Fetch-Task-Thread",workerExecNums);
this.conf = conf;
this.taskQueue = taskQueue;
this.workerConfig = SpringApplicationContext.getBean(WorkerConfig.class);
this.taskNum = workerConfig.getWorkerFetchTaskNum();
this.workerExecNums = workerConfig.getWorkerExecThreads();
// worker thread pool executor
this.workerExecService = ThreadUtils.newDaemonFixedThreadExecutor("Worker-Fetch-Task-Thread", workerExecNums);
this.taskInstance = null;
}
......@@ -145,7 +145,7 @@ public class FetchTaskThread implements Runnable{
try {
ThreadPoolExecutor poolExecutor = (ThreadPoolExecutor) workerExecService;
//check memory and cpu usage and threads
boolean runCheckFlag = OSUtils.checkResource(this.conf, false) && checkThreadCount(poolExecutor);
boolean runCheckFlag = OSUtils.checkResource(workerConfig.getWorkerMaxCpuloadAvg(), workerConfig.getWorkerReservedMemory()) && checkThreadCount(poolExecutor);
Thread.sleep(Constants.SLEEP_TIME_MILLIS);
......
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# master execute thread num
master.exec.threads=100
# master execute task number in parallel
master.exec.task.number=20
# master heartbeat interval
master.heartbeat.interval=10
# master commit task retry times
master.task.commit.retryTimes=5
# master commit task interval
master.task.commit.interval=100
# only less than cpu avg load, master server can work. default value : the number of cpu cores * 2
master.max.cpuload.avg=100
# only larger than reserved memory, master server can work. default value : physical memory * 1/10, unit is G.
master.reserved.memory=0.1
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# worker execute thread num
worker.exec.threads=100
# worker heartbeat interval
worker.heartbeat.interval=10
# submit the number of tasks at a time
worker.fetch.task.num=3
# only less than cpu avg load, worker server can work. default value : the number of cpu cores * 2
#worker.max.cpuload.avg=10
# only larger than reserved memory, worker server can work. default value : physical memory * 1/6, unit is G.
worker.reserved.memory=1
\ No newline at end of file
......@@ -25,6 +25,7 @@
size="xsmall"
type="ghost"
@click="_copy('gbudp-' + $index)"
:key="$index"
:data-clipboard-text="item.prop + ' = ' +item.value"
:class="'gbudp-' + $index">
<b style="color: #2A455B;">{{item.prop}}</b> = {{item.value}}
......@@ -38,12 +39,12 @@
&nbsp;
</div>
</div>
<div class="list list-t" v-for="(item,key,$index) in list.localParams">
<div class="list list-t" v-for="(item,key,$index) in list.localParams" :key="$index">
<div class="task-name">Task({{$index}}):{{key}}</div>
<div class="var-cont" v-if="item.localParamsList.length">
<template v-for="(el,index) in item.localParamsList">
<x-button size="xsmall" type="ghost" @click="_copy('copy-part-' + index)" :data-clipboard-text="_rtClipboard(el,item.taskType)" :class="'copy-part-' + index">
<span v-for="(e,k,i) in el">
<x-button size="xsmall" type="ghost" :key="index" @click="_copy('copy-part-' + index)" :data-clipboard-text="_rtClipboard(el,item.taskType)" :class="'copy-part-' + index">
<span v-for="(e,k,i) in el" :key="i">
<template v-if="item.taskType === 'SQL' || item.taskType === 'PROCEDURE'">
<template v-if="(k !== 'direct' && k !== 'type')">
<b style="color: #2A455B;">{{k}}</b> = {{e}}
......
......@@ -32,6 +32,7 @@
<x-radio :label="'CLICKHOUSE'">CLICKHOUSE</x-radio>
<x-radio :label="'ORACLE'">ORACLE</x-radio>
<x-radio :label="'SQLSERVER'">SQLSERVER</x-radio>
<x-radio :label="'DB2'" class="radio-label-last" >DB2</x-radio>
</x-radio-group>
</template>
</m-list-box-f>
......@@ -362,7 +363,7 @@
})
}
},
mounted () {
},
components: { mPopup, mListBoxF }
......@@ -402,5 +403,10 @@
}
}
}
.radio-label-last {
margin-left: 0px !important;
}
}
</style>
......@@ -17,7 +17,7 @@
<template>
<div>
<div class="servers-wrapper mysql-model content-wrap" v-show="mysqlList.length">
<div class="row" v-for="(item,$index) in mysqlList">
<div class="row" v-for="(item,$index) in mysqlList" :key="$index">
<div class="col-md-12">
<div class="db-title">
<span>{{item.dbType+$t('Manage')}}</span>
......
......@@ -18,7 +18,7 @@
<m-list-construction :title="'Master ' + $t('Manage')">
<template slot="content">
<div class="servers-wrapper" v-show="masterList.length">
<div class="row-box" v-for="(item,$index) in masterList">
<div class="row-box" v-for="(item,$index) in masterList" :key="$index">
<div class="row-title">
<div class="left">
<span class="sp">IP: {{item.host}}</span>
......
......@@ -18,7 +18,7 @@
<m-list-construction :title="'Worker ' + $t('Manage')">
<template slot="content">
<div class="servers-wrapper" v-show="workerList.length">
<div class="row-box" v-for="(item,$index) in workerList">
<div class="row-box" v-for="(item,$index) in workerList" :key="$index">
<div class="row-title">
<div class="left">
<span class="sp">IP: {{item.host}}</span>
......
......@@ -17,7 +17,7 @@
<template>
<div class="ans-input email-model">
<div class="clearfix input-element" :class="disabled ? 'disabled' : ''">
<span class="tag-wrapper" v-for="(item,$index) in activeList" :class="activeIndex === $index ? 'active' : ''">
<span class="tag-wrapper" v-for="(item,$index) in activeList" :key="$index" :class="activeIndex === $index ? 'active' : ''">
<span class="tag-text">{{item}}</span>
<i class="remove-tag ans-icon-close" @click.stop="_del($index)" v-if="!disabled"></i>
</span>
......@@ -31,7 +31,7 @@
<div class="ans-scroller" style=" max-height: 300px;">
<div class="scroll-area-wrapper scroll-transition">
<ul class="dropdown-container">
<li class="ans-option" v-for="(item,$index) in emailList" @click.stop="_selectEmail($index + 1)">
<li class="ans-option" v-for="(item,$index) in emailList" @click.stop="_selectEmail($index + 1)" :key="$index">
<span class="default-option-class" :class="index === ($index + 1) ? 'active' : ''">{{item}}</span>
</li>
</ul>
......
......@@ -42,7 +42,7 @@
<a href="javascript:">
<span>Node Type</span>
</a>
<a href="javascript:" v-for="(k,v) in tasksType">
<a href="javascript:" v-for="(k,v) in tasksType" :key="v">
<i class="fa fa-circle" :style="{color:k.color}"></i>
<span>{{v}}</span>
</a>
......@@ -51,7 +51,7 @@
<a href="javascript:">
<span>{{$t('Task Status')}}</span>
</a>
<a href="javascript:" v-for="(item) in tasksState">
<a href="javascript:" v-for="(item) in tasksState" :key="item.id">
<i class="fa fa-square" :style="{color:item.color}"></i>
<span>{{item.desc}}</span>
</a>
......
......@@ -23,7 +23,7 @@
<a href="javascript:">
<span>{{$t('Task Status')}}</span>
</a>
<a href="javascript:" v-for="(item) in tasksState">
<a href="javascript:" v-for="(item) in tasksState" :key="item.id">
<i class="fa fa-square" :style="{color:item.color}"></i>
<span>{{item.desc}}</span>
</a>
......
......@@ -123,7 +123,7 @@
}
})
this.$nextTick(() => {
this.queueId = this.queueList[0]
this.queueId = this.queueList[0].id
})
resolve()
})
......
......@@ -87,6 +87,11 @@ export default {
id: 6,
code: 'SQLSERVER',
disabled: false
},
{
id: 7,
code: 'DB2',
disabled: false
}
],
// Alarm interface
......
......@@ -44,7 +44,7 @@
</template>
<ul v-if="item.isOpen && item.children.length">
<template v-for="(el,index) in item.children">
<router-link :to="{ name: el.path}" tag="li" active-class="active" v-if="el.disabled">
<router-link :to="{ name: el.path}" tag="li" active-class="active" v-if="el.disabled" :key="index">
<span>{{el.name}}</span>
</router-link>
</template>
......
......@@ -36,7 +36,7 @@
</div>-->
<div class="scrollbar tf-content">
<ul>
<li v-for="(item,$index) in sourceList" :key="item.id" @click="_ckSource(item)">
<li v-for="(item,$index) in sourceList" :key="$index" @click="_ckSource(item)">
<span>{{item.name}}</span>
<a href="javascript:"></a>
</li>
......@@ -62,7 +62,7 @@
</div>-->
<div class="scrollbar tf-content">
<ul>
<li v-for="(item,$index) in targetList" :key="item.id" @click="_ckTarget(item)"><span>{{item.name}}</span></li>
<li v-for="(item,$index) in targetList" :key="$index" @click="_ckTarget(item)"><span>{{item.name}}</span></li>
</ul>
</div>
</div>
......
......@@ -386,19 +386,7 @@ sed -i ${txt} "s#zookeeper.connection.timeout.*#zookeeper.connection.timeout=${z
sed -i ${txt} "s#zookeeper.retry.sleep.*#zookeeper.retry.sleep=${zkRetrySleep}#g" conf/zookeeper.properties
sed -i ${txt} "s#zookeeper.retry.maxtime.*#zookeeper.retry.maxtime=${zkRetryMaxtime}#g" conf/zookeeper.properties
sed -i ${txt} "s#master.exec.threads.*#master.exec.threads=${masterExecThreads}#g" conf/master.properties
sed -i ${txt} "s#master.exec.task.number.*#master.exec.task.number=${masterExecTaskNum}#g" conf/master.properties
sed -i ${txt} "s#master.heartbeat.interval.*#master.heartbeat.interval=${masterHeartbeatInterval}#g" conf/master.properties
sed -i ${txt} "s#master.task.commit.retryTimes.*#master.task.commit.retryTimes=${masterTaskCommitRetryTimes}#g" conf/master.properties
sed -i ${txt} "s#master.task.commit.interval.*#master.task.commit.interval=${masterTaskCommitInterval}#g" conf/master.properties
sed -i ${txt} "s#master.reserved.memory.*#master.reserved.memory=${masterReservedMemory}#g" conf/master.properties
sed -i ${txt} "s#server.port.*#server.port=${masterPort}#g" conf/application-master.properties
sed -i ${txt} "s#worker.exec.threads.*#worker.exec.threads=${workerExecThreads}#g" conf/worker.properties
sed -i ${txt} "s#worker.heartbeat.interval.*#worker.heartbeat.interval=${workerHeartbeatInterval}#g" conf/worker.properties
sed -i ${txt} "s#worker.fetch.task.num.*#worker.fetch.task.num=${workerFetchTaskNum}#g" conf/worker.properties
sed -i ${txt} "s#worker.reserved.memory.*#worker.reserved.memory=${workerReservedMemory}#g" conf/worker.properties
sed -i ${txt} "s#server.port.*#server.port=${workerPort}#g" conf/application-worker.properties
......
......@@ -109,6 +109,7 @@
<maven-source-plugin.version>2.4</maven-source-plugin.version>
<maven-surefire-plugin.version>2.18.1</maven-surefire-plugin.version>
<jacoco.version>0.8.4</jacoco.version>
<jcip.version>1.0</jcip.version>
<maven.deploy.skip>false</maven.deploy.skip>
</properties>
......@@ -517,6 +518,7 @@
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>${maven-assembly-plugin.version}</version>
</plugin>
......