# Cluster Management
In many production environments, the backend needs to support high throughput and provide high availability (HA) to remain robust, so cluster management is always required in production.

The backend provides several ways to do cluster management. Choose the one that fits your needs.

- [Zookeeper coordinator](#zookeeper-coordinator). Use Zookeeper to let backend instances detect and communicate with each other.
- [Kubernetes](#kubernetes). When the backend cluster is deployed inside Kubernetes, you could choose this to manage the cluster through k8s native APIs.
- [Consul](#consul). Use Consul as the backend cluster management implementor to coordinate backend instances.
- [Nacos](#nacos). Use Nacos to coordinate backend instances.
- [Etcd](#etcd). Use Etcd to coordinate backend instances.

## Zookeeper coordinator
Zookeeper is a very common and widely used cluster coordinator. Set the **cluster** module's implementor to **zookeeper** in the yml to activate it.

Requires Zookeeper version 3.4+.

```yaml
cluster:
  zookeeper:
    nameSpace: ${SW_NAMESPACE:""}
    hostPort: ${SW_CLUSTER_ZK_HOST_PORT:localhost:2181}
    # Retry Policy
    baseSleepTimeMs: 1000 # initial amount of time to wait between retries
    maxRetries: 3 # max number of times to retry
```

- `hostPort` is the list of Zookeeper servers. The format is `IP1:PORT1,IP2:PORT2,...,IPn:PORTn`.
- `hostPort`, `baseSleepTimeMs` and `maxRetries` are settings of the Zookeeper curator client.
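
For example, pointing `hostPort` at a three-node Zookeeper ensemble (the addresses below are placeholders for your own environment):

```yaml
cluster:
  zookeeper:
    hostPort: 10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181
```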

In some cases, the OAP default gRPC host and port in the core module are not suitable for internal communication among the OAP nodes.
The following settings are provided to set the host and port manually, based on your own LAN environment.
- internalComHost, the host registered; other OAP nodes use this to communicate with the current node.
- internalComPort, the port registered; other OAP nodes use this to communicate with the current node.

```yaml
zookeeper:
  nameSpace: ${SW_NAMESPACE:""}
  hostPort: ${SW_CLUSTER_ZK_HOST_PORT:localhost:2181}
  # Retry Policy
  baseSleepTimeMs: ${SW_CLUSTER_ZK_SLEEP_TIME:1000} # initial amount of time to wait between retries
  maxRetries: ${SW_CLUSTER_ZK_MAX_RETRIES:3} # max number of times to retry
  internalComHost: 172.10.4.10
  internalComPort: 11800
``` 


## Kubernetes
Requires the backend cluster to be deployed inside Kubernetes; guides are in [Deploy in kubernetes](backend-k8s.md).
Set the implementor to `kubernetes`.

```yaml
cluster:
  kubernetes:
    watchTimeoutSeconds: 60
    namespace: default
    labelSelector: app=collector,release=skywalking
    uidEnvName: SKYWALKING_COLLECTOR_UID
```
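
For reference, `uidEnvName` names an environment variable that is expected to carry the pod's own UID. A minimal, hypothetical pod-spec fragment (not part of `application.yml`) showing how such a variable is typically injected via the Kubernetes downward API, assuming your deployment follows this pattern:

```yaml
# Hypothetical fragment of the OAP Deployment/pod spec: expose the pod UID
# under the variable name configured by `uidEnvName`.
env:
  - name: SKYWALKING_COLLECTOR_UID
    valueFrom:
      fieldRef:
        fieldPath: metadata.uid
```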

## Consul
Consul has become a widely used system; many companies and developers use Consul as their service discovery solution. Set the **cluster** module's implementor to **consul** in the yml to activate it.

```yaml
cluster:
  consul:
    serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    # Consul cluster nodes, example: 10.0.0.1:8500,10.0.0.2:8500,10.0.0.3:8500
    hostPort: ${SW_CLUSTER_CONSUL_HOST_PORT:localhost:8500}
```

Same as the Zookeeper coordinator, in some cases the OAP default gRPC host and port in the core module are not suitable for internal communication among the OAP nodes.
The following settings are provided to set the host and port manually, based on your own LAN environment.
- internalComHost, the host registered; other OAP nodes use this to communicate with the current node.
- internalComPort, the port registered; other OAP nodes use this to communicate with the current node.
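
A sketch of how this could look for Consul, following the same pattern as the Zookeeper example above (the internal address below is a placeholder for your own environment):

```yaml
cluster:
  consul:
    serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    hostPort: ${SW_CLUSTER_CONSUL_HOST_PORT:localhost:8500}
    internalComHost: 172.10.4.10
    internalComPort: 11800
```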


## Nacos
Set the **cluster** module's implementor to **nacos** in the yml to activate it.

```yaml
cluster:
  nacos:
    serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    # Nacos cluster nodes, example: 10.0.0.1:8848,10.0.0.2:8848,10.0.0.3:8848
    hostPort: ${SW_CLUSTER_NACOS_HOST_PORT:localhost:8848}
```

## Etcd
Set the **cluster** module's implementor to **etcd** in the yml to activate it.

```yaml
cluster:
  etcd:
    serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    # etcd cluster nodes, example: 10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379
    hostPort: ${SW_CLUSTER_ETCD_HOST_PORT:localhost:2379}
```