# 16. Setting Up a Cluster Environment with kubeadm

## Architecture
In the previous lesson we covered the basic concepts of k8s and its main components. With those concepts in hand we could start using it in earnest, but so far all of our demos ran on katacoda, which only gives us about 15 minutes of playground time. The best approach is therefore to build a k8s environment of our own by hand. Before we do, let's look at a richer k8s architecture diagram.
![k8s architecture](./images/k8s-structure.jpeg)

* Core layer: Kubernetes' most essential functionality; it exposes APIs for building higher-level applications externally, and provides a pluggable application runtime environment internally
* Application layer: deployment (stateless applications, stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
* Management layer: system metrics (infrastructure, container, and network metrics), automation (auto-scaling, dynamic provisioning, etc.), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
* Interface layer: the kubectl command-line tool, client SDKs, and cluster federation
* Ecosystem: the large ecosystem for container cluster management and scheduling built on top of the interface layer, which falls into two categories
    * Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, etc.
    * Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, the cluster's own configuration and management, etc.

Now that we have a deeper understanding of the k8s cluster architecture, we can formally install our k8s cluster environment. Here we use the `kubeadm` tool to build the cluster.

`kubeadm` is the official tool provided by `Kubernetes` for quickly installing a `Kubernetes` cluster. It installs and manages each cluster component as a container, which makes installation considerably more convenient than the binary approach. However, `kubeadm` is currently still in beta and not yet ready for production use, although the [Using kubeadm to Create a Cluster documentation](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) states that kubeadm will soon be production-ready. If you need a production cluster today, we recommend our earlier article [Manually Building a Highly Available Kubernetes Cluster](https://blog.qikqiak.com/post/manual-install-high-available-kubernetes-cluster/) or the [video course](https://www.haimaxy.com/course/pjrqxm/?utm_source=k8s).

## Environment
Here we prepare two `CentOS 7` hosts for the installation; additional nodes can be added later as needed:
```shell
$ cat /etc/hosts
10.151.30.57 master
10.151.30.62 node01
```
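
If the hostnames have not yet been set to match these entries, a small sketch using `hostnamectl` (run the matching command on each machine):
```shell
# On 10.151.30.57
$ hostnamectl set-hostname master
# On 10.151.30.62
$ hostnamectl set-hostname node01
```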

Disable the firewall:
```shell
$ systemctl stop firewalld
$ systemctl disable firewalld
```

Disable SELinux:
```shell
$ setenforce 0
$ cat /etc/selinux/config
SELINUX=disabled
```
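
To disable SELinux at runtime and persist the change across reboots in one go, a small `sed` sketch (assuming the stock `SELINUX=enforcing` line in the config file):
```shell
$ setenforce 0
# Persist the change so SELinux stays disabled after a reboot
$ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```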

Create the file `/etc/sysctl.d/k8s.conf` and add the following content:
```shell
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```

Run the following commands for the changes to take effect:
```shell
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
```
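
Note that `modprobe` only loads the module for the current boot. To have `br_netfilter` loaded automatically on every boot, a sketch using systemd's modules-load mechanism (the file name `k8s.conf` is arbitrary):
```shell
$ cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```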

## Images
If your nodes can reach `gcr.io` directly, you can skip this step. Otherwise we need to pull the required `gcr.io` images onto the nodes ahead of time; the prerequisite, of course, is that you have already installed `docker` successfully. On the master node, run the following commands:
```shell
docker pull cnych/kube-apiserver-amd64:v1.10.0
docker pull cnych/kube-scheduler-amd64:v1.10.0
docker pull cnych/kube-controller-manager-amd64:v1.10.0
docker pull cnych/kube-proxy-amd64:v1.10.0
docker pull cnych/k8s-dns-kube-dns-amd64:1.14.8
docker pull cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull cnych/k8s-dns-sidecar-amd64:1.14.8
docker pull cnych/etcd-amd64:3.1.12
docker pull cnych/flannel:v0.10.0-amd64
docker pull cnych/pause-amd64:3.1

docker tag cnych/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag cnych/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag cnych/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag cnych/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag cnych/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
```
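
You can save the commands above as a shell script and run it directly. A loop-based sketch that mirrors the same pull-and-retag list:
```shell
#!/bin/bash
# Pull each image from the cnych mirror on Docker Hub, then retag it
# with its original k8s.gcr.io name so kubeadm can find it locally.
images=(
    kube-apiserver-amd64:v1.10.0
    kube-scheduler-amd64:v1.10.0
    kube-controller-manager-amd64:v1.10.0
    kube-proxy-amd64:v1.10.0
    k8s-dns-kube-dns-amd64:1.14.8
    k8s-dns-dnsmasq-nanny-amd64:1.14.8
    k8s-dns-sidecar-amd64:1.14.8
    etcd-amd64:3.1.12
    pause-amd64:3.1
)
for img in "${images[@]}"; do
    docker pull cnych/${img}
    docker tag cnych/${img} k8s.gcr.io/${img}
done
# flannel is hosted under quay.io/coreos rather than k8s.gcr.io
docker pull cnych/flannel:v0.10.0-amd64
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
```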

These are the images required on the master node, so make sure they are pulled ahead of time. On the other Node machines, run the following commands:
```shell
docker pull cnych/kube-proxy-amd64:v1.10.0
docker pull cnych/flannel:v0.10.0-amd64
docker pull cnych/pause-amd64:3.1
docker pull cnych/kubernetes-dashboard-amd64:v1.8.3
docker pull cnych/heapster-influxdb-amd64:v1.3.3
docker pull cnych/heapster-grafana-amd64:v4.4.3
docker pull cnych/heapster-amd64:v1.4.2
docker pull cnych/k8s-dns-kube-dns-amd64:1.14.8
docker pull cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull cnych/k8s-dns-sidecar-amd64:1.14.8

docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0

docker tag cnych/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag cnych/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8

docker tag cnych/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag cnych/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag cnych/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag cnych/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2
```

The images above are the ones needed on the Node machines; they too need to be pulled onto each node before running join.

## Installing kubeadm, kubelet, and kubectl
With docker installed, the environment configuration above completed, and the required images downloaded (skippable with direct internet access), we can now install kubeadm. We do this by pointing yum at a dedicated repository:
```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```

Of course, the `yum` repository above also requires direct access to Google; if that is not possible, we can install from the Aliyun mirror instead:
```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

> **Note:** The Aliyun mirror has changed how dependencies are handled, so if you want to install a 1.10.0 cluster you need to install each package separately with the following commands:

```shell
$ yum makecache fast 
$ yum install -y kubelet-1.10.0-0
$ yum install -y kubectl-1.10.0-0
$ yum install -y kubeadm-1.10.0-0
```

Under normal circumstances all of these packages should install without issue.

## Configuring kubelet
After installation we still need to configure `kubelet`. The config file generated when `kubelet` is installed from the yum repository sets the `--cgroup-driver` parameter to `systemd`, while docker's cgroup driver is `cgroupfs`; the two must match. We can check docker's driver with the `docker info` command:
```shell
$ docker info |grep Cgroup
Cgroup Driver: cgroupfs
```

Edit the kubelet config file `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and change the `KUBELET_CGROUP_ARGS` parameter to `cgroupfs`:
```shell
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
```
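
The same edit can also be made with a one-line `sed` (a sketch, assuming the file still contains the stock `cgroup-driver=systemd` setting):
```shell
$ sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```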

There is one more issue, concerning the swap partition. As mentioned in our earlier article on manually building a highly available kubernetes cluster, Kubernetes requires system swap to be disabled as of 1.8; with the default configuration, kubelet will not start otherwise. We can lift this restriction with the kubelet startup parameter `--fail-swap-on=false`, so we add one more line to the config file above (before `ExecStart`):
```shell
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
```

Of course, the best option is simply to turn swap off (a sketch follows below), which also improves kubelet's performance. After making the changes, reload the configuration:
```shell
$ systemctl daemon-reload
```
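
If you do decide to turn swap off instead, a minimal sketch (assuming a standard swap entry in `/etc/fstab`):
```shell
# Turn swap off now, and comment out the fstab entry so it stays off
$ swapoff -a
$ sed -i '/ swap / s/^/#/' /etc/fstab
```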

## Initializing the Cluster
That completes the preparation work. Now we can initialize our cluster on the `master` node with the `kubeadm` command:
```shell
$ kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.151.30.57
```

The command is straightforward: `kubeadm init`, followed by the cluster version to install. Because we choose `flannel` as the Pod network plugin, we must pass `--pod-network-cidr=10.244.0.0/16`; then comes the apiserver's advertise address, which is simply our master node's IP. If running the command produces an error like `running with swap on is not supported. Please disable swap`, we also need to add the parameter `--ignore-preflight-errors=Swap` to ignore the swap error:
```shell
$ kubeadm init \
   --kubernetes-version=v1.10.0 \
   --pod-network-cidr=10.244.0.0/16 \
   --apiserver-advertise-address=10.151.30.57 \
   --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system pathSuggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ydzs-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.151.30.57]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ydzs-master1] and IPs [10.151.30.57]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 22.007661 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ydzs-master1 as master by adding a label and a taint
[markmaster] Master ydzs-master1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 8xomlq.0cdf2pbvjs2gjho3
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.151.30.57:6443 --token 8xomlq.0cdf2pbvjs2gjho3 --discovery-token-ca-cert-hash sha256:92802317cb393682c1d1356c15e8b4ec8af2b8e5143ffd04d8be4eafb5fae368
```

Be sure to save the join command printed above. If you forget to save the token and the sha256 value, there is no need to worry; we can look up the token with the following command:
```shell
$ kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
i5gbaw.os1iow5tdo17rwdu   23h       2018-05-18T01:32:55+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
```
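
If the token has already expired (bootstrap tokens are valid for 24 hours by default), you can generate a new one; a sketch (`--print-join-command` is available in recent kubeadm releases):
```shell
$ kubeadm token create --print-join-command
```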

To get the sha256 value of the CA certificate, we can read the certificate with `openssl`:
```shell
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
e9ca4d9550e698105f1d8fae7ecfd297dd9331ca7d50b5493fa0491b2b4df40c
```

Also note that the current version of kubeadm supports Docker versions up to 17.03, so keep that in mind.
The output above records kubeadm's entire cluster initialization process: generating the various certificates, kubeconfig files, the bootstrap token, and so on. Near the end is the `kubeadm join` command used to add nodes to the cluster, and the following commands configure kubectl access to the cluster:
```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Finally, it prints the command for joining nodes to the cluster:
```shell
kubeadm join 10.151.30.57:6443 --token 8xomlq.0cdf2pbvjs2gjho3 --discovery-token-ca-cert-hash sha256:92802317cb393682c1d1356c15e8b4ec8af2b8e5143ffd04d8be4eafb5fae368
```

Once kubectl is configured according to the instructions above, we can use it to inspect the cluster:
```shell
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
$ kubectl get csr
NAME                                                   AGE       REQUESTOR                 CONDITION
node-csr-8qygb8Hjxj-byhbRHawropk81LHNPqZCTePeWoZs3-g   1h        system:bootstrap:8xomlq   Approved,Issued
$ kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
ydzs-master1   Ready     master    3h        v1.10.0
```

If you run into other problems during the cluster installation, you can reset everything with the following commands:
```shell
$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
```

## Installing the Pod Network
Next we install the `flannel` network plugin. It is simple, no different from deploying an ordinary Pod:
```shell
$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f  kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
```

Also note that if your nodes have more than one network interface, you need to use the `--iface` parameter in kube-flannel.yml to specify the name of the host's internal-network interface; otherwise DNS resolution may fail. Add `--iface=<iface-name>` to the flanneld startup arguments:
```yaml
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth0
```
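
To confirm which interface carries the node's internal address, the `ip` tool from iproute2 helps; a sketch using our master IP:
```shell
# The "dev" field in the output is the interface used to reach the master
$ ip route get 10.151.30.57
```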

After installation, the `kubectl get pods` command shows the status of the components in our cluster. If everything is in the Running state, congratulations, your master node was installed successfully.
```shell
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   etcd-ydzs-master1                      1/1       Running   0          10m
kube-system   kube-apiserver-ydzs-master1            1/1       Running   0          10m
kube-system   kube-controller-manager-ydzs-master1   1/1       Running   0          10m
kube-system   kube-dns-86f4d74b45-f5595              3/3       Running   0          10m
kube-system   kube-flannel-ds-qxjs2                  1/1       Running   0          1m
kube-system   kube-proxy-vf5fg                       1/1       Running   0          10m
kube-system   kube-scheduler-ydzs-master1            1/1       Running   0          10m
```

## Adding Nodes
After going through the same environment configuration, docker installation, and kubeadm, kubelet, and kubectl setup on the Node (10.151.30.62), we can run the `kubeadm join` command printed during initialization directly on the Node, again adding the `--ignore-preflight-errors=Swap` parameter:
```shell
$ kubeadm join 10.151.30.57:6443 --token 8xomlq.0cdf2pbvjs2gjho3 --discovery-token-ca-cert-hash sha256:92802317cb393682c1d1356c15e8b4ec8af2b8e5143ffd04d8be4eafb5fae368 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks.
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.151.30.57:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.151.30.57:6443"
[discovery] Requesting info from "https://10.151.30.57:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.151.30.57:6443"
[discovery] Successfully established connection with API Server "10.151.30.57:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```

If you see an error like `[discovery] Failed to request cluster info, will try again: [Get https://10.151.30.27:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]`, it is most likely caused by clock skew between the master and the node; synchronize the time with ntp and then run init and join again.

If join fails with `[ERROR CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: fork/exec /usr/bin/crictl -r /var/run/dockershim.sock info: no such file or directory`, the error is caused by the cri-tools version, and you can simply remove the package: `yum remove cri-tools`.


We can see that the node has joined the cluster. Next, copy the master's `~/.kube/config` file to the corresponding location on this node (for example with `scp`, as sketched below) and you can use the kubectl command-line tool there as well.
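A minimal copy sketch, assuming SSH access from the master to `node01` (the hostname from our hosts file):
```shell
# Run on the master: create the target directory, then copy the kubeconfig
$ ssh node01 "mkdir -p ~/.kube"
$ scp ~/.kube/config node01:~/.kube/config
```

With the kubeconfig in place, kubectl on the node can see the whole cluster: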
```shell
$ kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
evjfaxic       Ready     <none>    1h        v1.10.0
ydzs-master1   Ready     master    3h        v1.10.0
```

With that, our cluster deployment is complete. Next we can install add-ons as needed, such as Dashboard, Heapster, and Ingress-Controller. The installation methods for these add-ons are the same as in our earlier manual cluster installation, so we won't repeat them here; feel free to leave a comment on `github` if you run into problems.
![kubeadm dashboard](./images/kubeadm-dashboard.png)

---
[Click here to watch the video for this article](https://youdianzhishi.com/course/6n8xd6/)

Scan the QR code below (or search for `k8s技术圈` on WeChat) to follow our official WeChat account, and reply **加群** in the account to join our kubernetes discussion group and learn together.

![k8s技术圈 QR code](https://www.qikqiak.com/img/posts/qrcode_for_gh_d6dd87b6ceb4_430.jpg)