Unverified commit 84668575 authored by Ader Fu, committed by GitHub

separate commands from output in docs/user (#5802)

* Separate commands from output in docs/user

* Add language-annotated code blocks per the Markdown spec to make docs/user easier to read.
Signed-off-by: NydFu <ader.ydfu@gmail.com>
Parent 047e8908
......@@ -74,7 +74,7 @@ You can grant full admin privileges to Dashboard's Service Account by creating b
### Official release
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
......@@ -92,7 +92,7 @@ subjects:
### Development release
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
......
......@@ -10,7 +10,7 @@ For each of the following snippets for `ServiceAccount` and `ClusterRoleBinding`
We are creating a Service Account named `admin-user` in namespace `kubernetes-dashboard` first.
```
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
......@@ -25,7 +25,7 @@ EOF
In most cases after provisioning a cluster using `kops`, `kubeadm` or any other popular tool, the `ClusterRole` `cluster-admin` already exists in the cluster. We can use it and create only a `ClusterRoleBinding` for our `ServiceAccount`.
If it does not exist then you need to create this role first and grant required privileges manually.
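In case `cluster-admin` really is missing, a minimal manifest could look like the sketch below. This mirrors the rule set Kubernetes itself bootstraps for `cluster-admin` (full access to all resources and non-resource URLs); it is shown here only for clusters where the role was removed.

```yaml
# Sketch: recreate the cluster-admin ClusterRole manually.
# Normally this role is bootstrapped by Kubernetes itself.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-admin
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
- nonResourceURLs: ["*"]
  verbs: ["*"]
```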
```
```shell
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
......@@ -46,7 +46,7 @@ EOF
Now we need to find the token we can use to log in. Execute the following command:
```bash
```shell
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
```
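Note that on Kubernetes v1.24 and later, token Secrets are no longer auto-created for ServiceAccounts, so the command above may return nothing. In that case a short-lived token can be requested directly via the TokenRequest API (enabled by default on recent clusters):

```shell
# On Kubernetes v1.24+, request a short-lived token directly instead of
# reading it from an auto-created Secret (which no longer exists by default).
kubectl -n kubernetes-dashboard create token admin-user
```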
......@@ -68,7 +68,7 @@ Click `Sign in` button and that's it. You are now logged in as an admin.
Remove the admin `ServiceAccount` and `ClusterRoleBinding`.
```
```shell
kubectl -n kubernetes-dashboard delete serviceaccount admin-user
kubectl -n kubernetes-dashboard delete clusterrolebinding admin-user
```
......
......@@ -15,9 +15,13 @@ As the alternative setup is recommended for advanced users only, we'll not descr
First let's check if `kubectl` is properly configured and has access to the cluster. In case of error follow [this guide](https://kubernetes.io/docs/tasks/tools/install-kubectl/) to install and set up `kubectl`.
```shell
kubectl cluster-info
```
The output is similar to this:
```
$ kubectl cluster-info
# Example output
Kubernetes master is running at https://192.168.30.148:6443
KubeDNS is running at https://192.168.30.148:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
......@@ -26,8 +30,13 @@ To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Start the local proxy server.
```shell
kubectl proxy
```
The output is similar to this:
```
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
```
......@@ -42,12 +51,12 @@ http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kube
Instead of `kubectl proxy`, you can use `kubectl port-forward` and access the Dashboard with a simpler URL.
```bash
$ kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443
```shell
kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443
```
To access Kubernetes Dashboard go to:
```bash
```shell
https://localhost:8080
```
......@@ -57,13 +66,13 @@ This way of accessing Dashboard is only recommended for development environments
Edit `kubernetes-dashboard` service.
```
$ kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
```shell
kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
```
You should see a `yaml` representation of the service. Change `type: ClusterIP` to `type: NodePort` and save the file. If it's already changed, go to the next step.
```
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
......@@ -92,8 +101,13 @@ status:
Next we need to check the port on which Dashboard was exposed.
```shell
kubectl -n kubernetes-dashboard get service kubernetes-dashboard
```
The output is similar to this:
```
$ kubectl -n kubernetes-dashboard get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.100.124.90 <nodes> 443:31707/TCP 21h
```
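For scripting this step, the exposed NodePort can also be extracted directly with a `jsonpath` query instead of reading it from the table output (a sketch; it assumes the service has already been switched to `NodePort`):

```shell
# Print only the NodePort assigned to the dashboard service
# (assumes type has already been changed to NodePort).
kubectl -n kubernetes-dashboard get service kubernetes-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}'
```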
......
......@@ -21,7 +21,7 @@ In case you want to generate certificates on your own you need library like [Ope
A private key and certificate signing request are required to create an SSL certificate. These can be generated with a few simple commands. When the openssl req command asks for a “challenge password”, just press return, leaving the password empty. This password is used by Certificate Authorities to authenticate the certificate owner when they want to revoke their certificate. Since this is a self-signed certificate, there’s no way to revoke it via CRL (Certificate Revocation List).
```
```shell
openssl genrsa -des3 -passout pass:over4chars -out dashboard.pass.key 2048
...
openssl rsa -passin pass:over4chars -in dashboard.pass.key -out dashboard.key
......@@ -39,7 +39,7 @@ A challenge password []:
The self-signed SSL certificate is generated from the `dashboard.key` private key and `dashboard.csr` files.
```
```shell
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
```
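The key-then-sign flow above can also be collapsed into a single command on reasonably recent OpenSSL versions; this is a sketch using the same output file names as above, with the subject passed non-interactively via `-subj`:

```shell
# One-shot alternative: generate an unencrypted 2048-bit RSA key and a
# self-signed certificate valid for 365 days in a single command.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout dashboard.key -out dashboard.crt \
  -subj "/CN=kubernetes-dashboard"
```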
......
......@@ -12,18 +12,18 @@ By default self-signed certificates are generated and stored in-memory. In case
Custom certificates have to be stored in a secret named `kubernetes-dashboard-certs` in the same namespace as Kubernetes Dashboard. Assuming that you have `tls.crt` and `tls.key` files stored under the `$HOME/certs` directory, you should create a secret with the contents of these files:
```
```shell
kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kubernetes-dashboard
```
For Dashboard to pick up the certificates, you must pass the arguments `--tls-cert-file=/tls.crt` and `--tls-key-file=/tls.key` to the container. You can edit the YAML definition and deploy Dashboard in one go:
```
```shell
kubectl create --edit -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
```
Under the Deployment section, add the arguments to the pod definition; it should look as follows:
```
```yaml
containers:
- args:
- --tls-cert-file=/tls.crt
......@@ -37,7 +37,7 @@ This setup is not fully secure. Certificates are not used and Dashboard is expos
To deploy Dashboard, execute the following command:
```
```shell
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/alternative.yaml
```
......@@ -50,8 +50,8 @@ Besides official releases, there are also development releases, that are pushed
In most use cases you need to execute the following command to deploy the latest development release:
```
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/head.yaml
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/head.yaml
```
### Update
......@@ -60,8 +60,13 @@ Once installed, the deployment is not automatically updated. In order to update
Delete all Dashboard pods (assuming that Dashboard is deployed in kubernetes-dashboard namespace):
```
```shell
kubectl -n kubernetes-dashboard delete $(kubectl -n kubernetes-dashboard get pod -o name | grep dashboard)
```
The output is similar to this:
```
pod "dashboard-metrics-scraper-fb986f88d-gnfnk" deleted
pod "kubernetes-dashboard-7d8b9cc8d-npljm" deleted
```
......