Unverified commit ad02a6de authored by Yuan Tang, committed by GitHub

Add note on the use of kubectl kustomize (#1610)

Parent 842b6556
@@ -13,7 +13,7 @@ The MPI Operator makes it easy to run allreduce-style distributed training on Kubernetes
## Installation
You can deploy the operator with default settings without using `kustomize` by running the following commands:
You can deploy the operator with default settings by running the following commands:
```shell
git clone https://github.com/kubeflow/mpi-operator
@@ -40,7 +40,7 @@ mpijobs.kubeflow.org 4d
...
```
If it is not included you can add it as follows:
If it is not included you can add it as follows using [kustomize](https://github.com/kubernetes-sigs/kustomize):
```bash
git clone https://github.com/kubeflow/manifests
@@ -48,6 +48,12 @@ cd manifests/mpi-job/mpi-operator
kustomize build base | kubectl apply -f -
```
Note that since Kubernetes v1.14, `kustomize` became a subcommand in `kubectl` so you can also run the following command instead:
```bash
kubectl kustomize base | kubectl apply -f -
```
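Either way, a quick sanity check can confirm that your client is new enough for the built-in subcommand and that the MPIJob CRD is registered afterwards. This is a minimal sketch; the exact output will vary by cluster:
```bash
# Confirm the kubectl client is v1.14 or newer, as required for the built-in kustomize subcommand
kubectl version --client

# Confirm the MPIJob CRD is registered after applying the manifests
kubectl get crd mpijobs.kubeflow.org
```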
## Creating an MPI Job
You can create an MPI job by defining an `MPIJob` config file. See [TensorFlow benchmark example](https://github.com/kubeflow/mpi-operator/blob/master/examples/v1alpha2/tensorflow-benchmarks.yaml) config file for launching a multi-node TensorFlow benchmark training job. You may change the config file based on your requirements.
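For example, once you have adapted the config file, applying it and inspecting the created resource is a short sequence. This is a sketch that assumes you are in the root of the cloned mpi-operator repository and that the resource name `tensorflow-benchmarks` matches the example file:
```bash
# Apply the TensorFlow benchmark example referenced above
kubectl apply -f examples/v1alpha2/tensorflow-benchmarks.yaml

# Verify the MPIJob resource was created and inspect its status
kubectl get mpijobs
kubectl describe mpijob tensorflow-benchmarks
```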