<img src="./docs/pics/ms-operator-in-kubeflow.png" alt="ms-operator in Kubeflow" width="600"/>
...
First, pull the ms-operator image from [Docker Hub](https://hub.docker.com/r/mindspore/ms-operator):
```
docker pull mindspore/ms-operator:latest
```
Or you can build the ms-operator image on your local machine:
```
docker build . -t mindspore/ms-operator
```
...
The pulled or locally built image then shows up in `docker images` as `mindspore/ms-operator:latest` (image ID `729960ae415e`).
The MindSpore image we download from Docker Hub is the `0.1.0-alpha` version:
```
REPOSITORY                 TAG           IMAGE ID       CREATED       SIZE
mindspore/mindspore-cpu    0.1.0-alpha   9a124f33ed27   2 hours ago   1.16GB
```
MindSpore supports heterogeneous computing including multiple hardware and
...
Kubernetes:
```
cd examples/ && kubectl apply -f ms-mnist.yaml
```
The job simply imports the MindSpore packages, trains for a single epoch on the dataset already included in the `MNIST_Data` folder, and prints the result, so it should take only a short time. After the job completes, you should be able to check the job status and see the result logs. The source training code is in the `examples/` folder; a simplified sketch of such a script follows the commands below.
```
kubectl get pod msjob-mnist && kubectl logs msjob-mnist
```
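The actual training code lives in the `examples/` folder; the block below is only a hypothetical, simplified sketch of what such a one-epoch MNIST job could look like. The `MNIST_Data/train` and `MNIST_Data/test` paths and the tiny network are illustrative assumptions, and the API names follow recent MindSpore releases rather than `0.1.0-alpha`:
```
# NOTE: a hypothetical, simplified sketch -- not the script shipped in examples/.
# It shows the shape of such a job: load MNIST from the bundled MNIST_Data
# folder, train one epoch on CPU, and print the evaluation result.
import mindspore as ms
import mindspore.dataset as ds
import mindspore.dataset.transforms as transforms
import mindspore.dataset.vision as vision
import mindspore.nn as nn
from mindspore import context
from mindspore.train import Model

# The CPU image is used here; device_target defaults to "Ascend" otherwise.
context.set_context(mode=context.GRAPH_MODE, device_target="CPU")

def create_dataset(data_dir, batch_size=32):
    # MNIST_Data/train and MNIST_Data/test sub-folders are an assumed layout.
    mnist = ds.MnistDataset(data_dir)
    mnist = mnist.map(operations=vision.Rescale(1.0 / 255.0, 0.0), input_columns="image")
    mnist = mnist.map(operations=transforms.TypeCast(ms.int32), input_columns="label")
    return mnist.batch(batch_size)

class Net(nn.Cell):
    """A deliberately tiny fully-connected classifier."""
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.fc1 = nn.Dense(28 * 28, 128)
        self.relu = nn.ReLU()
        self.fc2 = nn.Dense(128, 10)

    def construct(self, x):
        x = self.flatten(x)
        x = self.relu(self.fc1(x))
        return self.fc2(x)

if __name__ == "__main__":
    net = Net()
    loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
    opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
    model = Model(net, loss, opt, metrics={"accuracy"})
    model.train(1, create_dataset("MNIST_Data/train"), dataset_sink_mode=False)  # one epoch only
    print(model.eval(create_dataset("MNIST_Data/test"), dataset_sink_mode=False))
```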
...
including:
- etc.
The MindSpore community is working to collaborate with the Kubeflow
community, to make the ms-operator more capable and well-organized, and to
keep its dependencies up to date. All these components make it easy for
machine learning engineers and data scientists to leverage cloud assets
(public or on-premise) for machine learning workloads.
...
including multiple hardware and backends (`CPU`, `GPU`, `Ascend`);
the `device_target` of MindSpore is `Ascend` by default.
We define the `masterPort` that the groups will use to communicate with the master service.
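For illustration, a training script typically selects its backend explicitly through MindSpore's `context.set_context`; a minimal sketch, assuming the CPU image from above is used:
```
# Minimal sketch: choose the backend explicitly. device_target defaults to
# "Ascend"; the CPU image used earlier in this guide needs "CPU".
from mindspore import context

context.set_context(mode=context.GRAPH_MODE, device_target="CPU")  # or "GPU" / "Ascend"
```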
## Community
- [MindSpore Slack](https://join.slack.com/t/mindspore/shared_invite/enQtOTcwMTIxMDI3NjM0LTNkMWM2MzI5NjIyZWU5ZWQ5M2EwMTQ5MWNiYzMxOGM4OWFhZjI4M2E5OGI2YTg3ODU1ODE2Njg1MThiNWI3YmQ) - Ask questions and find answers.
## Contributing
Contributions are welcome. See our [Contributor Wiki](https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md) for more details.