# Distributed PaddlePaddle Training on AWS with Kubernetes

We will show you, step by step, how to run distributed PaddlePaddle training on an AWS cluster with Kubernetes. Let's start with the core concepts.

## Distributed PaddlePaddle Training Core Concepts

### Distributed Training Job

A distributed training job is represented by a [Kubernetes job](https://kubernetes.io/docs/user-guide/jobs/#what-is-a-job).

Each Kubernetes job is described by a job config file, which specifies information such as the number of [pods](https://kubernetes.io/docs/user-guide/pods/#what-is-a-pod) in the job and environment variables.

In a distributed training job, we would:

1. prepare partitioned training data and configuration file on a distributed file system (in this tutorial we use Amazon Elastic File System), and
1. create and submit the Kubernetes job config to the Kubernetes cluster to start the training job.

### Parameter Servers and Trainers

There are two roles in a PaddlePaddle cluster: *parameter server (pserver)* and *trainer*. Each parameter server process maintains a shard of the global model. Each trainer has its local copy of the model, and uses its local data to update the model. During the training process, trainers send model updates to the parameter servers, and the parameter servers aggregate these updates, so that trainers can synchronize their local copies with the global model.

<center>![Model is partitioned into two shards. Managed by two parameter servers respectively.](src/pserver_and_trainer.png)</center>

In order to communicate with the pservers, each trainer needs to know the IP address of every pserver. In Kubernetes it's better to use a service discovery mechanism (e.g., DNS hostnames) rather than static IP addresses, since any pserver's pod may be killed and a new pod could be scheduled onto another node with a different IP address. However, for now we are using static IPs. This will be improved.

The parameter server and trainer are packaged into the same docker image. They start running once the pod is scheduled by the Kubernetes job.

### Trainer ID

Each trainer process requires a trainer ID, a zero-based index value, passed in as a command-line parameter. The trainer process thus reads the data partition indexed by this ID.

### Training

The entry-point of a container is a shell script. It can see some environment variables pre-defined by Kubernetes. This includes one that gives the job's identity, which can be used in a remote call to the Kubernetes apiserver that lists all pods in the job.

We rank the pods by sorting their IP addresses, and each pod's rank serves as its "pod ID". Because we run one trainer and one parameter server in each pod, we can use this "pod ID" as the trainer ID. The detailed workflow of the entry-point script is as follows (a sketch of such a script appears after the list):

1. Query the apiserver to get pod information, and assign the `trainer_id` by sorting the IPs.
1. Copy the training data from the EFS persistent volume into the container.
1. Parse the `paddle pserver` and `paddle trainer` startup parameters from environment variables, and then start up the processes.
1. The trainer with `trainer_id` 0 will automatically write results onto the EFS volume.
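Below is a minimal sketch of such an entry-point script, assuming `kubectl` (or an equivalent apiserver call) is available inside the container; the label selector, paths, and `paddle` flags are illustrative placeholders rather than the exact contents of the real image:

```
# Sketch only: assign a trainer ID by ranking pod IPs, then start training.
JOB_NAME=paddle-cluster-job

# 1. List the IPs of all pods in this job, sorted, by asking the apiserver.
POD_IPS=$(kubectl get pods -l job-name=${JOB_NAME} \
          -o jsonpath='{.items[*].status.podIP}' | tr ' ' '\n' | sort)

# 2. This pod's rank in the sorted list is its zero-based trainer ID.
MY_IP=$(hostname -i)
TRAINER_ID=$(($(echo "${POD_IPS}" | grep -n "^${MY_IP}$" | cut -d: -f1) - 1))

# 3. Copy this trainer's data shard from EFS, then start pserver and trainer
#    (flags shown for illustration; the real values come from environment
#    variables such as CONF_PADDLE_PORT described later in this tutorial).
cp -r /efs/paddle-cluster-job/train/${TRAINER_ID} /data
paddle pserver --port=7164 --ports_num=2 &
paddle train --trainer_id=${TRAINER_ID} --trainer_count=3
```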


## PaddlePaddle on AWS with Kubernetes

### Create AWS Account and IAM Account

Under each AWS account, we can create multiple [IAM](http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) users. This allows us to grant some privileges to each IAM user and to create/operate AWS clusters as an IAM user.

To sign up an AWS account, please follow [this guide](http://docs.aws.amazon.com/lambda/latest/dg/setting-up.html).
To create IAM users and user groups under an AWS account, please follow [this guide](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html).

Please be aware that this tutorial needs the following privileges for the IAM user:

- AmazonEC2FullAccess
- AmazonS3FullAccess
- AmazonRoute53FullAccess
- AmazonRoute53DomainsFullAccess
- AmazonElasticFileSystemFullAccess
- AmazonVPCFullAccess
- IAMUserSSHKeys
- IAMFullAccess
- NetworkAdministrator
- AWSKeyManagementServicePowerUser


### Download kube-aws and kubectl

#### kube-aws

[kube-aws](https://github.com/coreos/kube-aws) is a CLI tool to automate cluster deployment to AWS.

Import the CoreOS Application Signing Public Key:

```
gpg2 --keyserver pgp.mit.edu --recv-key FC8A365E
```

Validate the key fingerprint:

```
gpg2 --fingerprint FC8A365E
```
The correct key fingerprint is `18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E`.

We can download `kube-aws` from its [release page](https://github.com/coreos/kube-aws/releases). In this tutorial, we use version 0.9.1.

Validate the tarball's GPG signature:

```
PLATFORM=linux-amd64
# Or: PLATFORM=darwin-amd64

gpg2 --verify kube-aws-${PLATFORM}.tar.gz.sig kube-aws-${PLATFORM}.tar.gz
```

Extract the binary:

```
tar zxvf kube-aws-${PLATFORM}.tar.gz
```

Add kube-aws to your path:

```
mv ${PLATFORM}/kube-aws /usr/local/bin
```


#### kubectl

[kubectl](https://kubernetes.io/docs/user-guide/kubectl-overview/) is a command line interface for running commands against Kubernetes clusters.

Download `kubectl` from the Kubernetes release artifact site with the `curl` tool.

```
# OS X
curl -O https://storage.googleapis.com/kubernetes-release/release/"$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)"/bin/darwin/amd64/kubectl

# Linux
curl -O https://storage.googleapis.com/kubernetes-release/release/"$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)"/bin/linux/amd64/kubectl
```

Make the kubectl binary executable and move it to your PATH (e.g. `/usr/local/bin`):

```
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```
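You can verify the installation with:

```
kubectl version --client
```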

### Configure AWS Credentials

First, check out [this guide](http://docs.aws.amazon.com/cli/latest/userguide/installing.html) for installing the AWS command line interface.

Then configure your AWS account information:

```
aws configure
```


Fill in the required fields:


```
AWS Access Key ID: YOUR_ACCESS_KEY_ID
AWS Secret Access Key: YOUR_SECRET_ACCESS_KEY
Default region name: us-west-2
Default output format: json
```

`YOUR_ACCESS_KEY_ID` and `YOUR_SECRET_ACCESS_KEY` are the IAM access key and secret from [Create AWS Account and IAM Account](#create-aws-account-and-iam-account).

Verify that your credentials work by describing any instances you may already have running on your account:

```
aws ec2 describe-instances
```

### Define Cluster Parameters

#### EC2 key pair

This key pair will authenticate SSH access to your EC2 instances. The public half of the key pair will be configured on each CoreOS node.

Follow the [EC2 Keypair User Guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) to create an EC2 key pair.

After creating a key pair, you will use the key pair name to configure the cluster.

Key pairs are only available to EC2 instances in the same region. We are using us-west-2 in our tutorial, so make sure to create the key pair in that region (Oregon).

Your browser will download a `key-name.pem` file, which is the private key used to access the EC2 instances. We will use it later.


#### KMS key

Amazon KMS keys are used to encrypt and decrypt cluster TLS assets. If you already have a KMS Key that you would like to use, you can skip creating a new key and provide the Arn string for your existing key.

You can create a KMS key with the aws command line tool:

```
aws kms --region=us-west-2 create-key --description="kube-aws assets"
{
    "KeyMetadata": {
        "CreationDate": 1458235139.724,
        "KeyState": "Enabled",
        "Arn": "arn:aws:kms:us-west-2:aaaaaaaaaaaaa:key/xxxxxxxxxxxxxxxxxxx",
        "AWSAccountId": "xxxxxxxxxxxxx",
        "Enabled": true,
        "KeyUsage": "ENCRYPT_DECRYPT",
        "KeyId": "xxxxxxxxx",
        "Description": "kube-aws assets"
    }
}
```

We will need to use the value of `Arn` later.
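If you are scripting this step, the AWS CLI's standard `--query` and `--output` options can capture the Arn in a shell variable at creation time (an alternative to copying it from the JSON above):

```
# Create the key and capture its Arn directly.
KMS_KEY_ARN=$(aws kms --region=us-west-2 create-key \
  --description="kube-aws assets" \
  --query KeyMetadata.Arn --output text)
echo ${KMS_KEY_ARN}
```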

Then let's add an inline policy to your IAM user's permissions.

Go to the [IAM Console](https://console.aws.amazon.com/iam/home?region=us-west-2#/home). Click the `Users` button, click the user that we just created, then click the `Add inline policy` button and select `Custom Policy`.

Paste in the following inline policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1482205552000",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:Encrypt"
            ],
            "Resource": [
                "arn:aws:kms:*:AWS_ACCOUNT_ID:key/*"
            ]
        },
        {
            "Sid": "Stmt1482205746000",
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack",
                "cloudformation:DeleteStack",
                "cloudformation:DescribeStacks",
                "cloudformation:DescribeStackResource",
                "cloudformation:GetTemplate",
                "cloudformation:DescribeStackEvents"
            ],
            "Resource": [
                "arn:aws:cloudformation:us-west-2:AWS_ACCOUNT_ID:stack/MY_CLUSTER_NAME/*"
            ]
        }
    ]
}
```

`AWS_ACCOUNT_ID`: You can get it with the following command:

```
aws sts get-caller-identity --output text --query Account
```

`MY_CLUSTER_NAME`: Pick a cluster name that you like; you will use it later as well.

#### External DNS name

When the cluster is created, the controller will expose the TLS-secured API on a DNS name.

The DNS name should have a CNAME record pointing to the cluster DNS name, or an A record pointing to the cluster IP address.

We will need to use this DNS name later in the tutorial.

#### S3 bucket

You need to create an S3 bucket before starting up the Kubernetes cluster.

There are some bugs in the AWS CLI when creating an S3 bucket, so let's use the [S3 Console](https://console.aws.amazon.com/s3/home?region=us-west-2).

Click on `Create Bucket`, fill in a unique `BUCKET_NAME`, and make sure the region is us-west-2 (Oregon).


#### Initialize Assets

Create a directory on your local machine to hold the generated assets:

```
$ mkdir my-cluster
$ cd my-cluster
```

Initialize the cluster CloudFormation stack with the KMS Arn, key pair name, and DNS name from the previous steps:

```
kube-aws init \
--cluster-name=MY_CLUSTER_NAME \
--external-dns-name=MY_EXTERNAL_DNS_NAME \
--region=us-west-2 \
--availability-zone=us-west-2a \
--key-name=KEY_PAIR_NAME \
--kms-key-arn="arn:aws:kms:us-west-2:xxxxxxxxxx:key/xxxxxxxxxxxxxxxxxxx"
```

`MY_CLUSTER_NAME`: the one you picked in [KMS key](#kms-key)

`MY_EXTERNAL_DNS_NAME`: see [External DNS name](#external-dns-name)

`KEY_PAIR_NAME`: see [EC2 key pair](#ec2-key-pair)

`--kms-key-arn`: the "Arn" in [KMS key](#kms-key)

Here `us-west-2a` is used for the `--availability-zone` parameter, but the supported availability zones vary among AWS accounts.

Please check whether `us-west-2a` is supported by running `aws ec2 --region us-west-2 describe-availability-zones`; if not, switch to another supported availability zone (e.g., `us-west-2b` or `us-west-2c`).
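For example, the following uses the AWS CLI's standard `--query` and `--output` options to list the zones available to your account:

```
aws ec2 --region us-west-2 describe-availability-zones \
  --query 'AvailabilityZones[].[ZoneName,State]' --output table
```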


There will now be a `cluster.yaml` file in the asset directory. This is the main configuration file for your cluster.

By default `kube-aws` will only create one worker node. Let's edit `cluster.yaml` and change `workerCount` from 1 to 3 (see the sketch below).
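A one-liner such as the following can make the change, assuming `cluster.yaml` carries a `workerCount` line (possibly commented out, as the kube-aws template of that era did); editing the file by hand works just as well:

```
# Uncomment (if needed) and set workerCount to 3; keeps a .bak backup.
sed -i.bak -E 's/^#?[[:space:]]*workerCount:.*/workerCount: 3/' cluster.yaml
```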


#### Render contents of the asset directory

In the simplest case, you can have kube-aws generate both your TLS identities and certificate authority for you.

```
kube-aws render credentials --generate-ca
```

The next command generates the default set of cluster assets in your asset directory.

```
kube-aws render stack
```
Assets (templates and credentials) that are used to create, update and interact with your Kubernetes cluster will be created under your current folder.


### Kubernetes Cluster Start Up

#### Create the instances defined in the CloudFormation template

Now let's create your cluster (choose any `PREFIX` for the command below):

```
kube-aws up --s3-uri s3://BUCKET_NAME/PREFIX
```

`BUCKET_NAME`: the bucket name that you used in [S3 bucket](#s3-bucket)


#### Configure DNS

You can invoke `kube-aws status` to get the cluster API endpoint after cluster creation.

```
$ kube-aws status
Cluster Name:		paddle-cluster
Controller DNS Name:	paddle-cl-ElbAPISe-EEOI3EZPR86C-531251350.us-west-2.elb.amazonaws.com
```

If you own a DNS name, set an A record to any of the IP addresses found in [Find IP address](#find-ip-address) below. __Or__ you can set up a CNAME record pointing to the `Controller DNS Name` (`paddle-cl-ElbAPISe-EEOI3EZPR86C-531251350.us-west-2.elb.amazonaws.com`).

##### Find IP address

Use the `dig` command on the load balancer hostname to get the IP addresses.

```
$ dig paddle-cl-ElbAPISe-EEOI3EZPR86C-531251350.us-west-2.elb.amazonaws.com

;; QUESTION SECTION:
;paddle-cl-ElbAPISe-EEOI3EZPR86C-531251350.us-west-2.elb.amazonaws.com. IN A

;; ANSWER SECTION:
paddle-cl-ElbAPISe-EEOI3EZPR86C-531251350.us-west-2.elb.amazonaws.com. 59 IN A 54.241.164.52
paddle-cl-ElbAPISe-EEOI3EZPR86C-531251350.us-west-2.elb.amazonaws.com. 59 IN A 54.67.102.112
```

In the above output, both IPs `54.241.164.52` and `54.67.102.112` will work.
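If you only need the addresses (for example, in a script), `dig +short` prints just the A records:

```
dig +short paddle-cl-ElbAPISe-EEOI3EZPR86C-531251350.us-west-2.elb.amazonaws.com
```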


#### Access the cluster

Once the API server is running, you should see:

```
$ kubectl --kubeconfig=kubeconfig get nodes 
NAME                                       STATUS    AGE
ip-10-0-0-134.us-west-2.compute.internal   Ready     6m
ip-10-0-0-238.us-west-2.compute.internal   Ready     6m
ip-10-0-0-50.us-west-2.compute.internal    Ready     6m
ip-10-0-0-55.us-west-2.compute.internal    Ready     6m
```


### Setup Elastic File System for Cluster

Training data is usually served on a distributed filesystem; we use Elastic File System (EFS) on AWS.

1. Create security group for EFS in [security group console](https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#SecurityGroups:sort=groupId)
  1. Look up security group id for `paddle-cluster-sg-worker` (`sg-055ee37d` in the image below)
  <center>![](src/worker_security_group.png)</center>
  2. Add a security group `paddle-efs` with an `ALL TCP` inbound rule whose custom source is the group ID of `paddle-cluster-sg-worker`, and with the VPC set to `paddle-cluster-vpc`. Make sure the availability zone is the same as the one you used in [Initialize Assets](#initialize-assets).
  <center>![](src/add_security_group.png)</center>

2. Create the Elastic File System in the [EFS console](https://us-west-2.console.aws.amazon.com/efs/home?region=us-west-2#/wizard/1) with the `paddle-cluster-vpc` VPC. Make sure the subnet is `paddle-cluster-Subnet0` and the security group is `paddle-efs`.
<center>![](src/create_efs.png)</center>


### Start PaddlePaddle Training Demo on AWS

#### Configure Kubernetes Volume that Points to EFS

First we need to create a [PersistentVolume](https://kubernetes.io/docs/user-guide/persistent-volumes/) to provision the EFS volume.

Save the following snippet as `pv.yaml`:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efsvol
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: EFS_DNS_NAME
    path: "/"
```

`EFS_DNS_NAME`: the DNS name as shown in the description of the EFS volume that we created; it looks similar to `fs-2cbf7385.efs.us-west-2.amazonaws.com`.
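If you prefer the command line, you can look up the file system ID with the AWS CLI; the DNS name then follows the `<FileSystemId>.efs.<region>.amazonaws.com` pattern:

```
# List EFS file systems in us-west-2; pick the one we just created.
aws efs describe-file-systems --region us-west-2 \
  --query 'FileSystems[].[FileSystemId,Name]' --output table
```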

Run the following command to create the persistent volume:
```
kubectl --kubeconfig=kubeconfig create -f pv.yaml
```

Next let's create a [PersistentVolumeClaim](https://kubernetes.io/docs/user-guide/persistent-volumes/) to claim the persistent volume.

Save the following snippet as `pvc.yaml`:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efsvol
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```

Run the following command to create the persistent volume claim:
```
kubectl --kubeconfig=kubeconfig create -f pvc.yaml
```
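Optionally, confirm that the claim is bound to the volume before moving on (the `efsvol` claim should show a `Bound` status):

```
kubectl --kubeconfig=kubeconfig get pv,pvc
```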

#### Prepare Training Data

We will now launch a Kubernetes job that downloads, saves, and evenly splits the training data into 3 shards on the persistent volume that we just created.

Save the following snippet as `paddle-data-job.yaml`:
```
apiVersion: batch/v1
kind: Job
metadata:
  name: paddle-data
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: paddle-data
        image: paddledev/paddle-tutorial:k8s_data
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/efs"
          name: efs
        env:
        - name: OUT_DIR
          value: /efs/paddle-cluster-job
        - name: SPLIT_COUNT
          value: "3"
      volumes:
        - name: efs
          persistentVolumeClaim:
            claimName: efsvol
      restartPolicy: Never
```

Run the following command to launch the job:
```
kubectl --kubeconfig=kubeconfig create -f paddle-data-job.yaml
```

The job may take about 7 minutes to finish. Use the following command to check the job status, and do not proceed until `SUCCESSFUL` for the `paddle-data` job is `1`:
```
$ kubectl --kubeconfig=kubeconfig get jobs
NAME          DESIRED   SUCCESSFUL   AGE
paddle-data   1         1            6m
```

Data preparation is done by the docker image `paddledev/paddle-tutorial:k8s_data`; see [here](src/k8s_data/README.md) for the source code and how to build this docker image.

#### Start Training

Now we are ready to start the PaddlePaddle training job. Save the following snippet as `paddle-cluster-job.yaml`:
```
apiVersion: batch/v1
kind: Job
metadata:
  name: paddle-cluster-job
spec:
  parallelism: 3
  completions: 3
  template:
    metadata:
      name: paddle-cluster-job
    spec:
      volumes:
      - name: efs
        persistentVolumeClaim:
          claimName: efsvol
      containers:
      - name: trainer
        image: paddledev/paddle-tutorial:k8s_train
        command: ["bin/bash",  "-c", "/root/start.sh"]
        env:
        - name: JOB_NAME
          value: paddle-cluster-job
        - name: JOB_PATH
          value: /home/jobpath
        - name: JOB_NAMESPACE
          value: default
        - name: TRAIN_CONFIG_DIR
          value: quick_start
        - name: CONF_PADDLE_NIC
          value: eth0
        - name: CONF_PADDLE_PORT
          value: "7164"
        - name: CONF_PADDLE_PORTS_NUM
          value: "2"
        - name: CONF_PADDLE_PORTS_NUM_SPARSE
          value: "2"
        - name: CONF_PADDLE_GRADIENT_NUM
          value: "3"
        - name: TRAINER_COUNT
          value: "3"
        volumeMounts:
        - mountPath: "/home/jobpath"
          name: efs
        ports:
        - name: jobport0
          hostPort: 7164
          containerPort: 7164
        - name: jobport1
          hostPort: 7165
          containerPort: 7165
        - name: jobport2
          hostPort: 7166
          containerPort: 7166
        - name: jobport3
          hostPort: 7167
          containerPort: 7167
      restartPolicy: Never
```

`parallelism: 3, completions: 3` means this job will simultaneously start 3 PaddlePaddle pods, and the job will be finished when all 3 pods have completed.

The `env` field declares the container's environment variables; we specify PaddlePaddle parameters through them.

`ports` indicates that TCP ports 7164 - 7167 are exposed for communication between `pserver` and trainer. The ports run contiguously from `CONF_PADDLE_PORT` (7164) to `CONF_PADDLE_PORT + CONF_PADDLE_PORTS_NUM + CONF_PADDLE_PORTS_NUM_SPARSE - 1` (7167). We use multiple ports for dense and sparse parameter updates to improve latency.

Run the following command to launch the job:
```
kubectl --kubeconfig=kubeconfig create -f paddle-cluster-job.yaml
```

Inspect the individual pods:

```
$ kubectl --kubeconfig=kubeconfig get pods
NAME                       READY     STATUS    RESTARTS   AGE
paddle-cluster-job-cm469   1/1       Running   0          9m
paddle-cluster-job-fnt03   1/1       Running   0          9m
paddle-cluster-job-jx4xr   1/1       Running   0          9m
```

Inspect an individual pod's console output:
```
kubectl --kubeconfig=kubeconfig logs -f POD_NAME
```

`POD_NAME`: name of any pod (e.g., `paddle-cluster-job-cm469`).

Run `kubectl --kubeconfig=kubeconfig describe job paddle-cluster-job` to check the training job status. It will complete in around 20 minutes.

The details of starting `pserver` and `trainer` are hidden inside the docker image `paddledev/paddle-tutorial:k8s_train`; see [here](src/k8s_train/README.md) for the source code and how to build the docker image.

#### Inspect Training Output

Training output (model snapshots and logs) will be saved to EFS. We can SSH into a worker EC2 instance, mount EFS, and check the training output.

1. SSH into a worker EC2 instance:
```
chmod 400 key-name.pem
ssh -i key-name.pem core@INSTANCE_IP
597 598
```

`INSTANCE_IP`: the public IP address of an EC2 Kubernetes worker node. Go to the [EC2 console](https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#Instances:sort=instanceId) and check the `public IP` of any `paddle-cluster-kube-aws-worker` instance, or query it from the command line as sketched below.
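A sketch of the command-line lookup (the `Name` tag pattern is an assumption based on the instance names shown in the console):

```
# List public IPs of the worker instances.
aws ec2 describe-instances --region us-west-2 \
  --filters "Name=tag:Name,Values=*kube-aws-worker*" \
  --query 'Reservations[].Instances[].PublicIpAddress' --output text
```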

2. Mount EFS
```
mkdir efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 EFS_DNS_NAME:/ efs
```

`EFS_DNS_NAME`: the DNS name as shown in the description of the EFS volume that we created; it looks similar to `fs-2cbf7385.efs.us-west-2.amazonaws.com`.

Now the folder `efs` will have a structure similar to:
```
-- paddle-cluster-job
    |-- ...
    |-- output
    |   |-- node_0
    |   |   |-- server.log
    |   |   `-- train.log
    |   |-- node_1
    |   |   |-- server.log
    |   |   `-- train.log
    |   |-- node_2
    |   |   |-- server.log
    |   |   `-- train.log
    |   |-- pass-00000
    |   |   |-- ___fc_layer_0__.w0
    |   |   |-- ___fc_layer_0__.wbias
    |   |   |-- done
    |   |   |-- path.txt
    |   |   `-- trainer_config.lr.py
    |   |-- pass-00001...
```
`server.log` contains the log for `pserver`. `train.log` contains the log for `trainer`. Model description and snapshots are stored in `pass-0000*`. (Caution: the `node_0`, `node_1`, and `node_2` directories represent PaddlePaddle nodes and the `trainer_id`, not Kubernetes nodes.)
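For example, to follow a trainer's progress directly from the mounted volume:

```
tail -f efs/paddle-cluster-job/output/node_0/train.log
```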

### Kubernetes Cluster Tear Down

#### Delete EFS

Go to the [EFS Console](https://us-west-2.console.aws.amazon.com/efs/home?region=us-west-2) and delete the EFS volume that we created.

#### Delete security group

Go to [Security Group Console](https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#SecurityGroups:sort=groupId) and delete security group `paddle-efs`.


#### Delete S3 Bucket

Go to [S3 Console](https://console.aws.amazon.com/s3/home?region=us-west-2#) and delete the S3 bucket that we created.

#### Destroy Cluster

```
kube-aws destroy
```

The command will return immediately, but it might take about 5 minutes to tear down the whole cluster.

You can go to the [CloudFormation Console](https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks?filter=active) to check the destroy process.