[FLINK-9823] Add Kubernetes deployment ymls

The Kubernetes files contain a job-cluster service specification, a job specification
for the StandaloneJobClusterEntryPoint and a deployment for TaskManagers.

This closes #6320.
# Apache Flink job cluster deployment on Kubernetes
## Build container image using Docker
In order to deploy a job cluster on Kubernetes, you first need to build a Docker image containing Flink and the user code jar.
Please follow the instructions you can find [here](../docker/README.md) to build a job container image.
## Deploy Flink job cluster
This directory contains a predefined K8s service and two template files for the job cluster entry point and the task managers.
The K8s service is used to let the cluster pods find each other.
If you start the Flink cluster in high availability (HA) mode, the service is not necessary, because the HA implementation is used to detect leaders.
In order to use the template files, please replace the `${VARIABLES}` in the file with concrete values.
The files contain the following variables:
- `${FLINK_IMAGE_NAME}`: Name of the image to use for the container
- `${FLINK_JOB}`: Name of the Flink job to start (the user code jar must be included in the container image)
- `${FLINK_JOB_PARALLELISM}`: Degree of parallelism with which to start the Flink job; this also determines the number of required task managers
One way to substitute the variables is to use `envsubst`.
See [here](https://stackoverflow.com/a/23622446/4815083) for a guide to install it on Mac OS X.
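As a minimal sketch (package names are assumptions about your environment), `envsubst` is shipped with GNU gettext and can typically be installed like this:

```sh
# macOS (Homebrew): envsubst is part of the gettext formula,
# which is keg-only and may need to be force-linked onto the PATH.
brew install gettext
brew link --force gettext

# Debian/Ubuntu: envsubst is packaged in gettext-base.
sudo apt-get install gettext-base
```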
In non-HA mode, you should first start the job cluster service:
`kubectl create -f job-cluster-service.yaml`
In order to deploy the job cluster entry point, run:
`FLINK_IMAGE_NAME=<job-image> FLINK_JOB=<job-name> FLINK_JOB_PARALLELISM=<parallelism> envsubst < job-cluster-job.yaml.template | kubectl create -f -`
You should now see the `flink-job-cluster` job when calling `kubectl get job`.
Finally, start the task manager deployment:
`FLINK_IMAGE_NAME=<job-image> FLINK_JOB_PARALLELISM=<parallelism> envsubst < task-manager-deployment.yaml.template | kubectl create -f -`
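For illustration, a complete deployment might look like the following sketch; the image name, job class, and parallelism below are placeholder values, not part of this repository:

```sh
# Hypothetical values for this sketch; substitute your own image, job class and parallelism.
export FLINK_IMAGE_NAME=flink-job:latest
export FLINK_JOB=org.apache.flink.streaming.examples.wordcount.WordCount
export FLINK_JOB_PARALLELISM=2

# Create the service, the job cluster entry point and the task managers.
kubectl create -f job-cluster-service.yaml
envsubst < job-cluster-job.yaml.template | kubectl create -f -
envsubst < task-manager-deployment.yaml.template | kubectl create -f -

# Verify that the job and the Flink pods are running.
kubectl get job flink-job-cluster
kubectl get pods -l app=flink
```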
## Interact with Flink job cluster
After starting the job cluster service, the web UI will be available at `<NodeIP>:30081`.
You can then use the Flink client to send Flink commands to the cluster:
`bin/flink list -m <NodeIP>:30081`
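If you do not know a node IP yet, something along the following lines should work; the jsonpath query is a sketch that assumes the node reports a reachable `InternalIP` address:

```sh
# Pick the first node's internal IP address (assumption: it is reachable from your machine).
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

# List the running jobs and, if needed, cancel one by its job ID.
bin/flink list -m ${NODE_IP}:30081
bin/flink cancel -m ${NODE_IP}:30081 <jobID>
```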
## Terminate Flink job cluster
The job cluster entry point pod is part of the Kubernetes job and terminates once the Flink job reaches a globally terminal state.
Alternatively, you can stop the job manually:
`kubectl delete job flink-job-cluster`
The task manager pods are part of the task manager deployment and need to be deleted manually by calling
`kubectl delete deployment flink-task-manager`
Last but not least, you should also stop the job cluster service:
`kubectl delete service flink-job-cluster`
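As a convenience, the three resources can also be removed in a single call; this is just a sketch using the resource names from this README:

```sh
# Delete the entry point job, the task manager deployment and the service in one go.
kubectl delete job/flink-job-cluster deployment/flink-task-manager service/flink-job-cluster
```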
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
apiVersion: batch/v1
kind: Job
metadata:
  name: flink-job-cluster
spec:
  template:
    metadata:
      labels:
        app: flink
        component: job-cluster
    spec:
      restartPolicy: OnFailure
      containers:
      - name: flink-job-cluster
        image: ${FLINK_IMAGE_NAME}
        args: ["job-cluster", "--job-classname", "${FLINK_JOB}", "-Djobmanager.rpc.address=flink-job-cluster",
               "-Dparallelism.default=${FLINK_JOB_PARALLELISM}", "-Dblob.server.port=6124", "-Dquery.server.ports=6125"]
        ports:
        - containerPort: 6123
          name: rpc
        - containerPort: 6124
          name: blob
        - containerPort: 6125
          name: query
        - containerPort: 8081
          name: ui
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
apiVersion: v1
kind: Service
metadata:
  name: flink-job-cluster
  labels:
    app: flink
    component: job-cluster
spec:
  ports:
  - name: rpc
    port: 6123
  - name: blob
    port: 6124
  - name: query
    port: 6125
    nodePort: 30025
  - name: ui
    port: 8081
    nodePort: 30081
  type: NodePort
  selector:
    app: flink
    component: job-cluster
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flink-task-manager
spec:
  replicas: ${FLINK_JOB_PARALLELISM}
  template:
    metadata:
      labels:
        app: flink
        component: task-manager
    spec:
      containers:
      - name: flink-task-manager
        image: ${FLINK_IMAGE_NAME}
        args: ["task-manager", "-Djobmanager.rpc.address=flink-job-cluster"]