You can provide the following additional command line arguments to the cluster entrypoint:
- `--job-classname <job class name>`: Class name of the job to run. By default, the Flink class path is scanned for a JAR with a `Main-Class` or `program-class` manifest entry, which is then chosen as the job class. Use this command line argument to set the job class manually. This argument is required if no JAR, or more than one JAR, with such a manifest entry is available on the class path.
- `--job-id <job id>`: Manually set a Flink job ID for the job (default: `00000000000000000000000000000000`).
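For illustration, these arguments would be appended to the container `args` in [job-cluster-job.yaml.template](job-cluster-job.yaml.template). This is a minimal sketch; the class name and job ID below are hypothetical placeholders, and the template may pass additional arguments:

```yaml
# Excerpt from the job template: "job-cluster" selects the entrypoint mode,
# the remaining arguments are forwarded to the cluster entrypoint.
args: ["job-cluster",
       "--job-classname", "com.example.WordCount",            # hypothetical job class
       "--job-id", "00000000000000000000000000000001"]        # optional, manually chosen
```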
## Resuming from a savepoint
To resume from a savepoint, pass the savepoint path to the cluster entrypoint.
This can be achieved by adding `"--fromSavepoint", "<SAVEPOINT_PATH>"` to the `args` field in the [job-cluster-job.yaml.template](job-cluster-job.yaml.template).
Note that `<SAVEPOINT_PATH>` needs to be accessible from the `job-cluster-job` pod (e.g. adding it to the image or storing it on a DFS).
Additionally, you can specify `"--allowNonRestoredState"` to skip any savepoint state that cannot be restored.
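A sketch of the corresponding `args` entry, assuming the savepoint is stored on a path reachable from the pod (the path below is a hypothetical example, not a value from this setup):

```yaml
# Excerpt from job-cluster-job.yaml.template: resume from a savepoint and
# tolerate state that no longer maps to any operator in the job.
args: ["job-cluster",
       "--fromSavepoint", "file:///savepoints/savepoint-123abc",   # hypothetical path
       "--allowNonRestoredState"]
```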
## Interact with Flink job cluster
After starting the job cluster service, the web UI will be available under `<NODE_IP>:30081`.
In the case of Minikube, `<NODE_IP>` equals `minikube ip`.
You can then use the Flink client to send Flink commands to the cluster:
`bin/flink list -m <NODE_IP>:30081`
## Terminate Flink job cluster
The job cluster entry point pod is part of the Kubernetes job and terminates once the Flink job reaches a globally terminal state.
Alternatively, you can stop the job manually:
`kubectl delete job flink-job-cluster`
The task manager pods are part of the task manager deployment and need to be deleted manually by calling
`kubectl delete deployment flink-task-manager`
Last but not least, you should also stop the job cluster service:
`kubectl delete service flink-job-cluster`