---
## Run rune containers in Kubernetes cluster
Please refer to [this guide](docs/develop_and_deploy_hello_world_application_in_kubernetes_cluster.md) to develop and deploy a rune container in a Kubernetes cluster.
---
## Reference container image
[The reference container images](https://hub.docker.com/u/inclavarecontainers) are available for demonstration purposes to show how a Confidential Computing Kubernetes cluster with Inclavare Containers works. Currently, web application demos based on OpenJDK 11 and Golang are provided.
# Create a confidential computing Kubernetes cluster with inclavare-containers
This page shows how to create a single control-plane Kubernetes cluster and install the software required to run rune containers with Occlum in a Kubernetes cluster.
## Before you begin
- A machine with Intel SGX hardware support.
- Make sure you have one of the following operating systems:
	- Ubuntu 18.04 server 64-bit
	- CentOS 7.5 64-bit
- Download the packages or binaries corresponding to your operating system from the [releases page](https://github.com/alibaba/inclavare-containers/releases).
**Note:** The SGX SDK and PSW installers for the Ubuntu operating system are available from [Intel](https://download.01.org/intel-sgx/sgx-linux/2.9.1/distro/ubuntu18.04-server/).
## Objectives
- Install the Intel SGX software stack.
- Install kernel module enable-rdfsbase and occlum-pal for Occlum.
- Create a single control-plane Kubernetes cluster for running rune containers with Occlum.
## Instructions
### 1. Install Linux SGX software stack
The Linux SGX software stack consists of the Intel SGX driver, the Intel SGX SDK, and the Intel SGX PSW.
- Step 1. Build and install the Intel SGX driver
Please refer to the [documentation](https://github.com/intel/linux-sgx-driver#build-and-install-the-intelr-sgx-driver) to build and install the Intel SGX driver. It is recommended to use a version equal to or greater than `sgx_driver_2.5`.
- Step 2. Install the Intel SGX SDK and PSW
Please refer to the [documentation](https://github.com/alibaba/inclavare-containers/blob/master/docs/running_rune_with_occlum.md#install-inclavare-containers-binary) to install the SGX SDK and SGX PSW.
- Step 3. Check the aesmd daemon status
Make sure the aesmd daemon is started and running. The expected result is as follows:
```
$ systemctl status aesmd.service
● aesmd.service - Intel(R) Architectural Enclave Service Manager
```
### 2. Install kernel module enable-rdfsbase and occlum-pal for Occlum
[Occlum](https://github.com/occlum/occlum) is currently the only enclave runtime supported by shim-rune. `enable-rdfsbase` and `occlum-pal` are used by Occlum.<br/>
`enable-rdfsbase` is a Linux kernel module that enables the RDFSBASE, WRFSBASE, RDGSBASE, and WRGSBASE instructions on x86.
`occlum-pal` is used to interface with the OCI runtime `rune`, allowing it to invoke Occlum through the well-defined [Enclave Runtime PAL API v2](https://github.com/alibaba/inclavare-containers/blob/master/rune/libenclave/internal/runtime/pal/spec_v2.md).
- Step 1. Install kernel module enable-rdfsbase
Please follow the [documentation](https://github.com/occlum/enable_rdfsbase) to install `enable-rdfsbase`.
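After installation, you can check that the module is actually loaded. Note that the module name uses underscores (`enable_rdfsbase`), matching the upstream repository:

```shell
# Verify the kernel module is loaded; prints a matching line if present
lsmod | grep enable_rdfsbase
```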
### 3. Install runc and rune
`runc` and `rune` are CLI tools for spawning and running containers according to the OCI specification. The codebase of `rune` is a fork of [runc](https://github.com/opencontainers/runc), so `rune` can be used as `runc` if no enclave is configured or available. The difference is that `rune` can run a so-called enclave, a protected execution environment that prevents untrusted entities from accessing the sensitive and confidential assets in use in containers.<br/>
<br/>
- Step 1. Download the `runc` binary and save it to `/usr/bin/runc`
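The download command itself is not shown above; a sketch follows, where the release version is an assumption and should be checked against the [runc releases page](https://github.com/opencontainers/runc/releases):

```shell
# version is an assumption; pick the release matching your setup
version=v1.0.0-rc92
sudo curl -fsSL -o /usr/bin/runc \
  "https://github.com/opencontainers/runc/releases/download/${version}/runc.amd64"
sudo chmod +x /usr/bin/runc
```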
### 4. Install shim-rune
`shim-rune` resides between `containerd` and `rune`, conducting enclave signing and management beyond the normal `shim` basis. Together, `shim-rune` and `rune` compose a basic enclave containerization stack for the cloud-native ecosystem.
- On CentOS
```bash
version=0.3.0-1
sudo rpm -ivh shim-rune-${version}.el7.x86_64.rpm
```
- On Ubuntu
```bash
version=0.3.0-1
sudo dpkg -i shim-rune_${version}_amd64.deb
```
### 5. Install and configure containerd
containerd is an industry-standard container runtime with an emphasis on simplicity, robustness and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc.<br/>You can download one of the containerd binaries on the [Download](https://containerd.io/downloads/) page.
- Step 1. Download and install containerd-1.3.4 as follows:
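The commands are not shown above; a sketch using the official release tarball (the URL pattern assumed from the containerd releases page) might look like:

```shell
# Download the 1.3.4 release tarball and unpack it under /usr/local
version=1.3.4
curl -fsSL -O "https://github.com/containerd/containerd/releases/download/v${version}/containerd-${version}.linux-amd64.tar.gz"
sudo tar -C /usr/local -xzf "containerd-${version}.linux-amd64.tar.gz"
```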
- Step 4. Enable and restart the containerd.service
```bash
sudo systemctl enable containerd.service
sudo systemctl restart containerd.service
```
- Step 5. Download the Occlum SDK image (Optional)
It is recommended to download the Occlum SDK image in advance. The image is configured in the field `enclave_runtime.occlum.build_image` in `/etc/inclavare-containers/config.toml` and is used when creating pods; downloading it in advance saves container launch time. <br />Run the following command to download the Occlum SDK image:
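The exact image reference depends on your `config.toml`; assuming it is the same `docker.io/occlum/occlum:0.14.0-centos7.5` image used later in this guide, a pull through containerd might look like:

```shell
# Pull into the k8s.io namespace so the kubelet (via containerd) can reuse it
sudo ctr -n k8s.io images pull docker.io/occlum/occlum:0.14.0-centos7.5
```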
### 6. Create a single control-plane Kubernetes cluster with kubeadm
- Step 1. Set the kernel parameters
Make sure that the `br_netfilter` module is loaded and both `net.bridge.bridge-nf-call-iptables` and `net.ipv4.ip_forward` are set to 1 in your sysctl config.
```bash
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```
- Step 2. Configure the Kubernetes package repository for downloading kubelet, kubeadm and kubectl
- Step 3. Set SELinux in permissive mode and install kubelet, kubeadm and kubectl of version v1.16.9. You can choose other versions, but it is recommended to use versions greater than or equal to v1.16.
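On CentOS, for example, this step might look like the following (the repository configuration from the previous step is assumed to be in place):

```shell
# Put SELinux in permissive mode now, and persist the change across reboots
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Install the pinned versions and start the kubelet
sudo yum install -y kubelet-1.16.9 kubeadm-1.16.9 kubectl-1.16.9 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```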
- Step 4. Configure the kubelet configuration file
Configure the kubelet configuration file `10-kubeadm.conf`, specifying containerd as the runtime via the arguments `--container-runtime=remote` and `--container-runtime-endpoint`.
- Step 6. Initialize the Kubernetes cluster with kubeadm
The version of Kubernetes must match the kubelet version. You can specify the Kubernetes Pod and Service CIDR blocks with the arguments `pod-network-cidr` and `service-cidr`; make sure the CIDRs do not conflict with the host IP address. For example, if the host IP address is `192.168.1.100`, you can initialize the cluster as follows:
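A sketch of the init command; the CIDRs below are arbitrary examples chosen only to avoid the `192.168.1.100` host address, and should be adjusted to your network:

```shell
sudo kubeadm init \
  --kubernetes-version=v1.16.9 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12
```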
# Develop and deploy a "Hello World" container in Kubernetes cluster
This page shows how to develop a "Hello World" application, build a "Hello World" image and run a "Hello World" container in a Kubernetes cluster.
## Before you begin
- You need to have a Kubernetes cluster and the nodes' hardware in the cluster must support Intel SGX. If you do not already have a cluster, you can create one following the documentation [Create a confidential computing Kubernetes cluster with inclavare-containers](create_a_confidential_computing_kubernetes_cluster_with_inclavare_containers.md).
- Make sure you have one of the following operating systems:
	- Ubuntu 18.04 server 64-bit
	- CentOS 7.5 64-bit
## Objectives
- Develop a "Hello World" occlum application in an occlum SDK container.
- Build a "Hello World" image from the application.
- Run the "Hello World" Pod in Kubernetes cluster.
## Instructions
### 1. Create a Pod with occlum SDK image
Occlum supports running executable binaries that are based on [musl libc](https://www.musl-libc.org/); it does not support glibc. A good way to develop Occlum applications is inside an Occlum SDK container.
You can choose a suitable Occlum SDK image from the list on [this page](https://hub.docker.com/r/occlum/occlum/tags); the version of the Occlum SDK image must be the same as the Occlum version listed on the release page.
- Step 1. Apply the following yaml file
```yaml
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: occlum-app-builder
  name: occlum-app-builder
  namespace: default
spec:
  hostNetwork: true
  containers:
  - command:
    - sleep
    - infinity
    image: docker.io/occlum/occlum:0.14.0-centos7.5
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
    name: occlum-app-builder
EOF
```
This creates a Pod with the image `docker.io/occlum/occlum:0.14.0-centos7.5`. The field `securityContext.privileged` should be set to `true` in order to build and push Docker images inside the container.<br/>
- Step 2. Wait for the pod status to be `Ready`
It takes about one minute to create the pod; check and wait until the pod status is `Ready`. Run the command `kubectl get pod occlum-app-builder`; the output looks like this:
Install docker following the [documentation](https://docs.docker.com/engine/install/centos/). Note that `systemd` is not installed in the container by default, so you can't manage the docker service with `systemd`.
- Step 5. Start the docker service by the following command:
```bash
nohup dockerd -b docker0 --storage-driver=vfs &
```
- Step 6. Make sure the docker service started
Run command `docker ps`, the output should be like this:
```
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
```
### 2. Develop the "Hello World" application in the container
If you were to write an SGX Hello World project using some SGX SDK, the project would consist of hundreds of lines of code, and you would have to spend a great deal of time learning the APIs, the programming model, and the build system of the SGX SDK.<br/>Thanks to Occlum, you are freed from writing any extra SGX-aware code and only need to type a few simple commands to protect your application with SGX transparently.
- Step 1. Create a working directory in the container
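The command is not shown above; the directory path is taken from the heredoc in the next step:

```shell
# Create the working directory used by the following steps
mkdir -p /root/occlum_workspace
cd /root/occlum_workspace
```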
- Step 2. Write the "Hello World" code in C language:
```c
cat << EOF > /root/occlum_workspace/hello_world.c
#include <stdio.h>
#include <unistd.h>

int main() {
    while (1) {
        printf("Hello World!\n");
        fflush(stdout);
        sleep(5);
    }
}
EOF
```
- Step 3. Compile the user program with the Occlum toolchain (e.g., `occlum-gcc`)
```bash
occlum-gcc -o hello_world hello_world.c
```
- Step 4. Initialize a directory as the Occlum context via `occlum init`
```bash
mkdir occlum_context && cd occlum_context
occlum init
```
The `occlum init` command creates in the current working directory a new directory named `.occlum`, which contains the compile-time and run-time state of Occlum. Each Occlum context should be used for a single instance of an application; multiple applications or different instances of a single application should use different Occlum contexts.<br/>
- Step 5. Generate a secure Occlum FS image and Occlum SGX enclave via `occlum build`
```bash
cp ../hello_world image/bin/
occlum build
```
The content of the `image` directory is initialized by the `occlum init` command. The structure of the `image` directory mimics that of an ordinary UNIX FS, containing directories like `/bin`, `/lib`, `/root`, `/tmp`, etc. After copying the user program `hello_world` into `image/bin/`, the `image` directory is packaged by the `occlum build` command to generate a secure Occlum FS image as well as the Occlum SGX enclave.
- Step 6. Run the user program inside an SGX enclave via `occlum run`
```bash
occlum run /bin/hello_world
```
The `occlum run` command starts up an Occlum SGX enclave, which, behind the scenes, verifies and loads the associated Occlum FS image, spawns a new LibOS process to execute `/bin/hello_world`, and eventually prints the message.
### 3. Build the "Hello World" image
- Step 1. Write the Dockerfile
```dockerfile
cat << EOF > Dockerfile
FROM scratch
ADD image /
ENTRYPOINT ["/bin/hello_world"]
EOF
```
It is recommended to use scratch as the base image. The scratch image is an empty image; it keeps the Docker image size small, which means a much smaller Trusted Computing Base (TCB) and attack surface. `ADD image /` adds the occlum image directory into the root directory of the Docker image, and `ENTRYPOINT ["/bin/hello_world"]` sets the command `/bin/hello_world` as the container entry point.
- Step 2. Build and push the "Hello World" image to your docker registry
Build and push the image to your Docker registry. For example, if you create a Docker repository named `occlum-hello-world` in the namespace `inclavarecontainers`, you can push the image to `docker.io/inclavarecontainers/occlum-hello-world:scratch`.
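Using the example repository name from the text, the commands might be:

```shell
# Repository name is the example from the text; replace with your own registry
docker build -t docker.io/inclavarecontainers/occlum-hello-world:scratch .
docker push docker.io/inclavarecontainers/occlum-hello-world:scratch
```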
### 4. Run the "Hello World" Pod in Kubernetes cluster
**Note**: The field `runtimeClassName` should be set to `rune`, which means the container will be handled by `rune`. Specify the environment variable `RUNE_CARRIER` as `occlum` to tell `shim-rune` to create and run an occlum application.<br />
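The manifest applied in this step is not shown above; a minimal sketch, assuming the image pushed in the previous section and a `rune` RuntimeClass already present in the cluster, might look like:

```yaml
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: helloworld
spec:
  runtimeClassName: rune
  containers:
  - name: helloworld
    image: docker.io/inclavarecontainers/occlum-hello-world:scratch
    imagePullPolicy: IfNotPresent
    env:
    - name: RUNE_CARRIER
      value: occlum
EOF
```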
<br/>You can also configure enclave through these environment variables:
| Environment Variable Name | Default Value |
| --- | --- |
| OCCLUM_USER_SPACE_SIZE | 256MB |
| OCCLUM_KERNEL_SPACE_HEAP_SIZE | 32MB |
| OCCLUM_KERNEL_SPACE_STACK_SIZE | 1MB |
| OCCLUM_MAX_NUM_OF_THREADS | 32 |
| OCCLUM_PROCESS_DEFAULT_STACK_SIZE | 4MB |
| OCCLUM_PROCESS_DEFAULT_HEAP_SIZE | 32MB |
| OCCLUM_PROCESS_DEFAULT_MMAP_SIZE | 80MB |
| OCCLUM_DEFAULT_ENV | OCCLUM=yes |
| OCCLUM_UNTRUSTED_ENV | EXAMPLE |
- Step 2. Wait for the pod status to be `Ready`
```bash
kubectl get pod helloworld
```
- Step 3. Print the container's logs via `kubectl logs`
Execute the command `kubectl logs -f helloworld`; the line "Hello World!" is printed to the terminal every 5 seconds. The output looks like this:
```
$ kubectl logs -f helloworld
Hello World!
Hello World!
Hello World!
```
## Cleanup
Use the following commands to delete the two pods `helloworld` and `occlum-app-builder`:
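For example:

```shell
# Delete the demo pod and the builder pod created earlier in this guide
kubectl delete pod helloworld
kubectl delete pod occlum-app-builder
```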