Commit 06c82238 authored by J jiazhiguang


update the shim README.md and add two articles to introduce how to create and run confidential computing container
Parent 4ca4d3ca
@@ -63,3 +63,6 @@ For more information about Enclave Runtime PAL API, please refer to [Enclave Run
### Run OCI bundle
Please refer to [this guide](https://github.com/alibaba/inclavare-containers/blob/master/docs/running_rune_with_occlum_bundle.md) to run `occlum bundle` with `rune`.
## Run rune containers in Kubernetes cluster
Please refer to [this guide](docs/develop_and_deploy_hello_world_application_in_kubernetes_cluster.md) to develop and deploy a rune container in a Kubernetes cluster.
\ No newline at end of file
# Create a confidential computing Kubernetes cluster with inclavare-containers
This page shows how to create a single control-plane Kubernetes and install the software required to run rune containers with Occlum in a Kubernetes cluster.
## Before you begin
- A machine with Intel SGX hardware support.
- Make sure you have one of the following operating systems:
- Ubuntu 18.04 server 64bits
- CentOS 7.5 64bits
- Download the packages or binaries corresponding to your operating system from the [releases page](https://github.com/alibaba/inclavare-containers/releases).
| Module Name | CentOS | Ubuntu |
| --- | --- | --- |
| occlum-pal | occlum-pal-${version}.el7.x86_64.rpm | occlum-pal_${version}_amd64.deb |
| shim-rune | shim-rune-${version}.el7.x86_64.rpm | shim-rune_${version}_amd64.deb |
| rune | rune-${version}.el7.x86_64.rpm | rune_${version}_amd64.deb |
| SGX SDK | sgx_linux_x64_sdk.bin | - |
| SGX PSW | sgx_linux_x64_psw.bin | - |
**Note:** The SGX SDK and PSW installers on Ubuntu operating system are available from [Intel](https://download.01.org/intel-sgx/sgx-linux/2.9.1/distro/ubuntu18.04-server/).
## Objectives
- Install the Intel SGX software stack.
- Install the kernel module `enable_rdfsbase` and `occlum-pal` for Occlum.
- Create a single control-plane Kubernetes cluster for running rune containers with Occlum.
## Instructions
### 1. Install Linux SGX software stack
The Linux SGX software stack comprises the Intel SGX driver, the Intel SGX SDK, and the Intel SGX PSW.
- Step 1. Build and install the Intel SGX driver
Please refer to the [documentation](https://github.com/intel/linux-sgx-driver#build-and-install-the-intelr-sgx-driver) to build and install the Intel SGX driver. It is recommended to use version `sgx_driver_2.5` or later.
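For reference, a condensed sketch of the upstream build-and-load steps (this follows the driver README, which targets Ubuntu; install the matching kernel headers package first and adjust paths for CentOS):
```bash
# Build the out-of-tree SGX driver and load it as the isgx module
git clone https://github.com/intel/linux-sgx-driver.git
cd linux-sgx-driver
make
sudo mkdir -p "/lib/modules/$(uname -r)/kernel/drivers/intel/sgx"
sudo cp isgx.ko "/lib/modules/$(uname -r)/kernel/drivers/intel/sgx"
sudo /sbin/depmod
sudo /sbin/modprobe isgx
```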
- Step 2. Install Intel SGX SDK and Intel Platform Software
Please refer to the [documentation](https://github.com/alibaba/inclavare-containers/blob/master/docs/running_rune_with_occlum.md#install-inclavare-containers-binary) to install SGX SDK and SGX PSW.
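Both installers are self-extracting binaries. A minimal sketch, assuming the installer file names from the table above (run the PSW installer first, since it sets up the aesmd service):
```bash
chmod +x sgx_linux_x64_psw.bin sgx_linux_x64_sdk.bin
sudo ./sgx_linux_x64_psw.bin
sudo ./sgx_linux_x64_sdk.bin --prefix /opt/intel
# Make the SDK toolchain visible in the current shell
source /opt/intel/sgxsdk/environment
```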
- Step 3. Check the aesmd daemon status
Make sure the aesmd daemon is started and running. The expected result is as follows:
```
$ systemctl status aesmd.service
● aesmd.service - Intel(R) Architectural Enclave Service Manager
   Loaded: loaded (/usr/lib/systemd/system/aesmd.service; enabled; vendor preset: disabled)
   Active: active (running) since 2020-07-01 22:45:10 CST; 12h ago
  Process: 30597 ExecStart=/opt/intel/sgxpsw/aesm/aesm_service (code=exited, status=0/SUCCESS)
  Process: 30590 ExecStartPre=/bin/chmod 0750 /var/opt/aesmd/ (code=exited, status=0/SUCCESS)
...
```
### 2. Install Occlum software stack
[Occlum](https://github.com/occlum/occlum) is currently the only enclave runtime supported by shim-rune. Occlum relies on `enable_rdfsbase` and `occlum-pal`.<br />
`enable_rdfsbase` is a Linux kernel module that enables the RDFSBASE, WRFSBASE, RDGSBASE, and WRGSBASE instructions on x86.
`occlum-pal` interfaces with the OCI runtime `rune`, allowing `rune` to invoke Occlum through the well-defined [Enclave Runtime PAL API v2](https://github.com/alibaba/inclavare-containers/blob/master/rune/libenclave/internal/runtime/pal/spec_v2.md).
- Step 1. Install the kernel module enable_rdfsbase
Please follow the [documentation](https://github.com/occlum/enable_rdfsbase) to install `enable_rdfsbase`.
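The installation follows the usual out-of-tree kernel module workflow; a sketch, assuming `make install` builds and loads the module as the repository documents:
```bash
git clone https://github.com/occlum/enable_rdfsbase.git
cd enable_rdfsbase
make
sudo make install
# Verify that the module is loaded
lsmod | grep rdfsbase
```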
- Step 2. Install occlum-pal
- On CentOS
```bash
version=0.14.0-1
sudo rpm -ivh occlum-pal-${version}.el7.x86_64.rpm
```
- On Ubuntu
```bash
version=0.14.0-1
sudo dpkg -i occlum-pal_${version}_amd64.deb
```
### 3. Install runc and rune
`runc` and `rune` are CLI tools for spawning and running containers according to the OCI specification. The codebase of `rune` is a fork of [runc](https://github.com/opencontainers/runc), so `rune` can be used as `runc` if no enclave is configured or available. The difference is that `rune` can run a so-called enclave, a protected execution environment that prevents untrusted entities from accessing sensitive and confidential assets in use inside containers.<br />
<br />
- Step 1. Download the `runc` binary and save it as `/usr/bin/runc`
```bash
wget https://github.com/opencontainers/runc/releases/download/v1.0.0-rc90/runc.amd64 -O /usr/bin/runc
chmod +x /usr/bin/runc
```
- Step 2. Download and install the `rune` package
- On CentOS
```bash
version=0.3.0-1
sudo yum install -y libseccomp
sudo rpm -ivh rune-${version}.el7.x86_64.rpm
```
- On Ubuntu
```bash
version=0.3.0-1
sudo dpkg -i rune_${version}_amd64.deb
```
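To verify the installation, check that `rune` is on `$PATH`; since `rune` is a fork of `runc`, it supports the same `--version` flag:
```bash
rune --version
```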
### 4. Install shim-rune
`shim-rune` resides between `containerd` and `rune`, handling enclave signing and management on top of the normal shim functionality. Together, `shim-rune` and `rune` compose a basic enclave containerization stack for the cloud-native ecosystem.
- On CentOS
```bash
version=0.3.0-1
sudo rpm -ivh shim-rune-${version}.el7.x86_64.rpm
```
- On Ubuntu
```bash
version=0.3.0-1
sudo dpkg -i shim-rune_${version}_amd64.deb
```
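By containerd's shim naming convention, the runtime type `io.containerd.rune.v2` configured in the next section resolves to a binary named `containerd-shim-rune-v2`; assuming the package installs it on `$PATH`, you can verify it with:
```bash
which containerd-shim-rune-v2
```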
### 5. Install and configure containerd
containerd is an industry-standard container runtime with an emphasis on simplicity, robustness, and portability. It is available as a daemon for Linux and Windows and can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage, network attachments, and so on.<br />You can download a containerd binary from the [Download](https://containerd.io/downloads/) page.
- Step 1. Download and install containerd-1.3.4 as follows:
```bash
curl -LO https://github.com/containerd/containerd/releases/download/v1.3.4/containerd-1.3.4.linux-amd64.tar.gz
tar -xvf containerd-1.3.4.linux-amd64.tar.gz
cp bin/* /usr/local/bin
```
- Step 2. Configure the containerd.service
You can use systemd to manage the containerd daemon, and place the `containerd.service` unit file at `/etc/systemd/system/containerd.service`.
```bash
cat << EOF >/etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
```
- Step 3. Create the containerd configuration file
The daemon also uses a configuration file located at `/etc/containerd/config.toml` for specifying daemon-level options.
```bash
mkdir /etc/containerd
cat << EOF >/etc/containerd/config.toml
[plugins]
  [plugins.cri]
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/acs/pause-amd64:3.1"
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = "/usr/bin/runc"
        runtime_root = ""
      [plugins.cri.containerd.runtimes.rune]
        runtime_type = "io.containerd.rune.v2"
EOF
```
- Step 4. Enable and restart the containerd.service
```bash
sudo systemctl enable containerd.service
sudo systemctl restart containerd.service
```
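You can verify that the daemon came up, for example:
```bash
sudo systemctl status containerd.service --no-pager
ctr version
```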
- Step 5. Download the Occlum SDK image (Optional)
It is recommended to pull the Occlum SDK image in advance. The image is specified by the field `enclave_runtime.occlum.build_image` in `/etc/inclavare-containers/config.toml` and is used when creating pods, so pulling it in advance saves container launch time.<br />Run the following command to pull the Occlum SDK image:
```bash
ctr image pull docker.io/occlum/occlum:0.14.0-ubuntu18.04
```
### 6. Create a single control-plane Kubernetes cluster with kubeadm
- Step 1. Set the kernel parameters
Make sure that the `br_netfilter` module is loaded and both `net.bridge.bridge-nf-call-iptables` and `net.ipv4.ip_forward` are set to 1 in your sysctl config.
```bash
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```
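You can verify that the settings took effect, for example:
```bash
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```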
- Step 2. Configure the Kubernetes package repository for downloading kubelet, kubeadm, and kubectl
- On CentOS
```bash
cat << EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
- On Ubuntu
```bash
sudo apt update && sudo apt install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" >>/etc/apt/sources.list.d/kubernetes.list
```
- Step 3. Install kubelet, kubeadm and kubectl
Set SELinux to permissive mode and install kubelet, kubeadm, and kubectl v1.16.9. You can choose other versions, but it is recommended to use v1.16 or later.
- On CentOS
```bash
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
kubernetes_version=1.16.9
sudo yum install -y --setopt=obsoletes=0 kubelet-${kubernetes_version} \
kubeadm-${kubernetes_version} kubectl-${kubernetes_version} \
--disableexcludes=kubernetes
```
- On Ubuntu
```bash
# Note: Ubuntu does not enable SELinux by default, so no SELinux step is needed
kubernetes_version=1.16.9
sudo apt update && sudo apt install -y kubelet=${kubernetes_version}-00 \
kubeadm=${kubernetes_version}-00 kubectl=${kubernetes_version}-00
```
- Step 4. Configure the kubelet configuration file
Configure the kubelet drop-in file `10-kubeadm.conf`, pointing the kubelet at containerd with the arguments `--container-runtime=remote` and `--container-runtime-endpoint`.
- On CentOS
```bash
cat << EOF >/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--max-pods 64 --pod-manifest-path=/etc/kubernetes/manifests"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/acs/pause-amd64:3.0 --cluster-domain=cluster.local --cloud-provider=external"
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_PODS_ARGS \$KUBELET_NETWORK_ARGS \$KUBELET_DNS_ARGS \$KUBELET_EXTRA_ARGS
EOF
```
- On Ubuntu
```bash
# Note: To avoid a DNS forwarding loop, point the kubelet at an upstream nameserver instead of the default loopback address 127.0.0.53
cat << EOF >/etc/resolv.conf.kubernetes
nameserver 8.8.8.8
options timeout:2 attempts:3 rotate single-request-reopen
EOF
cat << EOF >/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--max-pods 64 --pod-manifest-path=/etc/kubernetes/manifests"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/acs/pause-amd64:3.0 --cluster-domain=cluster.local --cloud-provider=external --resolv-conf=/etc/resolv.conf.kubernetes"
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_PODS_ARGS \$KUBELET_NETWORK_ARGS \$KUBELET_DNS_ARGS \$KUBELET_EXTRA_ARGS
EOF
```
- Step 5. Enable the kubelet.service
```bash
sudo systemctl enable kubelet.service
```
- Step 6. Initialize the Kubernetes cluster with kubeadm
The Kubernetes version must match the kubelet version. You can specify the pod and service CIDR blocks with the arguments `--pod-network-cidr` and `--service-cidr`; make sure these CIDRs do not conflict with the host IP address. For example, if the host IP address is `192.168.1.100`, you can initialize the cluster as follows:
```bash
kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version=v1.16.9 \
--pod-network-cidr="172.21.0.0/20" --service-cidr="172.20.0.0/20"
```
- Step 7. Configure kubeconfig
To make kubectl work, run these commands, which are also part of the `kubeadm init` output:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
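At this point `kubectl` should reach the cluster; the node stays `NotReady` until the network addon is installed in the next step:
```bash
kubectl get nodes
```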
- Step 8. Install the network addon
Install the network addon `flannel` and wait for the node status to become `Ready`.
```bash
kubectl taint nodes $(hostname | tr 'A-Z' 'a-z') node.cloudprovider.kubernetes.io/uninitialized-
kubectl taint nodes $(hostname | tr 'A-Z' 'a-z') node-role.kubernetes.io/master-
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```
- Step 9. Check the pod status
Check the pod status with the command `kubectl get pod -A` and wait until all pods are `Running`; the output should look like this:
```
$ kubectl get pod -A
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-67c766df46-bzmwx                          1/1     Running   0          74s
kube-system   coredns-67c766df46-l6blz                          1/1     Running   0          74s
kube-system   etcd-izuf68q2tx28s7tel52vb0z                      1/1     Running   0          20s
kube-system   kube-apiserver-izuf68q2tx28s7tel52vb0z            1/1     Running   0          12s
kube-system   kube-controller-manager-izuf68q2tx28s7tel52vb0z   1/1     Running   0          28s
kube-system   kube-flannel-ds-amd64-s542d                       1/1     Running   0          56s
kube-system   kube-proxy-fpwnh                                  1/1     Running   0          74s
kube-system   kube-scheduler-izuf68q2tx28s7tel52vb0z            1/1     Running   0          20s
```
### 7. Configure RuntimeClass
- Step 1. Apply the following two yaml files to create `runc` and `rune` RuntimeClass objects
```yaml
cat << EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
handler: runc
kind: RuntimeClass
metadata:
  name: runc
EOF
```
```yaml
cat << EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
handler: rune
kind: RuntimeClass
metadata:
  name: rune
EOF
```
- Step 2. Make sure the `runc` and `rune` RuntimeClass objects are created
List the RuntimeClasses with the command `kubectl get runtimeclass`; the output should look like this:
```
$ kubectl get runtimeclass
NAME   CREATED AT
runc   2020-05-06T06:57:51Z
rune   2020-05-06T06:57:48Z
```
## What's Next
- [Develop and deploy a "Hello World" container in Kubernetes cluster](develop_and_deploy_hello_world_application_in_kubernetes_cluster.md)
# Develop and deploy a "Hello World" container in Kubernetes cluster
This page shows how to develop a "Hello World" application, build a "Hello World" image and run a "Hello World" container in a Kubernetes cluster.
## Before you begin
- You need to have a Kubernetes cluster and the nodes' hardware in the cluster must support Intel SGX. If you do not already have a cluster, you can create one following the documentation [Create a confidential computing Kubernetes cluster with inclavare-containers](create_a_confidential_computing_kubernetes_cluster_with_inclavare_containers.md).
- Make sure you have one of the following operating systems:
- Ubuntu 18.04 server 64bits
- CentOS 7.5 64bits
## Objectives
- Develop a "Hello World" Occlum application in an Occlum SDK container.
- Build a "Hello World" image from the application.
- Run the "Hello World" Pod in a Kubernetes cluster.
## Instructions
### 1. Create a Pod with the Occlum SDK image
Occlum supports running executable binaries built against [musl libc](https://www.musl-libc.org/); it does not support glibc. A good way to develop Occlum applications is inside an Occlum SDK container.
You can choose a suitable Occlum SDK image from the list on [this page](https://hub.docker.com/r/occlum/occlum/tags); the version of the Occlum SDK image must be the same as the Occlum version listed on the release page.
- Step 1. Apply the following yaml file
```yaml
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: occlum-app-builder
  name: occlum-app-builder
  namespace: default
spec:
  hostNetwork: true
  containers:
  - command:
    - sleep
    - infinity
    image: docker.io/occlum/occlum:0.14.0-centos7.5
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
    name: occlum-app-builder
EOF
```
This creates a Pod with the image `docker.io/occlum/occlum:0.14.0-centos7.5`. The field `securityContext.privileged` must be set to `true` in order to build and push Docker images inside the container.<br />
- Step 2. Wait for the pod to become `Ready`
It takes about one minute to create the pod. Check and wait for the pod to become `Ready` by running `kubectl get pod occlum-app-builder`; the output looks like this:
```bash
$ kubectl get pod occlum-app-builder
NAME                 READY   STATUS    RESTARTS   AGE
occlum-app-builder   1/1     Running   0          15s
```
- Step 3. Log in to the occlum-app-builder container
```bash
kubectl exec -it occlum-app-builder -c occlum-app-builder -- /bin/bash
```
- Step 4. Install docker in the container
Install docker following the [documentation](https://docs.docker.com/engine/install/centos/). Note that `systemd` is not installed in the container by default, so you cannot manage the docker service with `systemd`.
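For reference, a sketch of the CentOS installation steps from that documentation (the container runs as root, so `sudo` is not needed):
```bash
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
```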
- Step 5. Start the docker service with the following command:
```bash
nohup dockerd -b docker0 --storage-driver=vfs &
```
- Step 6. Make sure the docker service is started
Run `docker ps`; the output should look like this:
```
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
```
### 2. Develop the "Hello World" application in the container
If you were to write an SGX "Hello World" project with an SGX SDK, the project would consist of hundreds of lines of code, and you would have to spend a great deal of time learning the APIs, the programming model, and the build system of the SGX SDK.<br />Thanks to Occlum, you can be freed from writing any extra SGX-aware code and only need to type a few simple commands to protect your application with SGX transparently.
- Step 1. Create a working directory in the container
```bash
mkdir /root/occlum_workspace && cd /root/occlum_workspace/
```
- Step 2. Write the "Hello World" code in C language:
```bash
cat << EOF > /root/occlum_workspace/hello_world.c
#include <stdio.h>
#include <unistd.h>
int main() {
    while (1) {
        printf("Hello World!\n");
        fflush(stdout);
        sleep(5);
    }
}
EOF
```
- Step 3. Compile the user program with the Occlum toolchain (e.g., `occlum-gcc`)
```bash
occlum-gcc -o hello_world hello_world.c
```
- Step 4. Initialize a directory as the Occlum context via `occlum init`
```bash
mkdir occlum_context && cd occlum_context
occlum init
```
The `occlum init` command creates a new directory named `.occlum` in the current working directory, which contains the compile-time and run-time state of Occlum. Each Occlum context should be used for a single instance of an application; multiple applications or different instances of a single application should use different Occlum contexts.<br />
- Step 5. Generate a secure Occlum FS image and Occlum SGX enclave via `occlum build`
```bash
cp ../hello_world image/bin/
occlum build
```
The content of the `image` directory is initialized by the `occlum init` command. The structure of the `image` directory mimics that of an ordinary UNIX FS, containing directories like `/bin`, `/lib`, `/root`, `/tmp`, etc. After copying the user program `hello_world` into `image/bin/`, the `image` directory is packaged by the `occlum build` command to generate a secure Occlum FS image as well as the Occlum SGX enclave.
- Step 6. Run the user program inside an SGX enclave via `occlum run`
```bash
occlum run /bin/hello_world
```
The `occlum run` command starts up an Occlum SGX enclave which, behind the scenes, verifies and loads the associated Occlum FS image, spawns a new LibOS process to execute `/bin/hello_world`, and eventually prints the message.
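With the looping program above, the expected output looks like this:
```
$ occlum run /bin/hello_world
Hello World!
Hello World!
...
```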
### 3. Build the "Hello World" image
- Step 1. Write the Dockerfile
```dockerfile
cat << EOF >Dockerfile
FROM scratch
ADD image /
ENTRYPOINT ["/bin/hello_world"]
EOF
```
It is recommended to use `scratch` as the base image. The `scratch` image is an empty image; it keeps the Docker image small, which means a much smaller Trusted Computing Base (TCB) and attack surface. `ADD image /` adds the Occlum `image` directory into the root directory of the Docker image, and `ENTRYPOINT ["/bin/hello_world"]` sets the command `/bin/hello_world` as the container entry point.
- Step 2. Build and push the "Hello World" image to your docker registry
Build and push the image to your Docker registry. For example, if you create a Docker repository named `occlum-hello-world` in the namespace `inclavarecontainers`, you can push the image to `docker.io/inclavarecontainers/occlum-hello-world:scratch`.
```bash
docker build -f "Dockerfile" -t "docker.io/inclavarecontainers/occlum-hello-world:scratch" .
docker push "docker.io/inclavarecontainers/occlum-hello-world:scratch"
```
### 4. Run the "Hello World" Container
- Step 1. Create the "Hello World" Pod
Exit the Occlum SDK container and apply the following yaml to create the "Hello World" Pod.
```yaml
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: helloworld
  name: helloworld
spec:
  runtimeClassName: rune
  containers:
  - command:
    - /bin/hello_world
    env:
    - name: RUNE_CARRIER
      value: occlum
    image: docker.io/inclavarecontainers/occlum-hello-world:scratch
    imagePullPolicy: IfNotPresent
    name: helloworld
    workingDir: /run/rune
EOF
```
**Note**: The field `runtimeClassName` must be set to `rune`, which means the container will be handled by `rune`. The environment variable `RUNE_CARRIER` is set to `occlum` to tell `shim-rune` to create and run an Occlum application.<br />
<br />You can also configure the enclave through the following environment variables (see the example after the table):
| Environment Variable Name | Default Value |
| --- | --- |
| OCCLUM_USER_SPACE_SIZE | 256MB |
| OCCLUM_KERNEL_SPACE_HEAP_SIZE | 32MB |
| OCCLUM_KERNEL_SPACE_STACK_SIZE | 1MB |
| OCCLUM_MAX_NUM_OF_THREADS | 32 |
| OCCLUM_PROCESS_DEFAULT_STACK_SIZE | 4MB |
| OCCLUM_PROCESS_DEFAULT_HEAP_SIZE | 32MB |
| OCCLUM_PROCESS_DEFAULT_MMAP_SIZE | 80MB |
| OCCLUM_DEFAULT_ENV | OCCLUM=yes |
| OCCLUM_UNTRUSTED_ENV | EXAMPLE |
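For example, to give the enclave a larger user space, you could add one of these variables to the pod's `env` list alongside `RUNE_CARRIER`. A sketch (the pod name `helloworld-big` and the `512MB` value are illustrative):
```bash
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-big
spec:
  runtimeClassName: rune
  containers:
  - command:
    - /bin/hello_world
    env:
    - name: RUNE_CARRIER
      value: occlum
    - name: OCCLUM_USER_SPACE_SIZE
      value: "512MB"
    image: docker.io/inclavarecontainers/occlum-hello-world:scratch
    imagePullPolicy: IfNotPresent
    name: helloworld-big
    workingDir: /run/rune
EOF
```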
- Step 2. Wait for the pod to become `Ready`
```bash
kubectl get pod helloworld
```
- Step 3. Print the container's logs via `kubectl logs`
Execute the command `kubectl logs -f helloworld`; the line `Hello World!` is printed on the terminal every 5 seconds. The output looks like this:
```
$ kubectl logs -f helloworld
Hello World!
Hello World!
Hello World!
```
## Cleanup
Use the following commands to delete the two pods `helloworld` and `occlum-app-builder`:
```bash
kubectl delete pod helloworld
kubectl delete pod occlum-app-builder
```
@@ -16,7 +16,7 @@ Carrier is an abstract framework to build an enclave for the specified enclave ru
## Build requirements
-Go 1.14.x or above.
+Go 1.13.x or above.
## How to build and install
@@ -46,8 +46,8 @@ sgx_tool_sign = "/opt/intel/sgxsdk/bin/x64/sgx_sign"
[enclave_runtime]
[enclave_runtime.occlum]
-build_image = "docker.io/occlum/occlum:0.12.0-ubuntu18.04"
+build_image = "docker.io/occlum/occlum:0.14.0-ubuntu18.04"
enclave_runtime_path = "/opt/occlum/build/lib/libocclum-pal.so.0.14.0"
[enclave_runtime.graphene]
```
@@ -46,8 +46,8 @@ sgx_tool_sign = "/opt/intel/sgxsdk/bin/x64/sgx_sign"
[enclave_runtime]
[enclave_runtime.occlum]
-build_image = "docker.io/occlum/occlum:0.12.0-ubuntu18.04"
+build_image = "docker.io/occlum/occlum:0.14.0-ubuntu18.04"
enclave_runtime_path = "/opt/occlum/build/lib/libocclum-pal.so.0.14.0"
[enclave_runtime.graphene]
```
@@ -8,6 +8,6 @@ sgx_tool_sign = "/opt/intel/sgxsdk/bin/x64/sgx_sign"
[enclave_runtime]
[enclave_runtime.occlum]
-build_image = "docker.io/occlum/occlum:0.12.0-ubuntu18.04"
+build_image = "docker.io/occlum/occlum:0.14.0-ubuntu18.04"
enclave_runtime_path = "/opt/occlum/build/lib/libocclum-pal.so.0.14.0"
[enclave_runtime.graphene]
\ No newline at end of file