Unverified commit ea9daf44 authored by: S stormgbs, committed by: GitHub

Merge pull request #21 from alibaba/shim

shim: open source code

Too many changes to display.

To preserve performance only 1000 of 1000+ files are displayed.
# Golang CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-go/ for more details
version: 2
jobs:
build:
docker:
- image: circleci/golang:1.13
working_directory: /go/src/github.com/alibaba/shim-rune
steps:
- checkout
- setup_remote_docker
- run:
name: run tests
command: |
test -z $(go fmt ./...)
go vet -asmdecl=false ./...
go test -race -v ./...
- run:
name: build binary
command: |
make binary
- run:
name: golangci-lint
command: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.22.2
golangci-lint run --disable govet
markdown-lint:
docker: # run the steps with Docker
- image: circleci/node:12
working_directory: ~/shim-rune
steps:
- checkout
- setup_remote_docker
- run:
name: markdownlint
command: |
sudo npm install -g markdownlint-cli
markdownlint -i vendor -f **/*.md
- run:
name: use markdown-link-check(https://github.com/tcort/markdown-link-check) to check links in markdown files
command: |
sudo npm install -g markdown-link-check
set +e
for name in $(find . -name \*.md | grep -v vendor); do
if [ -f $name ]; then
markdown-link-check -q $name;
if [ $? -ne 0 ]; then
code=1
fi
fi
done
bash -c "exit $code";
- run:
name: markdown-spellcheck
command: |
sudo npm install -g markdown-spellcheck
find . -name \*.md | grep -v '^./vendor' | xargs mdspell --ignore-numbers --ignore-acronyms --en-us -r -x
workflows:
version: 2
ci:
jobs:
- build
- markdown-lint
rune/.git
build/
.idea/
_output/
shim/bin/
shim/.idea/
# Golang CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-go/ for more details
version: 2
jobs:
build:
docker:
- image: circleci/golang:1.13
working_directory: /go/src/github.com/alibaba/inclavare-containers/shim
steps:
- checkout
- setup_remote_docker
- run:
name: run tests
command: |
test -z $(go fmt ./...)
go vet -asmdecl=false ./...
go test -race -v ./...
- run:
name: build binary
command: |
make binary
- run:
name: golangci-lint
command: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.22.2
golangci-lint run --disable govet
markdown-lint:
docker: # run the steps with Docker
- image: circleci/node:12
working_directory: ~/shim
steps:
- checkout
- setup_remote_docker
- run:
name: markdownlint
command: |
sudo npm install -g markdownlint-cli
markdownlint -i vendor -f **/*.md
- run:
name: use markdown-link-check(https://github.com/tcort/markdown-link-check) to check links in markdown files
command: |
sudo npm install -g markdown-link-check
set +e
for name in $(find . -name \*.md | grep -v vendor); do
if [ -f $name ]; then
markdown-link-check -q $name;
if [ $? -ne 0 ]; then
code=1
fi
fi
done
bash -c "exit $code";
- run:
name: markdown-spellcheck
command: |
sudo npm install -g markdown-spellcheck
find . -name \*.md | grep -v '^./vendor' | xargs mdspell --ignore-numbers --ignore-acronyms --en-us -r -x
workflows:
version: 2
ci:
jobs:
- build
- markdown-lint
build/
.idea/
_output/
\ No newline at end of file
{
"default": true,
"MD013": { "line_length": 512 }
}
# markdown-spellcheck spelling configuration file
# Format - lines beginning # are comments
# global dictionary is at the start, file overrides afterwards
# one word per line, to define a file override use ' - filename'
# where filename is relative to this configuration file
sgx-device-plugin
Kubernetes
Alibaba
ACK-TEE
sgx-enabled
containerd
kube-scheduler
CONTRIBUTING.md
DaemonSet
LibOS
K8s
aesmd
aesm.socket
enable-aesm-socket-attach
sgx-device-plugin-enable-aesm-socket-attach
e.g.
yml
yaml
shim
shim-rune
rune
occlum
graphene
runelet
inclavare
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Root directory of the project (absolute path).
ROOTDIR=$(dir $(abspath $(lastword $(MAKEFILE_LIST))))
# Base path used to install.
DESTDIR ?= /usr/local
ifneq "$(strip $(shell command -v go 2>/dev/null))" ""
GOOS ?= $(shell go env GOOS)
GOARCH ?= $(shell go env GOARCH)
else
ifeq ($(GOOS),)
# approximate GOOS for the platform if we don't have Go and GOOS isn't
# set. We leave GOARCH unset, so that may need to be fixed.
ifeq ($(OS),Windows_NT)
GOOS = windows
else
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Linux)
GOOS = linux
endif
ifeq ($(UNAME_S),Darwin)
GOOS = darwin
endif
ifeq ($(UNAME_S),FreeBSD)
GOOS = freebsd
endif
endif
else
GOOS ?= $$GOOS
GOARCH ?= $$GOARCH
endif
endif
# Project binaries.
COMMANDS=containerd-shim-rune-v2
GO_BUILD_FLAGS=
SHIM_CGO_ENABLED ?= 0
BINARIES=$(addprefix bin/,$(COMMANDS))
.PHONY: clean all build binaries help install uninstall
.DEFAULT: default
all: binaries
# Build a binary from a cmd.
bin/containerd-shim-rune-v2:
@echo "bin/containerd-shim-rune-v2"
@CGO_ENABLED=${SHIM_CGO_ENABLED} GOOS=${GOOS} go build ${GO_BUILD_FLAGS} -o bin/containerd-shim-rune-v2 ./cmd/containerd-shim-rune-v2
binaries: $(BINARIES) ## build binaries
clean: ## clean up binaries
@echo "$@"
@rm -f $(BINARIES)
install: ## install binaries
@echo "$@ $(BINARIES)"
@mkdir -p $(DESTDIR)/bin
@install $(BINARIES) $(DESTDIR)/bin
uninstall:
@echo "$@"
@rm -f $(addprefix $(DESTDIR)/bin/,$(notdir $(BINARIES)))
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST) | sort
# containerd-shim-rune-v2
containerd-shim-rune-v2 is a shim for Inclavare Containers (runE).
## Introduction
![shim-rune](docs/images/shim-rune.png)
## Carrier Framework
Carrier is an abstract framework for building an enclave for the specified enclave runtime (e.g. Occlum, Graphene).
![shim-carrier](docs/images/shim-carrier.png)
## Signature Framework
![shim-signature](docs/images/shim-signature.png)
## Build requirements
Go 1.14.x or above.
## How to build and install
### Step 1: Build and install the shim binary.
```bash
mkdir -p $GOPATH/src/github.com/alibaba
cd $GOPATH/src/github.com/alibaba
git clone https://github.com/alibaba/inclavare-containers.git
cd shim
GOOS=linux make binaries
make install
ls -l /usr/local/bin/containerd-shim-rune-v2
```
### Step 2: Configuration
The configuration file of Inclavare Containers MUST be placed at `/etc/inclavare-containers/config.toml`.
```toml
log_level = "debug" # "debug" "info" "warn" "error"
sgx_tool_sign = "/opt/intel/sgxsdk/bin/x64/sgx_sign"
[containerd]
socket = "/run/containerd/containerd.sock"
[enclave_runtime]
[enclave_runtime.occlum]
build_image = "docker.io/occlum/occlum:0.12.0-ubuntu18.04"
[enclave_runtime.graphene]
```
Modify the containerd configuration file (`/etc/containerd/config.toml`) and add the rune runtime to it.
```toml
#...
[plugins.cri.containerd.runtimes.rune]
runtime_type = "io.containerd.rune.v2"
#...
```
Add the RuntimeClass rune to your Kubernetes cluster.
```bash
cat <<EOF | kubectl create -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
name: rune
handler: rune
scheduling:
nodeSelector:
# Your rune worker labels.
#alibabacloud.com/container-runtime: rune
EOF
```
## Run HelloWorld in Kubernetes
```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
labels:
run: helloworld-in-tee
name: helloworld-in-tee
spec:
runtimeClassName: rune
containers:
- command:
- /bin/hello_world
env:
- name: RUNE_CARRIER
value: occlum
image: registry.cn-shanghai.aliyuncs.com/larus-test/hello-world:v2
imagePullPolicy: IfNotPresent
name: helloworld
workingDir: /run/rune
EOF
```
# containerd-shim-rune-v2
containerd-shim-rune-v2 is a shim for Inclavare Containers (runE).
## Introduction
![shim-rune](docs/images/shim-rune.png)
## Carrier Framework
Carrier is an abstract framework for building an enclave for the specified enclave runtime (e.g. Occlum, Graphene).
![shim-carrier](docs/images/shim-carrier.png)
## Signature Framework
![shim-signature](docs/images/shim-signature.png)
## Build requirements
Go 1.14.x or above.
## How to build and install
### Step 1: Build and install the shim binary.
```bash
mkdir -p $GOPATH/src/github.com/alibaba
cd $GOPATH/src/github.com/alibaba
git clone https://github.com/alibaba/inclavare-containers.git
cd shim
GOOS=linux make binaries
make install
ls -l /usr/local/bin/containerd-shim-rune-v2
```
### Step 2: Configuration
The configuration file of Inclavare Containers MUST be placed at `/etc/inclavare-containers/config.toml`.
```toml
log_level = "debug" # "debug" "info" "warn" "error"
sgx_tool_sign = "/opt/intel/sgxsdk/bin/x64/sgx_sign"
[containerd]
socket = "/run/containerd/containerd.sock"
[enclave_runtime]
[enclave_runtime.occlum]
build_image = "docker.io/occlum/occlum:0.12.0-ubuntu18.04"
[enclave_runtime.graphene]
```
Modify the containerd configuration file (`/etc/containerd/config.toml`) and add the rune runtime to it.
```toml
#...
[plugins.cri.containerd.runtimes.rune]
runtime_type = "io.containerd.rune.v2"
#...
```
Add the RuntimeClass rune to your Kubernetes cluster.
```bash
cat <<EOF | kubectl create -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
name: rune
handler: rune
scheduling:
nodeSelector:
# Your rune worker labels.
#alibabacloud.com/container-runtime: rune
EOF
```
## Run HelloWorld in Kubernetes
```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
labels:
run: helloworld-in-tee
name: helloworld-in-tee
spec:
runtimeClassName: rune
containers:
- command:
- /bin/hello_world
env:
- name: RUNE_CARRIER
value: occlum
image: registry.cn-shanghai.aliyuncs.com/larus-test/hello-world:v2
imagePullPolicy: IfNotPresent
name: helloworld
workingDir: /run/rune
EOF
```
// +build linux

package main
import (
"github.com/containerd/containerd/runtime/v2/shim"
"github.com/alibaba/inclavare-containers/shim/runtime/v2/rune/v2"
)
func main() {
shim.Run("io.containerd.rune.v2", v2.New)
}
package options
import (
"errors"
"github.com/spf13/pflag"
"github.com/alibaba/inclavare-containers/shim/runtime/signature/server/conf"
)
type SignatureServerOptions struct {
PrivateKeyPath string
PublicKeyPath string
//CertificatePath string
}
func NewSignatureServerOptions() *SignatureServerOptions {
return &SignatureServerOptions{}
}
func (opts *SignatureServerOptions) Validate() []error {
if opts == nil {
return nil
}
var allErrors []error
if opts.PrivateKeyPath == "" {
allErrors = append(allErrors, errors.New("--private-key cannot be empty"))
}
if opts.PublicKeyPath == "" {
allErrors = append(allErrors, errors.New("--public-key cannot be empty"))
}
return allErrors
}
func (opts *SignatureServerOptions) AddFlags(fs *pflag.FlagSet) {
if opts == nil {
return
}
fs.StringVar(&opts.PrivateKeyPath, "private-key", "/etc/signature/pki/privatekey.pem", "private key path")
fs.StringVar(&opts.PublicKeyPath, "public-key", "/etc/signature/pki/publickey.pem", "public key path")
}
func (opts *SignatureServerOptions) ApplyTo(cfg *conf.Config) error {
if opts == nil {
return errors.New("ToolkitServerOptions is nil")
}
cfg.PrivateKeyPath = opts.PrivateKeyPath
cfg.PublicKeyPath = opts.PublicKeyPath
return nil
}
package app
import (
"github.com/golang/glog"
"github.com/alibaba/inclavare-containers/shim/cmd/signature-server/app/options"
"github.com/alibaba/inclavare-containers/shim/runtime/signature/server"
"github.com/alibaba/inclavare-containers/shim/runtime/signature/server/conf"
)
func runServer(opts *options.SignatureServerOptions, stopCh <-chan struct{}) error {
var err error
var cnf conf.Config
if err = opts.ApplyTo(&cnf); err != nil {
return err
}
svr, err := server.NewServer(&cnf)
if err != nil {
glog.Fatalf("failed to init toolkit server, err:%s", err.Error())
return err
}
svr.Start(stopCh)
return nil
}
package app
import (
"github.com/spf13/cobra"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
"github.com/alibaba/inclavare-containers/shim/cmd/signature-server/app/options"
)
func NewSignatureServer(stopCh <-chan struct{}) *cobra.Command {
opts := options.NewSignatureServerOptions()
cmd := &cobra.Command{
Short: "Launch signature server",
Long: "Launch signature server",
RunE: func(cmd *cobra.Command, args []string) error {
errs := opts.Validate()
if err := utilerrors.NewAggregate(errs); err != nil {
return err
}
return runServer(opts, stopCh)
},
}
flags := cmd.Flags()
opts.AddFlags(flags)
return cmd
}
package main
import (
"flag"
"os"
"os/signal"
"runtime"
"syscall"
"github.com/golang/glog"
"github.com/alibaba/inclavare-containers/shim/cmd/signature-server/app"
)
var onlyOneSignalHandler = make(chan struct{})
var shutdownSignals = []os.Signal{os.Interrupt, syscall.SIGTERM}
func setupSignalHandler() <-chan struct{} {
close(onlyOneSignalHandler) // panics when called twice
stop := make(chan struct{})
c := make(chan os.Signal, 2)
signal.Notify(c, shutdownSignals...)
go func() {
<-c
close(stop)
<-c
os.Exit(1) // second signal. Exit directly.
}()
return stop
}
func main() {
//logs.InitLogs()
//defer logs.FlushLogs()
if len(os.Getenv("GOMAXPROCS")) == 0 {
runtime.GOMAXPROCS(runtime.NumCPU())
}
stopCh := setupSignalHandler()
cmd := app.NewSignatureServer(stopCh)
cmd.Flags().AddGoFlagSet(flag.CommandLine)
flag.CommandLine.Parse([]string{})
if err := cmd.Execute(); err != nil {
glog.Fatal(err)
}
}
log_level = "debug" # "debug" "info" "warn" "error"
sgx_tool_sign = "/opt/intel/sgxsdk/bin/x64/sgx_sign"
[containerd]
socket = "/run/containerd/containerd.sock"
[enclave_runtime]
[enclave_runtime.occlum]
build_image = "docker.io/occlum/occlum:0.12.0-ubuntu18.04"
[enclave_runtime.graphene]
\ No newline at end of file
package config
type Containerd struct {
Socket string `toml:"socket"`
}
type Occlum struct {
BuildImage string `toml:"build_image"`
}
type Graphene struct {
}
type EnclaveRuntime struct {
Occlum Occlum `toml:"occlum"`
Graphene Graphene `toml:"graphene"`
}
type Config struct {
LogLevel string `toml:"log_level"`
SgxToolSign string `toml:"sgx_tool_sign"`
Containerd Containerd `toml:"containerd"`
EnclaveRuntime EnclaveRuntime `toml:"enclave_runtime"`
}
package config
import (
"fmt"
"testing"
"github.com/BurntSushi/toml"
)
func TestDecodeConfig(t *testing.T) {
text := `log_level = "debug" # "debug" "info" "warn" "error"
sgx_tool_sign = "/opt/intel/sgxsdk/bin/x64/sgx_sign"
[containerd]
socket = "/run/containerd/containerd.sock"
[enclave_runtime]
[enclave_runtime.occlum]
build_image = "docker.io/occlum/occlum:0.12.0-ubuntu18.04"
[enclave_runtime.graphene]
`
var cfg Config
if _, err := toml.Decode(text, &cfg); err != nil {
t.Fatal(err)
}
fmt.Printf("%#v", cfg)
}
module github.com/alibaba/inclavare-containers/shim
go 1.13
require (
github.com/BurntSushi/toml v0.3.1
github.com/Microsoft/hcsshim v0.8.7 // indirect
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f
github.com/containerd/containerd v1.3.3
github.com/containerd/fifo v0.0.0-20191213151349-ff969a566b00 // indirect
github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328
github.com/containerd/ttrpc v1.0.0 // indirect
github.com/containerd/typeurl v1.0.0
github.com/docker/distribution v0.0.0-00010101000000-000000000000 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/gin-gonic/gin v1.6.3
github.com/gogo/googleapis v1.4.0 // indirect
github.com/gogo/protobuf v1.3.1
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
github.com/imdario/mergo v0.3.9 // indirect
github.com/opencontainers/go-digest v1.0.0-rc1 // indirect
github.com/opencontainers/image-spec v1.0.1 // indirect
github.com/opencontainers/runc v0.1.1 // indirect
github.com/opencontainers/runtime-spec v1.0.2
github.com/pkg/errors v0.9.1
github.com/sirupsen/logrus v1.5.0
github.com/spf13/cobra v1.0.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.4.0
go.etcd.io/bbolt v1.3.4 // indirect
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d
google.golang.org/grpc v1.28.0 // indirect
gopkg.in/yaml.v2 v2.3.0 // indirect
k8s.io/apimachinery v0.18.2
)
replace github.com/docker/distribution => github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible
This diff is collapsed.
# Carrier
Carrier knows how to build a runnable SGX application container bundle with a library OS, without modification to the application.
\ No newline at end of file
package constants
type EnclaveType string
const (
IntelSGX = EnclaveType("intelSgx")
)
const (
EnclaveTypeKeyName = "ENCLAVE_TYPE"
EnclaveRuntimePathKeyName = "ENCLAVE_RUNTIME_PATH"
EnclaveRuntimeArgsKeyName = "ENCLAVE_RUNTIME_ARGS"
DefaultEnclaveRuntimeArgs = ".occlum"
OcclumConfigPathKeyName = "OCCLUM_CONFIG_PATH"
)
const (
ReplaceOcclumImageScript = `#!/bin/bash
set -xe
function usage() {
if [ $# -lt 2 ]; then
echo "usage: $0 src_dir dst_dir"
exit 1
fi
}
function deep_copy_link() {
local link_symbol=$1
local link_target=$2
rm -f ${link_symbol}
mkdir -p ${link_symbol}
/bin/cp -rdf ${link_target}/* ${link_symbol} || true
}
function copy(){
local src_dir=$1
local dst_dir=$2
local dst_root_dir=$3
if [ ! -d ${dst_dir} ]; then
mkdir -p ${dst_dir}
fi
for file in $(ls ${src_dir}/)
do
src_file=${src_dir}/${file}
dst_file=${dst_dir}/${file}
if [ -f ${src_file} ]; then
rm -fr ${dst_file}
/bin/cp -df ${src_file} ${dst_file}
elif [ -d ${src_file} ]; then
link_target=$(stat -c "%N" ${dst_file} | awk '-F-> ' '{print $2}' | awk -F"'" '{print $2}')
if [ "${link_target}" != "" ]; then
reg='^/.*'
if [[ ${link_target} =~ ${reg} ]]; then
link_target=${dst_root_dir}${link_target}
else
link_target=${dst_file}/../${link_target}
fi
deep_copy_link "${dst_file}" "${link_target}"
fi
copy "${src_file}" "${dst_file}" "${dst_root_dir}"
fi
done
}
function compact() {
local src_dir=$1
local dst_dir=$2
backup_dir=/tmp/dst_backup
# step1: backup files in directory dst_dir
rm -fr ${backup_dir}
mkdir -p ${backup_dir}
/bin/cp -rdf ${dst_dir}/* ${backup_dir}/ || true
# step2: clean directory dst_dir
rm -rf ${dst_dir}/*
# step3: copy files in directory src_dir to directory dst_dir
/bin/cp -rdf ${src_dir}/* ${dst_dir}/ || true
# step4: restore backed-up files to directory ${dst_dir}
copy ${backup_dir} ${dst_dir} ${dst_dir}
# step5: remove backed-up files
rm -rf ${backup_dir}
}
function start() {
usage $@
compact $@
}
start $@`
//FIXME
BuildOcclumEnclaveScript = `#!/bin/bash
set -xe
data_dir=/data
rootfs=/rootfs
work_dir=%s
entry_point=%s
occlum_config_path=${rootfs}/%s
occlum_workspace=${data_dir}/../occlum_workspace
rm -fr ${occlum_workspace}
mkdir -p ${occlum_workspace}
pushd ${occlum_workspace}
occlum init
if [[ "${occlum_config_path}" != "" && -f ${occlum_config_path} ]];then
/bin/cp -f ${occlum_config_path} Occlum.json
fi
sed -i "s#/bin#${entry_point}#g" Occlum.json
/bin/bash ${data_dir}/replace_occlum_image.sh ${rootfs} image
occlum build
rm -f ${rootfs}/${work_dir}/.occlum/build/lib/libocclum-libos.signed.so
mkdir -p ${rootfs}/${work_dir} || true
/bin/cp -fr .occlum ${rootfs}/${work_dir}
# ===fixme debug====
/bin/cp -fr image ${rootfs}/${work_dir}
/bin/cp -f Occlum.json ${rootfs}/${work_dir}
# ==================
/bin/cp -f Enclave.xml ${data_dir}
popd
pushd ${rootfs}/${work_dir}
# ==== copy sgxsdk libs =======
lib_dir=${rootfs}/lib
/bin/cp -f /usr/lib/x86_64-linux-gnu/libprotobuf.so ${lib_dir}
/bin/cp -f /lib/x86_64-linux-gnu/libseccomp.so.2 ${lib_dir}
/bin/cp -f /usr/lib/libsgx_u*.so* ${lib_dir}
/bin/cp -f /usr/lib/libsgx_enclave_common.so.1 ${lib_dir}
/bin/cp -f /usr/lib/libsgx_launch.so.1 ${lib_dir}
# ==================
ln -sfn .occlum/build/lib/libocclum-pal.so liberpal-occlum.so
chroot ${rootfs} /sbin/ldconfig
popd
`
)
package empty
import (
"github.com/alibaba/inclavare-containers/shim/runtime/carrier"
"github.com/containerd/containerd/runtime/v2/task"
)
var _ carrier.Carrier = &empty{}
type empty struct{}
func NewEmptyCarrier() (carrier.Carrier, error) {
return &empty{}, nil
}
// Name impl Carrier.
func (c *empty) Name() string {
return "empty"
}
// BuildUnsignedEnclave impl Carrier.
func (c *empty) BuildUnsignedEnclave(req *task.CreateTaskRequest, args *carrier.BuildUnsignedEnclaveArgs) (
unsignedEnclave string, err error) {
return "", nil
}
// GenerateSigningMaterial impl Carrier.
func (c *empty) GenerateSigningMaterial(req *task.CreateTaskRequest, args *carrier.CommonArgs) (
signingMaterial string, err error) {
return "", nil
}
// CascadeEnclaveSignature impl Carrier.
func (c *empty) CascadeEnclaveSignature(req *task.CreateTaskRequest, args *carrier.CascadeEnclaveSignatureArgs) (
signedEnclave string, err error) {
return "", nil
}
// Cleanup impl Carrier.
func (c *empty) Cleanup() error {
return nil
}
package graphene
import (
"errors"
"github.com/containerd/containerd/runtime/v2/task"
"github.com/alibaba/inclavare-containers/shim/runtime/carrier"
)
var _ carrier.Carrier = &graphene{}
type graphene struct {
//TODO
}
func NewGrapheneCarrier() (carrier.Carrier, error) {
//TODO
return nil, errors.New("Carrier graphene has not been implemented")
}
// Name impl Carrier.
func (c *graphene) Name() string {
return "graphene"
}
// BuildUnsignedEnclave impl Carrier.
func (c *graphene) BuildUnsignedEnclave(req *task.CreateTaskRequest, args *carrier.BuildUnsignedEnclaveArgs) (
unsignedEnclave string, err error) {
//TODO
return "", errors.New("graphene BuildUnsignedEnclave unimplemented")
}
// GenerateSigningMaterial impl Carrier.
func (c *graphene) GenerateSigningMaterial(req *task.CreateTaskRequest, args *carrier.CommonArgs) (
signingMaterial string, err error) {
//TODO
return "", errors.New("graphene GenerateSigningMaterial unimplemented")
}
// CascadeEnclaveSignature impl Carrier.
func (c *graphene) CascadeEnclaveSignature(req *task.CreateTaskRequest, args *carrier.CascadeEnclaveSignatureArgs) (
signedEnclave string, err error) {
//TODO
return "", errors.New("graphene CascadeEnclaveSignature unimplemented")
}
// Cleanup impl Carrier.
func (c *graphene) Cleanup() error {
//TODO
return errors.New("graphene Cleanup unimplemented")
}
package carrier
import "github.com/containerd/containerd/runtime/v2/task"
type BuildUnsignedEnclaveArgs struct {
// Bundle is the directory of unpacked container image.
Bundle string
}
type CommonArgs struct {
// Enclave is the enclave file to be signed.
Enclave string
// Key is the public key.
// For SignGenData args, an optional key specifies the public key of the payload.
// For SignCatSig args, a required key specifies the public key of the enclave signing key.
Key string
// Config is the configuration for the enclave.
Config string
}
type CascadeEnclaveSignatureArgs struct {
CommonArgs
// SigningMaterial is the enclave signing material generated by "SignGenData()".
SigningMaterial string
// Signature is the signature file for the enclave signing material.
Signature string
}
// Carrier is a factory that leverages a libOS to build a TEE for native container applications.
type Carrier interface {
// Name returns the name of carrier.
Name() string
// BuildUnsignedEnclave builds an unsigned libOS enclave for the application.
BuildUnsignedEnclave(req *task.CreateTaskRequest, args *BuildUnsignedEnclaveArgs) (unsignedEnclave string, err error)
// GenerateSigningMaterial generates enclave signing material to be signed.
GenerateSigningMaterial(req *task.CreateTaskRequest, args *CommonArgs) (signingMaterial string, err error)
// CascadeEnclaveSignature generates the signed enclave with the input signature file, the public key and
// the enclave signing material.
CascadeEnclaveSignature(req *task.CreateTaskRequest, args *CascadeEnclaveSignatureArgs) (signedEnclave string, err error)
// Cleanup cleans all files and directories generated by carrier.
Cleanup() error
}
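
To make the interface concrete, the following is a minimal, illustrative sketch (not part of this PR) of how the three carrier steps could be chained together with an out-of-process signer such as the signature server added in this change. The helper name `buildSignedEnclave`, the `signFunc` type, and the output file names are assumptions for illustration; the shim's actual flow lives in `carrierMain` under runtime/v2/rune/v2, which is truncated at the end of this listing.

```go
package carrier

import (
	"io/ioutil"
	"path/filepath"

	"github.com/containerd/containerd/runtime/v2/task"
)

// signFunc signs raw enclave signing material and returns the detached
// signature plus the public key of the signing key (for example via the
// pkcs1 signature client elsewhere in this PR).
type signFunc func(material []byte) (signature, publicKey []byte, err error)

// buildSignedEnclave is an illustrative driver over the Carrier interface.
func buildSignedEnclave(c Carrier, sign signFunc, req *task.CreateTaskRequest) (string, error) {
	// 1. Build the unsigned libOS enclave from the unpacked bundle.
	unsigned, err := c.BuildUnsignedEnclave(req, &BuildUnsignedEnclaveArgs{Bundle: req.Bundle})
	if err != nil {
		return "", err
	}
	// 2. Generate the enclave signing material for that enclave.
	common := CommonArgs{Enclave: unsigned}
	material, err := c.GenerateSigningMaterial(req, &common)
	if err != nil {
		return "", err
	}
	// 3. Sign the material out of process and persist the results so the
	//    SGX signing tool can consume them as files (paths are hypothetical).
	data, err := ioutil.ReadFile(material)
	if err != nil {
		return "", err
	}
	sig, pubKey, err := sign(data)
	if err != nil {
		return "", err
	}
	sigPath := filepath.Join(req.Bundle, "enclave_sig.sig")
	keyPath := filepath.Join(req.Bundle, "signing_public_key.pem")
	if err := ioutil.WriteFile(sigPath, sig, 0644); err != nil {
		return "", err
	}
	if err := ioutil.WriteFile(keyPath, pubKey, 0644); err != nil {
		return "", err
	}
	common.Key = keyPath
	// 4. Cascade the detached signature back into a signed enclave.
	return c.CascadeEnclaveSignature(req, &CascadeEnclaveSignatureArgs{
		CommonArgs:      common,
		SigningMaterial: material,
		Signature:       sigPath,
	})
}
```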
package occlum
import (
"context"
"encoding/hex"
"fmt"
"io/ioutil"
"math/rand"
"os"
"os/exec"
"path/filepath"
"strconv"
"time"
"github.com/containerd/containerd"
"github.com/containerd/containerd/cio"
"github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/oci"
"github.com/containerd/containerd/runtime/v2/task"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/sirupsen/logrus"
"github.com/BurntSushi/toml"
"github.com/alibaba/inclavare-containers/shim/runtime/config"
shim_config "github.com/alibaba/inclavare-containers/shim/config"
"github.com/alibaba/inclavare-containers/shim/runtime/carrier"
"github.com/alibaba/inclavare-containers/shim/runtime/v2/rune/constants"
carr_const "github.com/alibaba/inclavare-containers/shim/runtime/carrier/constants"
)
const (
defaultNamespace = "default"
//occlumEnclaveBuilderImage = "docker.io/occlum/occlum:0.12.0-ubuntu18.04"
buildOcclumEnclaveFileName = "build_occulum_enclave.sh"
replaceOcclumImageScript = "replace_occlum_image.sh"
//containerdAddress = "/run/containerd/containerd.sock"
rootfsDirName = "rootfs"
encalveDataDir = "data"
//sgxToolSign = "/opt/intel/sgxsdk/bin/x64/sgx_sign"
)
var _ carrier.Carrier = &occlum{}
type occlum struct {
context context.Context
bundle string
workDirectory string
entryPoints []string
configPath string
spec *specs.Spec
shimConfig *shim_config.Config
}
// NewOcclumCarrier returns a carrier instance of occlum.
func NewOcclumCarrier(ctx context.Context, bundle string) (carrier.Carrier, error) {
var cfg shim_config.Config
if _, err := toml.DecodeFile(constants.ConfigurationPath, &cfg); err != nil {
return nil, err
}
setLogLevel(cfg.LogLevel)
return &occlum{
context: ctx,
bundle: bundle,
shimConfig: &cfg,
}, nil
}
// Name impl Carrier.
func (c *occlum) Name() string {
return "occlum"
}
// BuildUnsignedEnclave impl Carrier.
func (c *occlum) BuildUnsignedEnclave(req *task.CreateTaskRequest, args *carrier.BuildUnsignedEnclaveArgs) (
unsignedEnclave string, err error) {
// Initialize environment variables for occlum in config.json
if err := c.initBundleConfig(); err != nil {
return "", err
}
namespace, ok := namespaces.Namespace(c.context)
logrus.Debugf("BuildUnsignedEnclave: get namespace %s, containerdAddress: %s",
namespace, c.shimConfig.Containerd.Socket)
if !ok {
namespace = defaultNamespace
}
// Create a new client connected to the default socket path for containerd.
client, err := containerd.New(c.shimConfig.Containerd.Socket)
if err != nil {
return "", fmt.Errorf("failed to create containerd client. error: %++v", err)
}
defer client.Close()
logrus.Debugf("BuildUnsignedEnclave: get containerd client successfully")
// Create a new context with "k8s.io" namespace
ctx, cancel := context.WithTimeout(c.context, time.Minute*10)
defer cancel()
ctx = namespaces.WithNamespace(ctx, namespace)
if err = createNamespaceIfNotExist(client, namespace); err != nil {
return "", err
}
// pull the image to be used to build enclave.
occlumEnclaveBuilderImage := c.shimConfig.EnclaveRuntime.Occlum.BuildImage
image, err := client.Pull(ctx, occlumEnclaveBuilderImage, containerd.WithPullUnpack)
if err != nil {
return "", fmt.Errorf("failed to pull image %s. error: %++v", occlumEnclaveBuilderImage, err)
}
logrus.Debugf("BuildUnsignedEnclave: pull image %s successfully", occlumEnclaveBuilderImage)
// Generate the containerID.
rand.Seed(time.Now().UnixNano())
containerId := fmt.Sprintf("occlum-enclave-builder-%s", strconv.FormatInt(rand.Int63(), 16))
snapshotId := fmt.Sprintf("occlum-enclave-builder-snapshot-%s", strconv.FormatInt(rand.Int63(), 16))
logrus.Debugf("BuildUnsignedEnclave: containerId: %s, snapshotId: %s", containerId, snapshotId)
if err := os.Mkdir(filepath.Join(req.Bundle, encalveDataDir), 0755); err != nil {
return "", err
}
// Create a shell script which is used to build occlum enclave.
buildEnclaveScript := filepath.Join(req.Bundle, encalveDataDir, buildOcclumEnclaveFileName)
if err := ioutil.WriteFile(buildEnclaveScript, []byte(fmt.Sprintf(carr_const.BuildOcclumEnclaveScript,
c.workDirectory, c.entryPoints[0], c.configPath)), os.ModePerm); err != nil {
return "", err
}
replaceImagesScript := filepath.Join(req.Bundle, encalveDataDir, replaceOcclumImageScript)
if err := ioutil.WriteFile(replaceImagesScript, []byte(carr_const.ReplaceOcclumImageScript), os.ModePerm); err != nil {
return "", err
}
// Create rootfs mount points.
mounts := make([]specs.Mount, 0)
rootfsMount := specs.Mount{
Destination: filepath.Join("/", rootfsDirName),
Type: "bind",
Source: filepath.Join(req.Bundle, rootfsDirName),
Options: []string{"rbind", "rw"},
}
dataMount := specs.Mount{
Destination: filepath.Join("/", encalveDataDir),
Type: "bind",
Source: filepath.Join(req.Bundle, encalveDataDir),
Options: []string{"rbind", "rw"},
}
logrus.Debugf("BuildUnsignedEnclave: rootfsMount source: %s, destination: %s",
rootfsMount.Source, rootfsMount.Destination)
mounts = append(mounts, rootfsMount, dataMount)
// create a container
container, err := client.NewContainer(
ctx,
containerId,
containerd.WithImage(image),
containerd.WithNewSnapshot(snapshotId, image),
containerd.WithNewSpec(oci.WithImageConfig(image),
oci.WithProcessArgs("/bin/bash", filepath.Join("/", encalveDataDir, buildOcclumEnclaveFileName)),
//FIXME debug
//oci.WithProcessArgs("sleep", "infinity"),
oci.WithPrivileged,
oci.WithMounts(mounts),
),
)
if err != nil {
return "", fmt.Errorf("failed to create container by image %s. error: %++v",
occlumEnclaveBuilderImage, err)
}
defer container.Delete(ctx, containerd.WithSnapshotCleanup)
// Create a task from the container.
task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
if err != nil {
return "", err
}
defer task.Delete(ctx)
logrus.Debugf("BuildUnsignedEnclave: create task successfully")
// Wait before calling start
exitStatusC, err := task.Wait(ctx)
if err != nil {
return "", err
}
// Call start() on the task to execute the building scripts.
if err := task.Start(ctx); err != nil {
return "", err
}
// Wait for the process to fully exit and print out the exit status
status := <-exitStatusC
code, _, err := status.Result()
if err != nil {
return "", fmt.Errorf("container exited abnormaly with exit code %d. error: %++v", code, err)
} else if code != 0 {
return "", fmt.Errorf("container exited abnormaly with exit code %d", code)
}
enclavePath := filepath.Join(req.Bundle, rootfsDirName, c.workDirectory, ".occlum/build/lib/libocclum-libos.so")
logrus.Debugf("BuildUnsignedEnclave: exit code: %d. enclavePath: %s", code, enclavePath)
return enclavePath, nil
}
// GenerateSigningMaterial impl Carrier.
func (c *occlum) GenerateSigningMaterial(req *task.CreateTaskRequest, args *carrier.CommonArgs) (
signingMaterial string, err error) {
signingMaterial = filepath.Join(req.Bundle, encalveDataDir, "enclave_sig.dat")
args.Config = filepath.Join(req.Bundle, encalveDataDir, "Enclave.xml")
sgxToolSign := c.shimConfig.SgxToolSign
logrus.Debugf("GenerateSigningMaterial cmmmand: %s gendata -enclave %s -config %s -out %s",
sgxToolSign, args.Enclave, args.Config, signingMaterial)
gendataArgs := []string{
"gendata",
"-enclave",
args.Enclave,
"-config",
args.Config,
"-out", signingMaterial,
}
cmd := exec.Command(sgxToolSign, gendataArgs...)
if result, err := cmd.Output(); err != nil {
return "", fmt.Errorf("GenerateSigningMaterial: sgx_sign gendata failed. error: %v %s", err, string(result))
}
return signingMaterial, nil
}
// CascadeEnclaveSignature impl Carrier.
func (c *occlum) CascadeEnclaveSignature(req *task.CreateTaskRequest, args *carrier.CascadeEnclaveSignatureArgs) (
signedEnclave string, err error) {
signedEnclave = filepath.Join(
req.Bundle,
rootfsDirName,
c.workDirectory,
".occlum/build/lib/libocclum-libos.signed.so")
sgxToolSign := c.shimConfig.SgxToolSign
logrus.Debugf("CascadeEnclaveSignature cmmmand: %s catsig -enclave %s -config %s -out %s -key %s -sig %s -unsigned %s",
sgxToolSign, args.Enclave, args.Config, signedEnclave, args.Key, args.Signature, args.SigningMaterial)
catsigArgs := []string{
"catsig",
"-enclave",
args.Enclave,
"-config",
args.Config,
"-out",
signedEnclave,
"-key",
args.Key,
"-sig",
args.Signature,
"-unsigned",
args.SigningMaterial,
}
cmd := exec.Command(sgxToolSign, catsigArgs...)
if result, err := cmd.Output(); err != nil {
return "", fmt.Errorf("CascadeEnclaveSignature: sgx_sign catsig failed. error: %v %s", err, string(result))
}
return signedEnclave, nil
}
// Cleanup impl Carrier.
func (c *occlum) Cleanup() error {
//TODO
return nil
}
func (c *occlum) initBundleConfig() error {
configPath := filepath.Join(c.bundle, "config.json")
spec, err := config.LoadSpec(configPath)
if err != nil {
return err
}
c.workDirectory = spec.Process.Cwd
c.entryPoints = spec.Process.Args
enclaveRuntimePath := fmt.Sprintf("%s/liberpal-occlum.so", c.workDirectory)
envs := map[string]string{
carr_const.EnclaveRuntimePathKeyName: enclaveRuntimePath,
carr_const.EnclaveTypeKeyName: string(carr_const.IntelSGX),
carr_const.EnclaveRuntimeArgsKeyName: carr_const.DefaultEnclaveRuntimeArgs,
}
if occlumConfigPath, ok := config.GetEnv(spec, carr_const.OcclumConfigPathKeyName); ok {
c.configPath = occlumConfigPath
}
c.spec = spec
if err := config.UpdateEnvs(spec, envs, false); err != nil {
return err
}
return config.SaveSpec(configPath, spec)
}
func createNamespaceIfNotExist(client *containerd.Client, namespace string) error {
svc := client.NamespaceService()
ctx := context.Background()
ctx, cancel := context.WithTimeout(ctx, time.Second*60)
defer cancel()
nses, err := svc.List(ctx)
if err != nil {
return err
}
for _, ns := range nses {
if ns == namespace {
return nil
}
}
return svc.Create(ctx, namespace, nil)
}
// generateID generates a random unique id.
func generateID() string {
b := make([]byte, 32)
rand.Read(b)
return hex.EncodeToString(b)
}
func setLogLevel(level string) {
switch level {
case "debug":
logrus.SetLevel(logrus.DebugLevel)
case "info":
logrus.SetLevel(logrus.InfoLevel)
case "warn":
logrus.SetLevel(logrus.WarnLevel)
case "error":
logrus.SetLevel(logrus.ErrorLevel)
default:
logrus.SetLevel(logrus.InfoLevel)
}
}
package config
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"strings"
specs "github.com/opencontainers/runtime-spec/specs-go"
)
const (
defaultEnclaveType = "intelSgx"
envSeparator = "="
)
// LoadSpec loads the specification from the provided path.
func LoadSpec(cPath string) (spec *specs.Spec, err error) {
cf, err := os.Open(cPath)
if err != nil {
return nil, err
}
defer cf.Close()
if err = json.NewDecoder(cf).Decode(&spec); err != nil {
return nil, err
}
_, err = json.Marshal(spec)
if err != nil {
return nil, err
}
return spec, nil
}
func SaveSpec(cPath string, spec *specs.Spec) error {
data, err := json.Marshal(spec)
if err != nil {
return err
}
return ioutil.WriteFile(cPath, data, 0644)
}
func UpdateEnvs(spec *specs.Spec, kvs map[string]string, overwrite bool) error {
if spec.Process == nil || kvs == nil || len(kvs) <= 0 {
return nil
}
all := make(map[string]string)
for _, env := range spec.Process.Env {
p := strings.SplitN(env, envSeparator, 2)
if len(p) != 2 {
continue
}
all[p[0]] = p[1]
}
for k, v := range kvs {
if overwrite {
all[k] = v
} else if _, ok := all[k]; !ok {
all[k] = v
}
}
envs := make([]string, 0)
for k, v := range all {
envs = append(envs, fmt.Sprintf("%s%s%s", k, envSeparator, v))
}
spec.Process.Env = envs
return nil
}
func GetEnv(spec *specs.Spec, key string) (string, bool) {
if spec.Process == nil {
return "", false
}
for _, env := range spec.Process.Env {
p := strings.SplitN(env, envSeparator, 2)
if len(p) != 2 {
continue
}
if p[0] != key {
continue
}
return p[1], true
}
return "", true
}
func UpdateEnclaveEnvConfig(cPath string) error {
spec, err := LoadSpec(cPath)
if err != nil {
return err
}
var name string = "ENCLAVE_TYPE"
m := map[string]string{name: defaultEnclaveType}
if err := UpdateEnvs(spec, m, false); err != nil {
return err
}
if err := SaveSpec(cPath, spec); err != nil {
return err
}
return nil
}
package config
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
const configTemplate = `{"ociVersion":"1.0.1-dev","process":{"user":{"uid":0,"gid":0},"args":["/pause"],"env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","ENCLAVE_TYPE=intelSgx"],"cwd":"/","capabilities":{"bounding":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"],"effective":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"],"inheritable":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"],"permitted":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"]},"noNewPrivileges":true,"oomScoreAdj":-998},"root":{"path":"rootfs","readonly":true},"mounts":[{"destination":"/proc","type":"proc","source":"proc","options":["nosuid","noexec","nodev"]},{"destination":"/dev","type":"tmpfs","source":"tmpfs","options":["nosuid","strictatime","mode=755","size=65536k"]},{"destination":"/dev/pts","type":"devpts","source":"devpts","options":["nosuid","noexec","newinstance","ptmxmode=0666","mode=0620","gid=5"]},{"destination":"/dev/shm","type":"tmpfs","source":"shm","options":["nosuid","noexec","nodev","mode=1777","size=65536k"]},{"destination":"/dev/mqueue","type":"mqueue","source":"mqueue","options":["nosuid","noexec","nodev"]},{"destination":"/sys","type":"sysfs","source":"sysfs","options":["nosuid","noexec","nodev","ro"]},{"destination":"/dev/shm","type":"bind","source":"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/8e5f48047dfc52c9ee043129580d5df9b70f6c0828d96fbb396fb269e114fbfd/shm","options":["rbind","ro"]}],"annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8e5f48047dfc52c9ee043129580d5df9b70f6c0828d96fbb396fb269e114fbfd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/default_curl-test_3feed600-56cb-4a73-857f-87b27bb65771"},"linux":{"resources":{"devices":[{"allow":false,"access":"rwm"}],"cpu":{"shares":2}},"cgroupsPath":"kubepods-besteffort-pod3feed600_56cb_4a73_857f_87b27bb65771.slice:cri-containerd:8e5f48047dfc52c9ee043129580d5df9b70f6c0828d96fbb396fb269e114fbfd","namespaces":[{"type":"pid"},{"type":"ipc"},{"type":"mount"}],"maskedPaths":["/proc/acpi","/proc/asound","/proc/kcore","/proc/keys","/proc/latency_stats","/proc/timer_list","/proc/timer_stats","/proc/sched_debug","/sys/firmware","/proc/scsi"],"readonlyPaths":["/proc/bus","/proc/fs","/proc/irq","/proc/sys","/proc/sysrq-trigger"]}}`
func TestUpdateEnvs(t *testing.T) {
path := filepath.Join("/tmp", "config.json")
defer os.Remove(path)
err := ioutil.WriteFile(path, []byte(configTemplate), 0644)
assert.Nil(t, err)
spec, err := LoadSpec(path)
assert.Nil(t, err)
m := map[string]string{"k1": "v1", "k2": "v2"}
err = UpdateEnvs(spec, m, false)
assert.Nil(t, err)
v1, ok := GetEnv(spec, "k1")
assert.Equal(t, true, ok)
assert.Equal(t, "v1", v1)
v2, ok := GetEnv(spec, "k2")
assert.Equal(t, true, ok)
assert.Equal(t, "v2", v2)
v3, ok := GetEnv(spec, "ENCLAVE_TYPE")
assert.Equal(t, true, ok)
assert.Equal(t, "intelSgx", v3)
err = SaveSpec(path, spec)
assert.Nil(t, err)
spec, err = LoadSpec(path)
assert.Nil(t, err)
v1, ok = GetEnv(spec, "k1")
assert.Equal(t, true, ok)
assert.Equal(t, "v1", v1)
v2, ok = GetEnv(spec, "k2")
assert.Equal(t, true, ok)
assert.Equal(t, "v2", v2)
v3, ok = GetEnv(spec, "ENCLAVE_TYPE")
assert.Equal(t, true, ok)
assert.Equal(t, "intelSgx", v3)
fmt.Printf("spec=%++v", spec.Process.Env)
}
# Signature
\ No newline at end of file
package client
import (
"bytes"
"crypto/tls"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/alibaba/inclavare-containers/shim/runtime/signature/types"
"github.com/golang/glog"
)
type SignStandard string
const (
PKCS1 SignStandard = "pkcs1"
)
// Client requests signatures from the signature server.
type Client interface {
Sign(data []byte) (signature []byte, publicKey []byte, err error)
GetStandard() SignStandard
}
//var _ Client = &pkcs1Client{}
type pkcs1Client struct {
internalClient *http.Client
serviceBaseURL *url.URL
standard SignStandard
}
func NewClient(standard SignStandard, serviceBaseURL *url.URL) Client {
switch standard {
case PKCS1:
return &pkcs1Client{
serviceBaseURL: serviceBaseURL,
standard: PKCS1,
}
default:
return &pkcs1Client{
serviceBaseURL: serviceBaseURL,
standard: PKCS1,
}
}
}
func (c *pkcs1Client) init() {
c.internalClient = &http.Client{
Transport: &http.Transport{
//TODO: verify server
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
},
}
}
func (c *pkcs1Client) Sign(data []byte) (signature []byte, publicKey []byte, err error) {
if c.internalClient == nil {
c.init()
}
var url string
if strings.HasSuffix(c.serviceBaseURL.String(), "/") {
url = fmt.Sprintf("%s%s", c.serviceBaseURL.String(), string(c.standard))
} else {
url = fmt.Sprintf("%s/%s", c.serviceBaseURL.String(), string(c.standard))
}
req, err := http.NewRequest("POST", url, bytes.NewBuffer(data))
if err != nil {
glog.Errorf("failed to new sign request, %v", err)
return nil, nil, err
}
req.Header.Set("Content-Type", "text/plain")
resp, err := c.internalClient.Do(req)
if err != nil {
glog.Errorf("request sign error, %v", err)
return nil, nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
err = fmt.Errorf("sign request failed with status code %d", resp.StatusCode)
glog.Error(err)
return nil, nil, err
}
signedBytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
glog.Errorf("failed to read sign response,%v", err)
return nil, nil, err
}
payload := &types.SignaturePayload{}
if err := json.Unmarshal(signedBytes, payload); err != nil {
glog.Errorf("failed to unmarshal sign response,%v", err)
return nil, nil, err
}
return []byte(payload.Signature), []byte(payload.PublicKey), nil
}
func (c *pkcs1Client) GetStandard() SignStandard {
return PKCS1
}
package client
import (
"net/url"
"testing"
"github.com/stretchr/testify/assert"
)
func Test_pkcs1Client_Sign(t *testing.T) {
baseUrl, err := url.Parse("https://47.102.121.174:8443/api/v1/signature")
assert.Nil(t, err)
client := NewClient(PKCS1, baseUrl)
signature, publicKey, err := client.Sign([]byte("Hello"))
assert.Nil(t, err)
t.Logf("%s\n%s\n", string(signature), string(publicKey))
}
package client
type Signature interface {
Sign() error
GetCertificate() (string, error)
}
package api
import (
"net/http"
"os"
"github.com/gin-gonic/gin"
)
func (s *ApiServer) installRoutes() {
loggerHandleFunc := s.middlewareLoggerWithWriter(os.Stdout)
r := s.router
r.HEAD("/", func(_ *gin.Context) {})
s.installHealthz()
{
g := r.Group("/api/v1/signature")
g.Use(loggerHandleFunc)
{
g.POST("/pkcs1", s.pkcs1Handler)
}
}
}
func (s ApiServer) installHealthz() {
r := s.router
r.GET("/ping", func(c *gin.Context) { c.String(http.StatusOK, "pong") })
r.GET("/healthz", func(c *gin.Context) { c.String(http.StatusOK, "ok") })
}
package api
import (
"crypto"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"crypto/x509"
"encoding/pem"
"io/ioutil"
"net/http"
"github.com/alibaba/inclavare-containers/shim/runtime/signature/types"
"github.com/golang/glog"
"github.com/gin-gonic/gin"
)
var rng = rand.Reader
func (s *ApiServer) pkcs1Handler(c *gin.Context) {
payload := &types.SignaturePayload{}
body, err := ioutil.ReadAll(c.Request.Body)
if err != nil {
glog.Errorf("failed to parse request body, err:%v", err.Error())
c.AbortWithStatus(http.StatusBadRequest)
return
}
hashed := sha256.Sum256(body)
signedBytes, err := rsa.SignPKCS1v15(rng, s.privateKey, crypto.SHA256, hashed[:])
if err != nil {
glog.Errorf("failed to sign request, err:%v", err.Error())
c.AbortWithStatus(http.StatusInternalServerError)
return
}
payload.Signature = string(signedBytes)
payload.PublicKey = string(pem.EncodeToMemory(&pem.Block{
Type: "RSA PUBLIC KEY",
Bytes: x509.MarshalPKCS1PublicKey(s.publicKey),
}))
c.JSON(http.StatusOK, payload)
}
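
As a counterpart to `pkcs1Handler`, here is a hedged verification sketch (not part of this PR): given the raw payload bytes, the `Signature` bytes, and the `PublicKey` PEM from `SignaturePayload`, it checks the same SHA-256 / RSASSA-PKCS1-v1_5 scheme. The package and function names are made up for illustration.

```go
package signatureverify

import (
	"crypto"
	"crypto/rsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"errors"
)

// VerifyPKCS1 checks a signature produced by pkcs1Handler: SHA-256 over the
// original payload, signed with RSASSA-PKCS1-v1_5, against the
// "RSA PUBLIC KEY" PEM block returned in SignaturePayload.PublicKey.
func VerifyPKCS1(payload, signature, publicKeyPEM []byte) error {
	block, _ := pem.Decode(publicKeyPEM)
	if block == nil {
		return errors.New("failed to decode public key PEM")
	}
	pub, err := x509.ParsePKCS1PublicKey(block.Bytes)
	if err != nil {
		return err
	}
	hashed := sha256.Sum256(payload)
	return rsa.VerifyPKCS1v15(pub, crypto.SHA256, hashed[:], signature)
}
```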
package api
import (
"fmt"
"io"
"strings"
"time"
"github.com/gin-gonic/gin"
)
var (
green = string([]byte{27, 91, 57, 55, 59, 52, 50, 109})
white = string([]byte{27, 91, 57, 48, 59, 52, 55, 109})
yellow = string([]byte{27, 91, 57, 55, 59, 52, 51, 109})
red = string([]byte{27, 91, 57, 55, 59, 52, 49, 109})
blue = string([]byte{27, 91, 57, 55, 59, 52, 52, 109})
magenta = string([]byte{27, 91, 57, 55, 59, 52, 53, 109})
cyan = string([]byte{27, 91, 57, 55, 59, 52, 54, 109})
reset = string([]byte{27, 91, 48, 109})
)
func (s *ApiServer) middlewareLoggerWithWriter(out io.Writer) gin.HandlerFunc {
return func(c *gin.Context) {
// Start timer
start := time.Now()
path := c.Request.URL.Path
// Process request
c.Next()
username := ""
if username_i, _ := c.Get("username"); username_i != nil {
username = username_i.(string)
}
end := time.Now()
// latency in seconds
latency := end.Sub(start)
clientIP := c.ClientIP()
method := c.Request.Method
statusCode := c.Writer.Status()
_, level := colorForStatus(statusCode)
comment := c.Errors.ByType(gin.ErrorTypePrivate).String()
var access_sys_tag []string
if ss, ok := c.Get("access-system"); ok {
access_sys_tag = append(access_sys_tag, ss.(string))
}
access_sys_tag_str := strings.Join(access_sys_tag, " ")
// logtime client_ip server_ip domain level method http_code url response_time user url_query msg
fmt.Fprintf(out, "%s %s %s %s %s %s %d %s %.3f %s %s %s `%s`\n",
end.Format("02/Jan/2006:15:04:05"),
clientIP,
"", //TODO: fix me, nodeIP
c.Request.Host,
level,
method,
statusCode,
path,
latency.Seconds(),
username,
access_sys_tag_str,
c.Request.URL.Query().Encode(),
comment,
)
}
}
func colorForStatus(code int) (string, string) {
switch {
case code >= 200 && code < 300:
return green, "INFO"
case code >= 300 && code < 400:
return white, "INFO"
case code >= 400 && code < 500:
return yellow, "WARN"
default:
return red, "ERROR"
}
}
func colorForMethod(method string) string {
switch method {
case "GET":
return blue
case "POST":
return cyan
case "PUT":
return yellow
case "DELETE":
return red
case "PATCH":
return green
case "HEAD":
return magenta
case "OPTIONS":
return white
default:
return reset
}
}
package api
import (
"crypto/rsa"
"crypto/x509"
"github.com/alibaba/inclavare-containers/shim/runtime/signature/server/conf"
"github.com/alibaba/inclavare-containers/shim/runtime/signature/server/util"
"github.com/gin-gonic/gin"
)
type ApiServer struct {
router *gin.Engine
listenAddr string
privateKey *rsa.PrivateKey
publicKey *rsa.PublicKey
certificate *x509.Certificate
}
func NewApiServer(listenAddr string, conf *conf.Config) (*ApiServer, error) {
privateKey, err := util.ParseRsaPrivateKey(conf.PrivateKeyPath)
if err != nil {
return nil, err
}
publicKey, err := util.ParseRsaPublicKey(conf.PublicKeyPath)
if err != nil {
return nil, err
}
s := &ApiServer{
router: gin.Default(),
listenAddr: listenAddr,
privateKey: privateKey,
publicKey: publicKey,
}
s.installRoutes()
return s, nil
}
func (s *ApiServer) RunForeground() error {
return s.router.Run(s.listenAddr)
}
package conf
type Config struct {
PrivateKeyPath string
PublicKeyPath string
}
package server
import (
"os"
"os/signal"
"syscall"
"github.com/alibaba/inclavare-containers/shim/runtime/signature/server/conf"
"github.com/alibaba/inclavare-containers/shim/runtime/signature/server/api"
"github.com/golang/glog"
)
type Server struct {
config *conf.Config
sigCh chan os.Signal
apiServer *api.ApiServer
}
func NewServer(conf *conf.Config) (*Server, error) {
apiSvr, err := api.NewApiServer(":9080", conf)
if err != nil {
glog.Errorf("failed to create ApiServer: %s", err.Error())
return nil, err
}
svr := &Server{
config: conf,
sigCh: make(chan os.Signal, 1),
apiServer: apiSvr,
}
// signal trap
signal.Notify(svr.sigCh, syscall.SIGINT)
return svr, nil
}
func (svr *Server) Start(stopChan <-chan struct{}) {
glog.Info("Starting HttpServer ...")
go func() {
if err := svr.apiServer.RunForeground(); err != nil {
panic(err)
}
}()
<-stopChan
}
package util
import (
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"errors"
"io/ioutil"
)
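// ParseRsaPrivateKey loads a PEM-encoded PKCS#1 RSA private key from the given file.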
func ParseRsaPrivateKey(file string) (*rsa.PrivateKey, error) {
priByte, err := ioutil.ReadFile(file)
if err != nil {
return nil, err
}
b, _ := pem.Decode(priByte)
if b == nil {
return nil, errors.New("error decoding private key")
}
priKey, err := x509.ParsePKCS1PrivateKey(b.Bytes)
if err != nil {
return nil, err
}
return priKey, nil
}
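// ParseX509Certificate loads a DER-encoded X.509 certificate from the given file (the bytes are parsed as-is, without PEM decoding).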
func ParseX509Certificate(file string) (*x509.Certificate, error) {
cerBytes, err := ioutil.ReadFile(file)
if err != nil {
return nil, err
}
cer, err := x509.ParseCertificate(cerBytes)
if err != nil {
return nil, errors.New("error parsing certificate")
}
return cer, nil
}
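// ParseRsaPublicKey loads a PEM-encoded PKIX (SubjectPublicKeyInfo) public key from the given file and requires it to be an RSA key.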
func ParseRsaPublicKey(file string) (*rsa.PublicKey, error) {
pubByte, err := ioutil.ReadFile(file)
if err != nil {
return nil, err
}
b, _ := pem.Decode(pubByte)
if b == nil {
return nil, errors.New("error decoding public key")
}
pubKey, err := x509.ParsePKIXPublicKey(b.Bytes)
if err != nil {
return nil, err
}
rsaKey, ok := pubKey.(*rsa.PublicKey)
if !ok {
return nil, errors.New("not an RSA public key")
}
return rsaKey, nil
}
package types
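// SignaturePayload is the JSON body returned by the signature server: the
// detached signature over the submitted signing material and the PEM-encoded
// public key that verifies it.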
type SignaturePayload struct {
Signature string `json:"signature"`
PublicKey string `json:"publicKey"`
}
package constants
const (
ConfigurationPath = "/etc/inclavare-containers/config.toml"
RuneOCIRuntime = "rune"
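// EnvKeyRuneCarrier is read from the container's OCI spec (see getCarrierKind)
// to select the enclave carrier, e.g. "occlum" or "graphene".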
EnvKeyRuneCarrier = "RUNE_CARRIER"
RuneDefaultWorkDirectory = "/run/rune"
)
package rune
import (
"errors"
)
type CarrierKind string
const (
Empty CarrierKind = ""
Occlum CarrierKind = "occlum"
Graphene CarrierKind = "graphene"
)
var ErrorUnknownCarrier = errors.New("unknown carrier")
package v2
import (
"io/ioutil"
"net/url"
"os"
"os/exec"
"path"
"path/filepath"
"github.com/alibaba/inclavare-containers/shim/runtime/config"
"github.com/alibaba/inclavare-containers/shim/runtime/carrier"
emptycarrier "github.com/alibaba/inclavare-containers/shim/runtime/carrier/empty"
"github.com/alibaba/inclavare-containers/shim/runtime/carrier/graphene"
"github.com/alibaba/inclavare-containers/shim/runtime/carrier/occlum"
signclient "github.com/alibaba/inclavare-containers/shim/runtime/signature/client"
"github.com/alibaba/inclavare-containers/shim/runtime/v2/rune"
"github.com/alibaba/inclavare-containers/shim/runtime/v2/rune/constants"
"github.com/containerd/containerd/mount"
"github.com/containerd/containerd/pkg/process"
taskAPI "github.com/containerd/containerd/runtime/v2/task"
specs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// carrierMain drives the rune carrier flow: build the unsigned enclave, generate its signing material, obtain a signature, and cascade it back into the enclave.
func (s *service) carrierMain(req *taskAPI.CreateTaskRequest) (carrier.Carrier, error) {
var err error
var carr carrier.Carrier
defer func() {
if err != nil && carr != nil {
carr.Cleanup()
}
}()
found, carrierKind, err := getCarrierKind(req.Bundle)
if err != nil {
return carr, err
}
if !found {
emptycarr, _ := emptycarrier.NewEmptyCarrier()
return emptycarr, nil
}
switch carrierKind {
case rune.Occlum:
if carr, err = occlum.NewOcclumCarrier(s.context, req.Bundle); err != nil {
return nil, err
}
// mount rootfs
err = mountRootfs(req)
defer unmountRootfs(req)
if err != nil {
return carr, err
}
// mount oci defined mounts
err = mountOCIOnRootfs(req.Bundle)
defer unmountOCIOnRootfs(req.Bundle)
if err != nil {
return carr, err
}
case rune.Graphene:
carr, err = graphene.NewGrapheneCarrier()
case rune.Empty:
carr, err = emptycarrier.NewEmptyCarrier()
default:
return carr, rune.ErrorUnknownCarrier
}
if err != nil {
return carr, err
}
unsignedEnclave, err := carr.BuildUnsignedEnclave(req, &carrier.BuildUnsignedEnclaveArgs{
Bundle: req.Bundle,
})
if err != nil {
return carr, err
}
commonArgs := carrier.CommonArgs{
Enclave: unsignedEnclave,
Config: "", //TODO
}
signingMaterial, err := carr.GenerateSigningMaterial(req, &commonArgs)
if err != nil {
return carr, err
}
var signatureFile string
if carrierKind != rune.Empty {
//TODO: Retry on failure.
/*publicKey, signature, err := remoteSign("https://10.0.8.126:8443/api/v1/signature", commonArgs.Enclave)
defer os.RemoveAll(path.Dir(publicKey))*/
//FIXME mock signature
publicKey, signature, err := mockSign(signingMaterial)
if err != nil {
return carr, err
}
defer os.RemoveAll(path.Dir(publicKey))
commonArgs.Key = publicKey
signatureFile = signature
}
signedEnclave, err := carr.CascadeEnclaveSignature(req, &carrier.CascadeEnclaveSignatureArgs{
CommonArgs: commonArgs,
SigningMaterial: signingMaterial,
Signature: signatureFile,
})
if err != nil {
return carr, err
}
logrus.Debugf("Finished carrier: %v, signedEnclave: %s", carr, signedEnclave)
//FIXME debug
//if carrierKind == rune.Occlum {
// time.Sleep(time.Minute * 3)
//}
return carr, nil
}
func getCarrierKind(bundlePath string) (found bool, value rune.CarrierKind, err error) {
configPath := path.Join(bundlePath, "config.json")
var spec *specs.Spec
spec, err = config.LoadSpec(configPath)
if err != nil {
return
}
v, ok := config.GetEnv(spec, constants.EnvKeyRuneCarrier)
if !ok {
return
}
value = rune.CarrierKind(v)
if value == rune.Occlum || value == rune.Graphene || value == rune.Empty {
found = true
return
}
err = errors.Wrapf(rune.ErrorUnknownCarrier, "unexpected carrier kind: %v", value)
return
}
func mockSign(signingMaterialFile string) (publicKeyFile, signatureFile string, err error) {
dir, _ := ioutil.TempDir("/tmp", "signature-")
privateKeyFile := filepath.Join(dir, "private_key.pem")
publicKeyFile = filepath.Join(dir, "public_key.pem")
signatureFile = filepath.Join(dir, "signature.dat")
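// The -3 flag makes openssl use public exponent 3; SGX enclave signing expects a 3072-bit RSA key with exponent 3.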
cmd := exec.Command("openssl", "genrsa", "-out", privateKeyFile, "-3", "3072")
if _, err = cmd.Output(); err != nil {
return
}
cmd = exec.Command("openssl", "rsa", "-in", privateKeyFile, "-pubout", "-out", publicKeyFile)
if _, err = cmd.Output(); err != nil {
return
}
cmd = exec.Command("openssl", "dgst", "-sha256", "-out", signatureFile, "-sign", privateKeyFile, "-keyform", "PEM", signingMaterialFile)
if _, err = cmd.Output(); err != nil {
return
}
return
}
func remoteSign(serverUrl, signingMaterial string) (publicKeyFile, signatureFile string, err error) {
su, err := url.Parse(serverUrl)
if err != nil {
return
}
sigClient := signclient.NewClient(signclient.PKCS1, su)
bytes, err := ioutil.ReadFile(signingMaterial)
if err != nil {
return
}
dir, err := ioutil.TempDir("/tmp", "signature-")
if err != nil {
return
}
signatureFile = filepath.Join(dir, "signature.dat")
publicKeyFile = filepath.Join(dir, "public_key.pem")
signature, publicKey, err := sigClient.Sign(bytes)
if err != nil {
return "", "", err
}
if err := ioutil.WriteFile(signatureFile, signature, 0644); err != nil {
return "", "", err
}
if err := ioutil.WriteFile(publicKeyFile, publicKey, 0644); err != nil {
return "", "", err
}
return
}
func mountRootfs(req *taskAPI.CreateTaskRequest) error {
var mounts []process.Mount
for _, m := range req.Rootfs {
mounts = append(mounts, process.Mount{
Type: m.Type,
Source: m.Source,
Target: m.Target,
Options: m.Options,
})
}
rootfs := ""
if len(mounts) > 0 {
rootfs = filepath.Join(req.Bundle, "rootfs")
if err := os.Mkdir(rootfs, 0711); err != nil && !os.IsExist(err) {
return err
}
}
for _, rm := range mounts {
m := &mount.Mount{
Type: rm.Type,
Source: rm.Source,
Options: rm.Options,
}
if err := m.Mount(rootfs); err != nil {
return errors.Wrapf(err, "failed to mount rootfs component %v", m)
}
logrus.Infof("mount success. src: %s, dst: %s, type: %s, options: %s", m.Source, rootfs, m.Type, m.Options)
}
return nil
}
func mountOCIOnRootfs(bundle string) error {
configPath := filepath.Join(bundle, "config.json")
spec, err := config.LoadSpec(configPath)
if err != nil {
return err
}
mounts := spec.Mounts
for _, rm := range mounts {
m := &mount.Mount{
Type: rm.Type,
Source: rm.Source,
Options: rm.Options,
}
target := filepath.Clean(filepath.Join(bundle, "rootfs", rm.Destination))
s, err := os.Stat(rm.Source)
if err != nil {
if os.IsNotExist(err) {
continue
}
return err
}
if s.IsDir() {
os.MkdirAll(target, s.Mode())
} else {
os.MkdirAll(path.Dir(target), 0755)
os.Create(target)
}
if err := m.Mount(target); err != nil {
return errors.Wrapf(err, "failed to mount OCI mount %v", m)
}
logrus.Infof("mount success. src: %s, dst: %s, type: %s, options: %s", m.Source, target, m.Type, m.Options)
}
return nil
}
func unmountOCIOnRootfs(bundle string) error {
configPath := filepath.Join(bundle, "config.json")
spec, err := config.LoadSpec(configPath)
if err != nil {
return err
}
mounts := spec.Mounts
for _, rm := range mounts {
target := filepath.Clean(filepath.Join(bundle, "rootfs", rm.Destination))
if err := mount.UnmountAll(target, 0); err != nil {
logrus.WithError(err).Warnf("failed to cleanup mount point %s", target)
}
}
return nil
}
func unmountRootfs(req *taskAPI.CreateTaskRequest) error {
rootfs := ""
if len(req.Rootfs) > 0 {
rootfs = filepath.Join(req.Bundle, "rootfs")
if err := os.Mkdir(rootfs, 0711); err != nil && !os.IsExist(err) {
return err
}
}
if err2 := mount.UnmountAll(rootfs, 0); err2 != nil {
logrus.WithError(err2).Warn("failed to cleanup rootfs mount")
}
return nil
}
TAGS
tags
.*.swp
tomlcheck/tomlcheck
toml.test
language: go
go:
- 1.1
- 1.2
- 1.3
- 1.4
- 1.5
- 1.6
- tip
install:
- go install ./...
- go get github.com/BurntSushi/toml-test
script:
- export PATH="$PATH:$HOME/gopath/bin"
- make test
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/v0.4.0/versions/en/toml-v0.4.0.md)
The MIT License (MIT)
Copyright (c) 2013 TOML authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
install:
go install ./...
test: install
go test -v
toml-test toml-test-decoder
toml-test -encoder toml-test-encoder
fmt:
gofmt -w *.go */*.go
colcheck *.go */*.go
tags:
find ./ -name '*.go' -print0 | xargs -0 gotags > TAGS
push:
git push origin master
git push github master
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Spec: https://github.com/toml-lang/toml
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
Documentation: https://godoc.org/github.com/BurntSushi/toml
Installation:
```bash
go get github.com/BurntSushi/toml
```
Try the toml validator:
```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```
[![Build Status](https://travis-ci.org/BurntSushi/toml.svg?branch=master)](https://travis-ci.org/BurntSushi/toml) [![GoDoc](https://godoc.org/github.com/BurntSushi/toml?status.svg)](https://godoc.org/github.com/BurntSushi/toml)
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
return err
}
```
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchically. e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key is given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/toml-lang/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The reason the tests are in a separate project is so that they can be used by
any implementation of TOML. Namely, it is language agnostic.
*/
package toml
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}
au BufWritePost *.go silent!make tags > /dev/null 2>&1
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be militating
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
package winio
//go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zsyscall_windows.go file.go pipe.go sd.go fileinfo.go privilege.go backup.go hvsock.go