Commit f251bbca authored by: J jingxiaolu

isula-build: initial commit

Initial commit for isula-build
Signed-off-by: jingxiaolu <lujingxiao@huawei.com>
Parent dd038a0a
# Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
# isula-build licensed under the Mulan PSL v2.
# You can use this software according to the terms and conditions of the Mulan PSL v2.
# You may obtain a copy of Mulan PSL v2 at:
# http://license.coscl.org.cn/MulanPSL2
# THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
# PURPOSE.
# See the Mulan PSL v2 for more details.
# Author: Feiyu Yang
# Create: 2020-03-01
# Description: Dockerfile for generating proto files
FROM euleros
ARG arch
ENV GIT_SSL_NO_VERIFY true
# set the yum.repo
RUN echo -e "[base]\n\
name=EulerOS-2.0SP3 base\n\
baseurl=https://developer.huawei.com/ict/site-euleros/euleros/repo/yum/2.3/os/${arch}/\n\
enabled=1\n\
gpgcheck=1\n\
gpgkey=http://developer.huawei.com/ict/site-euleros/euleros/repo/yum/2.3/os/RPM-GPG-KEY-EulerOS\n" \
> /etc/yum.repos.d/euleros.repo
# install building tools
RUN yum makecache && yum install -y \
golang automake libtool glibc-headers gcc-c++ make
# set GO env
ENV GOPATH /go
ENV PATH $PATH:/go/bin:/usr/local/go/bin
ENV GO111MODULE off
# install google/protobuf
ENV PROTOBUF_PROTOC_COMMIT 7faab5eeebf6aa62d89bf6b3cc1eaea711dea192
ENV PROTOBUF_VERSION 3.11.2
RUN curl -fkL https://github.com/google/protobuf/releases/download/v${PROTOBUF_VERSION}/protobuf-cpp-${PROTOBUF_VERSION}.tar.gz \
| tar -zxC /opt && cd /opt/protobuf-${PROTOBUF_VERSION} \
&& ./autogen.sh && ./configure && make && make install
# install gogo/protobuf
ENV GOGO_PROTOBUF 1.3.1
RUN mkdir -p $GOPATH/src/github.com/gogo/protobuf \
&& curl -fkL https://github.com/gogo/protobuf/archive/v${GOGO_PROTOBUF}.tar.gz \
| tar -zxC $GOPATH/src/github.com/gogo/protobuf --strip-components 1 \
&& cd $GOPATH/src/github.com/gogo/protobuf/protoc-gen-gogo && go build . && go install
# add agent repository
ADD . ${GOPATH}/src/isula.org/isula-build
# default working dir should be agent dir
WORKDIR ${GOPATH}/src/isula.org/isula-build
ENTRYPOINT ["bash", "-c"]
CMD ["/bin/bash"]
木兰宽松许可证, 第2版
木兰宽松许可证, 第2版
2020年1月 http://license.coscl.org.cn/MulanPSL2
您对“软件”的复制、使用、修改及分发受木兰宽松许可证,第2版(“本许可证”)的如下条款的约束:
0. 定义
“软件”是指由“贡献”构成的许可在“本许可证”下的程序和相关文档的集合。
“贡献”是指由任一“贡献者”许可在“本许可证”下的受版权法保护的作品。
“贡献者”是指将受版权法保护的作品许可在“本许可证”下的自然人或“法人实体”。
“法人实体”是指提交贡献的机构及其“关联实体”。
“关联实体”是指,对“本许可证”下的行为方而言,控制、受控制或与其共同受控制的机构,此处的控制是指有受控方或共同受控方至少50%直接或间接的投票权、资金或其他有价证券。
1. 授予版权许可
每个“贡献者”根据“本许可证”授予您永久性的、全球性的、免费的、非独占的、不可撤销的版权许可,您可以复制、使用、修改、分发其“贡献”,不论修改与否。
2. 授予专利许可
每个“贡献者”根据“本许可证”授予您永久性的、全球性的、免费的、非独占的、不可撤销的(根据本条规定撤销除外)专利许可,供您制造、委托制造、使用、许诺销售、销售、进口其“贡献”或以其他方式转移其“贡献”。前述专利许可仅限于“贡献者”现在或将来拥有或控制的其“贡献”本身或其“贡献”与许可“贡献”时的“软件”结合而将必然会侵犯的专利权利要求,不包括对“贡献”的修改或包含“贡献”的其他结合。如果您或您的“关联实体”直接或间接地,就“软件”或其中的“贡献”对任何人发起专利侵权诉讼(包括反诉或交叉诉讼)或其他专利维权行动,指控其侵犯专利权,则“本许可证”授予您对“软件”的专利许可自您提起诉讼或发起维权行动之日终止。
3. 无商标许可
“本许可证”不提供对“贡献者”的商品名称、商标、服务标志或产品名称的商标许可,但您为满足第4条规定的声明义务而必须使用除外。
4. 分发限制
您可以在任何媒介中将“软件”以源程序形式或可执行形式重新分发,不论修改与否,但您必须向接收者提供“本许可证”的副本,并保留“软件”中的版权、商标、专利及免责声明。
5. 免责声明与责任限制
“软件”及其中的“贡献”在提供时不带任何明示或默示的担保。在任何情况下,“贡献者”或版权所有者不对任何人因使用“软件”或其中的“贡献”而引发的任何直接或间接损失承担责任,不论因何种原因导致或者基于何种法律理论,即使其曾被建议有此种损失的可能性。
6. 语言
“本许可证”以中英文双语表述,中英文版本具有同等法律效力。如果中英文版本存在任何冲突不一致,以中文版为准。
条款结束
如何将木兰宽松许可证,第2版,应用到您的软件
如果您希望将木兰宽松许可证,第2版,应用到您的新软件,为了方便接收者查阅,建议您完成如下三步:
1, 请您补充如下声明中的空白,包括软件名、软件的首次发表年份以及您作为版权人的名字;
2, 请您在软件包的一级目录下创建以“LICENSE”为名的文件,将整个许可证文本放入该文件中;
3, 请将如下声明文本放入每个源文件的头部注释中。
Copyright (c) [Year] [name of copyright holder]
[Software Name] is licensed under Mulan PSL v2.
You can use this software according to the terms and conditions of the Mulan PSL v2.
You may obtain a copy of Mulan PSL v2 at:
http://license.coscl.org.cn/MulanPSL2
THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
See the Mulan PSL v2 for more details.
Mulan Permissive Software License,Version 2
Mulan Permissive Software License,Version 2 (Mulan PSL v2)
January 2020 http://license.coscl.org.cn/MulanPSL2
Your reproduction, use, modification and distribution of the Software shall be subject to Mulan PSL v2 (this License) with the following terms and conditions:
0. Definition
Software means the program and related documents which are licensed under this License and comprise all Contribution(s).
Contribution means the copyrightable work licensed by a particular Contributor under this License.
Contributor means the Individual or Legal Entity who licenses its copyrightable work under this License.
Legal Entity means the entity making a Contribution and all its Affiliates.
Affiliates means entities that control, are controlled by, or are under common control with the acting entity under this License, ‘control’ means direct or indirect ownership of at least fifty percent (50%) of the voting power, capital or other securities of controlled or commonly controlled entity.
1. Grant of Copyright License
Subject to the terms and conditions of this License, each Contributor hereby grants to you a perpetual, worldwide, royalty-free, non-exclusive, irrevocable copyright license to reproduce, use, modify, or distribute its Contribution, with modification or not.
2. Grant of Patent License
Subject to the terms and conditions of this License, each Contributor hereby grants to you a perpetual, worldwide, royalty-free, non-exclusive, irrevocable (except for revocation under this Section) patent license to make, have made, use, offer for sale, sell, import or otherwise transfer its Contribution, where such patent license is only limited to the patent claims owned or controlled by such Contributor now or in future which will be necessarily infringed by its Contribution alone, or by combination of the Contribution with the Software to which the Contribution was contributed. The patent license shall not apply to any modification of the Contribution, and any other combination which includes the Contribution. If you or your Affiliates directly or indirectly institute patent litigation (including a cross claim or counterclaim in a litigation) or other patent enforcement activities against any individual or entity by alleging that the Software or any Contribution in it infringes patents, then any patent license granted to you under this License for the Software shall terminate as of the date such litigation or activity is filed or taken.
3. No Trademark License
No trademark license is granted to use the trade names, trademarks, service marks, or product names of Contributor, except as required to fulfill notice requirements in Section 4.
4. Distribution Restriction
You may distribute the Software in any medium with or without modification, whether in source or executable forms, provided that you provide recipients with a copy of this License and retain copyright, patent, trademark and disclaimer statements in the Software.
5. Disclaimer of Warranty and Limitation of Liability
THE SOFTWARE AND CONTRIBUTION IN IT ARE PROVIDED WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED. IN NO EVENT SHALL ANY CONTRIBUTOR OR COPYRIGHT HOLDER BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO ANY DIRECT, OR INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE SOFTWARE OR THE CONTRIBUTION IN IT, NO MATTER HOW IT’S CAUSED OR BASED ON WHICH LEGAL THEORY, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
6. Language
THIS LICENSE IS WRITTEN IN BOTH CHINESE AND ENGLISH, AND THE CHINESE VERSION AND ENGLISH VERSION SHALL HAVE THE SAME LEGAL EFFECT. IN THE CASE OF DIVERGENCE BETWEEN THE CHINESE AND ENGLISH VERSIONS, THE CHINESE VERSION SHALL PREVAIL.
END OF THE TERMS AND CONDITIONS
How to Apply the Mulan Permissive Software License,Version 2 (Mulan PSL v2) to Your Software
To apply the Mulan PSL v2 to your work, for easy identification by recipients, you are suggested to complete following three steps:
i Fill in the blanks in following statement, including insert your software name, the year of the first publication of your software, and your name identified as the copyright owner;
ii Create a file named “LICENSE” which contains the whole context of this License in the first directory of your software package;
iii Attach the statement to the appropriate annotated syntax at the beginning of each source file.
Copyright (c) [Year] [name of copyright holder]
[Software Name] is licensed under Mulan PSL v2.
You can use this software according to the terms and conditions of the Mulan PSL v2.
You may obtain a copy of Mulan PSL v2 at:
http://license.coscl.org.cn/MulanPSL2
THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
See the Mulan PSL v2 for more details.
PREFIX := /usr
BINDIR := $(PREFIX)/bin
SOURCES := $(shell find . 2>&1 | grep -E '.*\.(c|h|go)$$')
GIT_COMMIT ?= $(if $(shell git rev-parse --short HEAD),$(shell git rev-parse --short HEAD),$(error "git failed"))
SOURCE_DATE_EPOCH ?= $(if $(shell date +%s),$(shell date +%s),$(error "date failed"))
VERSION := $(shell cat ./VERSION)
ARCH := $(shell arch)
EXTRALDFLAGS :=
LDFLAGS := -X isula.org/isula-build/pkg/version.GitCommit=$(GIT_COMMIT) \
-X isula.org/isula-build/pkg/version.BuildInfo=$(SOURCE_DATE_EPOCH) \
-X isula.org/isula-build/pkg/version.Version=$(VERSION) \
$(EXTRALDFLAGS)
BUILDTAGS := seccomp
BUILDFLAGS := -tags "$(BUILDTAGS)"
SAFEBUILDFLAGS := -w -s -buildid=IdByIsula -buildmode=pie -extldflags=-static -extldflags=-zrelro -extldflags=-znow $(LDFLAGS)
IMAGE_BUILDARGS := $(if $(http_proxy), --build-arg http_proxy=$(http_proxy))
IMAGE_BUILDARGS += $(if $(https_proxy), --build-arg https_proxy=$(https_proxy))
IMAGE_BUILDARGS += --build-arg arch=$(ARCH)
IMAGE_NAME := isula-build-dev
GO := go
# test for go module support
ifeq ($(shell go help mod >/dev/null 2>&1 && echo true), true)
export GO_BUILD=GO111MODULE=on; $(GO) mod vendor; $(GO) build -mod=vendor
else
export GO_BUILD=$(GO) build
endif
all: isula-build isula-builder
.PHONY: isula-build
isula-build: ./cmd/cli
@echo "Making isula-build..."
$(GO_BUILD) -ldflags '$(LDFLAGS)' -o bin/isula-build $(BUILDFLAGS) ./cmd/cli
@echo "isula-build done!"
.PHONY: isula-builder
isula-builder: ./cmd/daemon
@echo "Making isula-builder..."
$(GO_BUILD) -ldflags '$(LDFLAGS)' -o bin/isula-builder $(BUILDFLAGS) ./cmd/daemon
@echo "isula-builder done!"
.PHONY: safe
safe:
@echo "Safe building isula-build..."
$(GO_BUILD) -ldflags '$(SAFEBUILDFLAGS)' -o bin/isula-build $(BUILDFLAGS) ./cmd/cli
$(GO_BUILD) -ldflags '$(SAFEBUILDFLAGS)' -o bin/isula-builder $(BUILDFLAGS) ./cmd/daemon
@echo "Safe build isula-build done!"
.PHONY: debug
debug:
@echo "Debug building isula-build..."
@cp -f ./hack/profiling ./daemon/profiling.go
$(GO_BUILD) -ldflags '$(SAFEBUILDFLAGS)' -o bin/isula-build $(BUILDFLAGS) ./cmd/cli
$(GO_BUILD) -ldflags '$(SAFEBUILDFLAGS)' -o bin/isula-builder $(BUILDFLAGS) ./cmd/daemon
@rm -f ./daemon/profiling.go
@echo "Debug build isula-build done!"
.PHONY: build-image
build-image:
isula-build ctr-img build -f Dockerfile.proto ${IMAGE_BUILDARGS} -o isulad:${IMAGE_NAME}:latest .
tests: test-integration test-unit
.PHONY: test-integration
test-integration:
@echo "Integration test starting..."
@./hack/dockerfile_tests.sh
@echo "Integration test done!"
.PHONY: test-unit
test-unit:
@echo "Unit test starting..."
@./hack/unit_test.sh
@echo "Unit test done!"
.PHONY: proto
proto:
@echo "Generating protobuf..."
isula run -i --rm --runtime runc -v ${PWD}:/go/src/isula.org/isula-build ${IMAGE_NAME} ./hack/generate_proto.sh
@echo "Protobuf files have been generated!"
.PHONY: install
install:
install -D -m0755 bin/isula-build $(BINDIR)
install -D -m0755 bin/isula-builder $(BINDIR)
.PHONY: checkall
checkall:
@echo "Static check start for whole project"
@./hack/static_check.sh all
@echo "Static check project finished"
.PHONY: check
check:
@echo "Static check start for last commit"
@./hack/static_check.sh last
@echo "Static check last commit finished"
.PHONY: clean
clean:
rm -rf ./bin
# isula-build
#### Description
isula build kit for building container images
#### Software Architecture
Software architecture description
#### Installation
1. xxxx
2. xxxx
3. xxxx
#### Instructions
1. xxxx
2. xxxx
3. xxxx
#### Contribution
1. Fork the repository
2. Create Feat_xxx branch
3. Commit your code
4. Create Pull Request
#### Gitee Feature
1. You can use Readme\_XXX.md to support different languages, such as Readme\_en.md, Readme\_zh.md
2. Gitee blog [blog.gitee.com](https://blog.gitee.com)
3. Explore open source project [https://gitee.com/explore](https://gitee.com/explore)
4. The most valuable open source project [GVP](https://gitee.com/gvp)
5. The manual of Gitee [https://gitee.com/help](https://gitee.com/help)
6. The most popular members [https://gitee.com/gitee-stars/](https://gitee.com/gitee-stars/)
# isula-build
isula-build is a tool provided by the iSula team for building container images. It can quickly build a container image according to the given `Dockerfile`.
The binary file `isula-build` is a CLI tool and `isula-builder` runs as a daemon responding to all requests from the client.
It provides a command line tool that can be used to
- build an image from a Dockerfile
- list all images in local store
- remove specified images
We also:
- be compatible with Dockerfile grammar
- support extended file attributes, e.g., linux security, IMA, EVM, user, trusted
- support different image formats, e.g., docker-archive, isulad
## Getting Started
### Install on openEuler
#### Install from source
For compiling from source on openEuler, these packages are required on your OS:
- make
- golang (version 1.13 or higher)
- btrfs-progs-devel
- device-mapper-devel
- glib2-devel
- gpgme-devel
- libassuan-devel
- libseccomp-devel
- git
- bzip2
- go-md2man
- systemd-devel
You can install them on openEuler with `yum`:
```sh
sudo yum install make btrfs-progs-devel device-mapper-devel glib2-devel gpgme-devel libassuan-devel libseccomp-devel git bzip2 go-md2man systemd-devel golang
```
Get the source code with `git`:
```sh
git clone https://gitee.com/openeuler/isula-build.git
```
Please note that `isula-build` uses Go modules to manage its vendored packages. Before compiling, please make sure you can connect to the default goproxy server.
If you are working behind a proxy, please refer to [Go Module Proxy](https://proxy.golang.org) and set `GOPROXY=yourproxy`.
Enter the source code directory and begin compiling:
```sh
cd isula-build
sudo make
```
After a successful build, you can install the `isula-build` binaries to `/usr/bin/` simply with:
```sh
sudo make install
```
To run the server of `isula-build` for the first time, the default configuration files should be installed:
```sh
sudo mkdir -p /etc/isula-build/ && \
install -p -m 600 ./cmd/daemon/config/configuration.toml /etc/isula-build/configuration.toml && \
install -p -m 600 ./cmd/daemon/config/storage.toml /etc/isula-build/storage.toml && \
install -p -m 600 ./cmd/daemon/config/registries.toml /etc/isula-build/registries.toml && \
install -p -m 600 ./cmd/daemon/config/policy.json /etc/isula-build/policy.json
```
#### Install as RPM package
`isula-build` is integrated with `openeuler/isula-kits`. For details on how to compile and install `isula-build` as an RPM package, please refer to `isula-kits`.
### Run the daemon server
#### Run as system service
To manage `isula-builder` with systemd, please refer to the following steps:
```sh
sudo install -p -m 640 ./isula-build.service /etc/systemd/system/isula-build.service
sudo systemctl enable isula-build
sudo systemctl start isula-build
```
### Example on building container images
#### Requirements
For building container images, `runc` is required.
You can get `runc` by installing `docker` or `docker-runc` on your openEuler distro:
```sh
sudo yum install docker
```
or
```sh
sudo yum install docker-runc
```
#### Building image
Here is an example of building a container image; for more details, please refer to [usage](./doc/usage.md).
Create a simple buildDir and write the Dockerfile
```dockerfile
FROM alpine:latest
LABEL foo=bar
COPY ./* /home/dir1/
```
Build the image in the buildDir
```sh
$ sudo isula-build ctr-img build -f Dockerfile .
STEP 1: FROM alpine:latest
STEP 2: LABEL foo=bar
STEP 3: COPY ./* /home/dir1/
Getting image source signatures
Copying blob sha256:e9235582825a2691b1c91a96580e358c99acfd48082cbf1b92fd2ba4a791efc3
Copying blob sha256:dc3bca97af8b81508c343b13a08493c7809b474dc25986fcbae90c6722201be3
Copying config sha256:9ec92a8819f9da1b06ea9ff83307ff859af2959b70bfab101f6a325b1a211549
Writing manifest to image destination
Storing signatures
Build success with image id: 9ec92a8819f9da1b06ea9ff83307ff859af2959b70bfab101f6a325b1a211549
```
#### Listing images
```sh
$ sudo isula-build ctr-img images
----------------- ----------- ---------------- ----------------------------------------------
REPOSITORY TAG IMAGE ID CREATED
------------------ ---------- ---------------- ----------------------------------------------
<none> latest 9ec92a8819f9 2020-06-11 07:45:39.265106109 +0000 UTC
```
#### Removing image
```sh
$ sudo isula-build ctr-img rm 9ec92a8819f9
Deleted: sha256:86567f7a01b04c662a9657aac436e8d63ecebb26da4252abb016d177721fa11b
```
### Integration with iSulad or docker
Integration with `iSulad` or `docker` is described in [integration](./doc/integration.md).
## Precautions
Constraints, limitations and the differences from `docker build` are listed in [precautions](./doc/precautions.md).
## How to Contribute
We are happy to provide guidance for new contributors.
Please sign the [CLA](https://openeuler.org/en/cla.html) before contributing.
## Licensing
isula-build is licensed under the Mulan PSL v2.
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Description: iSulad Build
// Author: Jingxiao Lu
// Create: 2020-01-19
syntax = "proto3";
package isula.build.v1;
import "google/protobuf/empty.proto";
import "google/protobuf/timestamp.proto";
service Control {
// Build requests a new image building
rpc Build(BuildRequest) returns (stream BuildResponse);
// Status pipes the image building process log back to client
rpc Status(StatusRequest) returns (stream StatusResponse);
// List lists all images in isula-builder
rpc List(ListRequest) returns (ListResponse);
// Version requests the version information of isula-builder
rpc Version(google.protobuf.Empty) returns (VersionResponse);
// Remove sends an image remove request to isula-builder
rpc Remove(RemoveRequest) returns (stream RemoveResponse);
// HealthCheck requests a health checking in isula-builder
rpc HealthCheck(google.protobuf.Empty) returns (HealthCheckResponse);
// Login requests to access image registry with username and password
rpc Login(LoginRequest) returns (LoginResponse);
// Logout requests to logout registry and delete any credentials
rpc Logout(LogoutRequest) returns (LogoutResponse);
// Load requests an image tar load
rpc Load(LoadRequest) returns (LoadResponse);
}
message BuildRequest {
// buildType is the type of this build action
string buildType = 1;
// buildID is a unique id for this building process
string buildID = 2;
// contextDir is the working directory of building context
string contextDir = 3;
// fileContent is the content of Dockerfile
string fileContent = 4;
// output is the way of exporting built image
string output = 5;
// buildArgs are args for this building
repeated string buildArgs = 6;
// proxy marks whether to inherit proxy environment variables from the host
bool proxy = 7;
// iidfile is the file the client writes the image ID to
string iidfile = 8;
// buildStatic is used to hold the options for static build
BuildStatic buildStatic = 9;
// encryptKey is key to encrypt items in buildArgs
string encryptKey = 10;
}
message BuildStatic {
// buildTime is a fixed time for binary equivalence build
google.protobuf.Timestamp buildTime = 1;
}
message BuildResponse {
// imageID is the ID of built image
string imageID = 1;
// data pipes the output stream back to client
bytes data = 2;
}
message StatusRequest {
// buildID is a unique id for this building process, same as in BuildRequest
string buildID = 1;
}
message StatusResponse {
// content pipes the image building process log back to client
string content = 1;
}
message ListRequest {
// imageName lists specific images with imageName
string imageName = 1;
}
message ListResponse {
message ImageInfo {
string repository = 1;
string tag = 2;
string id = 3;
string created = 4;
string size = 5;
}
// ImageInfo carries the basic info of an image
repeated ImageInfo images = 1;
}
message VersionResponse {
// version is isula-builder version
string version = 1;
// goVersion is the golang version used for compiling isula-builder
string goVersion = 2;
// gitCommit is the git commit ID for the compiled isula-builder
string gitCommit = 3;
// buildTime is the time when isula-builder was compiled
string buildTime = 4;
// osArch is the arch isula-builder was built on
string osArch = 5;
}
message RemoveRequest {
// imageID lists the images to be deleted
repeated string imageID = 1;
// all tells isula-builder to delete all images
bool all = 2;
// prune tells isula-builder to delete all untagged images
bool prune = 3;
}
message RemoveResponse {
// layerMessage is the response message indicating whether the images were deleted successfully or an error occurred
string layerMessage = 1;
}
message HealthCheckResponse {
enum ServingStatus {
UNKNOWN = 0;
SERVING = 1;
NOT_SERVING = 2;
}
// status is the health status of isula-builder
ServingStatus status = 1;
}
message LoginRequest {
// server is the registry address to log in to
string server = 1;
// username is the username used to log in
string username = 2;
// password is the password used to log in
string password = 3;
// key is the AES key used to encrypt and decrypt the password
string key = 4;
}
message LoginResponse {
// login response sent to front-end
string content = 1;
}
message LogoutRequest {
// server to logout
string server = 1;
// logout from all registries
bool all = 2;
}
message LogoutResponse {
// logout response sent to front-end
string result = 1;
}
message LoadRequest {
// path is the path of the file to load
string path = 1;
}
message LoadResponse {
// imageID is the ID of loaded image
string imageID = 1;
}
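For orientation, here is a minimal sketch of calling this service through the generated gRPC stubs. The import path `isula.org/isula-build/api/services` matches the one used by the daemon code below; the `NewControlClient` constructor name, the gogo `types.Empty` helper, and the unix socket address are assumptions based on common protoc-gen-gogo/grpc conventions, not part of this commit.
```go
// Illustrative only: assumes gogo-generated bindings and a daemon listening
// on the unix socket path below (both are assumptions, not from this commit).
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/gogo/protobuf/types"
	"google.golang.org/grpc"

	pb "isula.org/isula-build/api/services"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Hypothetical daemon address, for illustration only.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/isula_build.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	cli := pb.NewControlClient(conn)

	// Version takes google.protobuf.Empty and returns the daemon build info.
	ver, err := cli.Version(ctx, &types.Empty{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("isula-builder %s (go %s, commit %s)\n",
		ver.Version, ver.GoVersion, ver.GitCommit)
}
```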
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Author: Jingxiao Lu
// Create: 2020-03-20
// Description: Builder related functions
// Package builder includes Builder related functions
package builder
import (
"context"
"github.com/pkg/errors"
constant "isula.org/isula-build"
pb "isula.org/isula-build/api/services"
"isula.org/isula-build/builder/dockerfile"
"isula.org/isula-build/exporter"
"isula.org/isula-build/store"
)
// Builder is an interface for building an image
type Builder interface {
Build() (imageID string, err error)
StatusChan() <-chan string
CleanResources() error
OutputPipeWrapper() *exporter.PipeWrapper
}
// NewBuilder initializes a Builder
func NewBuilder(ctx context.Context, store store.Store, req *pb.BuildRequest, runtimePath, buildDir, runDir string) (Builder, error) {
switch req.GetBuildType() {
case constant.BuildContainerImageType:
return dockerfile.NewBuilder(ctx, store, req, runtimePath, buildDir, runDir)
default:
return nil, errors.Errorf("the build type %q is not supported", req.GetBuildType())
}
}
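For context, a hedged sketch of how a caller could drive the Builder interface defined above; the runtime path, working directories, store handle and request are placeholders, not values taken from this commit.
```go
// Illustrative only: drives the Builder interface from a hypothetical caller.
// Paths and the package name are made up for the example.
package example

import (
	"context"
	"fmt"
	"log"

	pb "isula.org/isula-build/api/services"
	"isula.org/isula-build/builder"
	"isula.org/isula-build/store"
)

func runBuild(ctx context.Context, s store.Store, req *pb.BuildRequest) (string, error) {
	b, err := builder.NewBuilder(ctx, s, req, "/usr/bin/runc", "/tmp/isula-build/data", "/tmp/isula-build/run")
	if err != nil {
		return "", err
	}
	// Always release the data and run directories, even if the build fails.
	defer func() {
		if cerr := b.CleanResources(); cerr != nil {
			log.Printf("clean resources: %v", cerr)
		}
	}()

	// StatusChan streams the build log; drain it so the builder never blocks.
	go func() {
		for msg := range b.StatusChan() {
			fmt.Print(msg)
		}
	}()

	return b.Build()
}
```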
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Author: Zhongkai Lei, Feiyu Yang
// Create: 2020-03-20
// Description: ADD and COPY command related functions
package dockerfile
import (
"os"
"path/filepath"
"strconv"
"strings"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/chrootarchive"
"github.com/containers/storage/pkg/fileutils"
"github.com/containers/storage/pkg/idtools"
securejoin "github.com/cyphar/filepath-securejoin"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
constant "isula.org/isula-build"
"isula.org/isula-build/image"
"isula.org/isula-build/util"
)
type copyDetails map[string][]string
type copyOptions struct {
// is ADD command
isAdd bool
// raw string of "--chown" value
chown string
ignore []string
// path from the request, or the mountpoint of an image or stage referenced by "--from"
contextDir string
copyDetails copyDetails
}
type addOptions struct {
// matcher matches the files that should be ignored
matcher *fileutils.PatternMatcher
chownPair idtools.IDPair
// extract is true when the tar file should be extracted
extract bool
}
// resolveCopyDest gets the secure dest path and checks its validity
func resolveCopyDest(rawDest, workDir, mountpoint string) (string, error) {
// in normal cases, the value of workdir obtained here must be an absolute path
dest := util.MakeAbsolute(rawDest, workDir)
secureMp, err := securejoin.SecureJoin("", mountpoint)
if err != nil {
return "", errors.Wrapf(err, "failed to resolve symlinks for mountpoint %s", mountpoint)
}
secureDest, err := securejoin.SecureJoin(secureMp, dest)
if err != nil {
return "", errors.Wrapf(err, "failed to resolve symlinks for destination %s", dest)
}
// ensure the destination path is in the mountpoint
if !strings.HasPrefix(secureDest, secureMp) {
return "", errors.Errorf("failed to resolve copy destination %s", rawDest)
}
if util.HasSlash(rawDest) && secureDest[len(secureDest)-1] != os.PathSeparator {
secureDest += string(os.PathSeparator)
}
return secureDest, nil
}
// resolveCopySource gets the secureSource
func resolveCopySource(isAdd bool, rawSources []string, dest, contextDir string) (copyDetails, error) {
details := make(copyDetails, len(rawSources))
for _, src := range rawSources {
if strings.HasPrefix(src, "http://") || strings.HasPrefix(src, "https://") {
if !isAdd {
return nil, errors.Errorf("source can not be a URL for COPY")
}
if _, ok := details[dest]; !ok {
details[dest] = []string{}
}
details[dest] = append(details[dest], src)
continue
}
secureSrc, err := securejoin.SecureJoin(contextDir, src)
if err != nil {
return nil, errors.Wrapf(err, "%q is outside of the build context dir %q", src, secureSrc)
}
// if the destination is a folder, we may need to get the correct name
// when the source file is a symlink
if util.HasSlash(dest) {
_, srcName := filepath.Split(src)
_, secureName := filepath.Split(secureSrc)
if srcName != secureName {
newDest := filepath.Join(dest, srcName)
if _, ok := details[newDest]; !ok {
details[newDest] = []string{}
}
details[newDest] = append(details[newDest], secureSrc)
continue
}
}
if _, ok := details[dest]; !ok {
details[dest] = []string{}
}
details[dest] = append(details[dest], secureSrc)
}
return details, nil
}
// getCopyContextDir gets the contextDir of the stage or image referenced by "from"
func (c *cmdBuilder) getCopyContextDir(from string) (string, func(), error) {
// the "from" parameter is a stage name
if i, ok := c.stage.builder.stageAliasMap[from]; ok {
c.stage.builder.Logger().
Debugf("Get context dir by stage name %q, context dir %q", from, c.stage.builder.stageBuilders[i].mountpoint)
return c.stage.builder.stageBuilders[i].mountpoint, nil, nil
}
// try to treat "from" as a stage index
index, err := strconv.Atoi(from)
if err == nil {
if index >= 0 && index < c.stage.position {
logrus.Debugf("Get context dir by stage index %q, context dir %q", index, c.stage.builder.stageBuilders[index].mountpoint)
return c.stage.builder.stageBuilders[index].mountpoint, nil, nil
}
}
// "from" is neither name nor index of stage, consider that "from" is image description
imgDesc, err := prepareImage(&image.PrepareImageOptions{
Ctx: c.ctx,
FromImage: from,
SystemContext: c.stage.buildOpt.systemContext,
Store: c.stage.localStore,
Reporter: c.stage.builder.cliLog,
})
if err != nil {
return "", nil, err
}
cleanup := func() {
if _, uerr := c.stage.localStore.Unmount(imgDesc.ContainerDesc.ContainerID, false); uerr != nil {
logrus.Warnf("Unmount layer[%s] for COPY from[%s] failed: %v", imgDesc.ContainerDesc.ContainerID, from, uerr)
}
if derr := c.stage.localStore.DeleteContainer(imgDesc.ContainerDesc.ContainerID); derr != nil {
logrus.Warnf("Delete layer[%s] for COPY from[%s] failed: %v", imgDesc.ContainerDesc.ContainerID, from, derr)
}
}
return imgDesc.ContainerDesc.Mountpoint, cleanup, nil
}
func (c *cmdBuilder) doCopy(opt *copyOptions) error {
c.stage.builder.Logger().Debugf("copyOptions is %#v", opt)
matcher, err := util.GetIgnorePatternMatcher(opt.ignore, opt.contextDir, filepath.Dir(c.stage.mountpoint))
if err != nil {
return err
}
chownPair, err := util.GetChownOptions(opt.chown, c.stage.mountpoint)
if err != nil {
return err
}
addOption := &addOptions{
matcher: matcher,
chownPair: chownPair,
extract: opt.isAdd,
}
for dest, srcs := range opt.copyDetails {
for _, src := range srcs {
if err = c.add(src, dest, addOption); err != nil {
return err
}
}
}
return nil
}
func (c *cmdBuilder) executeAddAndCopy(isAdd bool) error {
allowFlags := map[string]bool{"chown": true}
if !isAdd {
allowFlags["from"] = true
}
// 1. get "--chown" , "--from" and args from c.line
flags, args := getFlagsAndArgs(c.line, allowFlags)
if len(args) < 2 {
return errors.Errorf("the COPY/ADD args must contain at least src and dest")
}
// 2. get trustworthy and absolute destination
dest := args[len(args)-1]
// if there are multiple sources, dest must be a directory.
if len(args) > 2 && !util.HasSlash(dest) {
return errors.Errorf("%q is not a directory", dest)
}
finalDest, err := resolveCopyDest(dest, c.stage.docker.Config.WorkingDir, c.stage.mountpoint)
if err != nil {
return err
}
// 3. get context dir
contextDir := c.stage.builder.buildOpts.ContextDir
if from, ok := flags["from"]; ok {
var cleanup func()
if contextDir, cleanup, err = c.getCopyContextDir(from); err != nil {
return err
}
defer func() {
if cleanup != nil {
cleanup()
}
}()
}
// 4. get all of secure sources
details, err := resolveCopySource(isAdd, args[:len(args)-1], finalDest, contextDir)
if err != nil {
return err
}
var chown string
if flag, ok := flags["chown"]; ok {
chown = flag
}
// 5. do copy
copyOpt := &copyOptions{
isAdd: isAdd,
chown: chown,
ignore: c.stage.builder.ignores,
contextDir: contextDir,
copyDetails: details,
}
return c.doCopy(copyOpt)
}
func addDirectory(realSrc, dest string, opt *addOptions) error {
if err := idtools.MkdirAllAndChownNew(dest, constant.DefaultSharedDirMode, opt.chownPair); err != nil {
return errors.Wrapf(err, "error creating directory %q", dest)
}
logrus.Debugf("Copying directory from %q to %q", realSrc, dest)
return filepath.Walk(realSrc, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if matched, merr := util.IsMatched(opt.matcher, path); merr != nil || matched {
if info.IsDir() {
merr = filepath.SkipDir
}
return merr
}
relativePath, rerr := filepath.Rel(realSrc, path)
if rerr != nil {
return rerr
}
destPath := filepath.Join(dest, relativePath)
if info.IsDir() {
// if the dest path is a file, remove it first
fi, err := os.Lstat(destPath)
if err == nil && !fi.IsDir() {
if err := os.Remove(destPath); err != nil {
return err
}
}
return idtools.MkdirAllAndChownNew(destPath, info.Mode(), opt.chownPair)
}
if util.IsSymbolFile(path) {
return util.CopySymbolFile(path, destPath, opt.chownPair)
}
return util.CopyFile(path, destPath, opt.chownPair)
})
}
// addFile adds a single file. If extract is true and src is an archive file,
// extract it into dest; otherwise treat it as a normal file.
func addFile(realSrc, globFile, dest string, opt *addOptions) error {
if opt.matcher != nil {
if matched, err := util.IsMatched(opt.matcher, realSrc); matched || err != nil {
return err
}
}
if !opt.extract || !archive.IsArchivePath(realSrc) {
if strings.HasSuffix(dest, string(os.PathSeparator)) || util.IsDirectory(dest) {
dest = filepath.Join(dest, filepath.Base(globFile))
}
logrus.Debugf("Copying single file from %q to %q", realSrc, dest)
return util.CopyFile(realSrc, dest, opt.chownPair)
}
// The src is an archive file and extract is true, so extract it
logrus.Debugf("Extracting from %q to %q", realSrc, dest)
extractArchive := chrootarchive.UntarPathAndChown(nil, nil, nil, nil)
return extractArchive(realSrc, dest)
}
func (c *cmdBuilder) add(src, dest string, opt *addOptions) error {
// the src is a URL
if strings.HasPrefix(src, "http://") || strings.HasPrefix(src, "https://") {
return errors.New("URL is not supported yet")
}
globFiles, err := filepath.Glob(src)
if err != nil {
return errors.Wrapf(err, "failed to get the glob by %q", src)
}
if len(globFiles) == 0 {
return errors.Errorf("found no file that matches %q", src)
}
for _, globFile := range globFiles {
realSrc, err := filepath.EvalSymlinks(globFile)
if err != nil {
return err
}
realSrcFileInfo, err := os.Stat(realSrc)
if err != nil {
return err
}
// if it is a directory, walk it and continue
if realSrcFileInfo.IsDir() {
if err = addDirectory(realSrc, dest, opt); err != nil {
return err
}
continue
}
// realSrc is a single file
if err = addFile(realSrc, globFile, dest, opt); err != nil {
return err
}
}
return nil
}
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Author: Zhongkai Lei, Feiyu Yang
// Create: 2020-03-20
// Description: ADD and COPY command related functions tests
package dockerfile
import (
"fmt"
"io/ioutil"
"math/rand"
"os"
"os/exec"
"path/filepath"
"reflect"
"testing"
"github.com/containers/storage/pkg/idtools"
securejoin "github.com/cyphar/filepath-securejoin"
"gotest.tools/assert"
constant "isula.org/isula-build"
"isula.org/isula-build/util"
)
type testEnvironment struct {
contextDir string
workDir string
mountpoint string
}
func prepareResolveDestEnvironment(prepareCmd, workdir, mountpoint string) (*testEnvironment, string) {
testDir, _ := ioutil.TempDir("", "test_add_copy")
realMountpoint := filepath.Join(testDir, "mountpoint")
os.MkdirAll(realMountpoint, constant.DefaultRootDirMode)
if mountpoint != realMountpoint {
realMountpoint, _ = securejoin.SecureJoin("", mountpoint)
}
contextDir := filepath.Join(testDir, "contextDir")
os.Chdir(testDir)
if prepareCmd != "" {
cmd := exec.Command("/bin/sh", "-c", prepareCmd)
err := cmd.Run()
fmt.Println(err)
}
realWorkDir := filepath.Join(testDir, mountpoint, workdir)
os.MkdirAll(realWorkDir, constant.DefaultRootDirMode)
os.Chdir(realWorkDir)
return &testEnvironment{
contextDir: contextDir,
workDir: workdir,
mountpoint: realMountpoint,
}, testDir
}
func prepareResolveSourceEnvironment(prepareCmd, contextDir string) {
os.Chdir(contextDir)
if prepareCmd != "" {
cmd := exec.Command("/bin/sh", "-c", prepareCmd)
cmd.Run()
}
}
func TestResolveCopyDest(t *testing.T) {
type args struct {
rawDest string
workDir string
mountpoint string
}
tests := []struct {
name string
args args
want string
wantErr bool
prepareCmd string
}{
{
args: args{
rawDest: "foo/bar",
workDir: "/workdir",
mountpoint: "mountpoint",
},
},
{
args: args{
rawDest: "foo/bar",
workDir: "",
mountpoint: "mountpoint",
},
},
{
args: args{
rawDest: "foo/bar",
workDir: ".",
mountpoint: "mountpoint",
},
},
{
args: args{
rawDest: "foo/bar/",
workDir: ".",
mountpoint: "mountpoint",
},
},
{
args: args{
rawDest: ".",
workDir: "/foo",
mountpoint: "mountpoint",
},
},
{
args: args{
rawDest: "foo/bar",
workDir: ".",
mountpoint: "mountPointSoft",
},
prepareCmd: "ln -sf mountpoint mountPointSoft",
},
}
for _, tt := range tests {
testEnvironment, testDir := prepareResolveDestEnvironment(tt.prepareCmd, tt.args.workDir, tt.args.mountpoint)
t.Run(tt.name, func(t *testing.T) {
got, err := resolveCopyDest(tt.args.rawDest, testEnvironment.workDir, testEnvironment.mountpoint)
if (err != nil) != tt.wantErr {
t.Errorf("resolveCopyDest() error = %v, wantErr %v", err, tt.wantErr)
return
}
tt.want = filepath.Join(testEnvironment.mountpoint, testEnvironment.workDir, tt.args.rawDest)
if tt.args.rawDest[len(tt.args.rawDest)-1] == os.PathSeparator {
tt.want += string(os.PathSeparator)
}
if got != tt.want {
t.Errorf("resolveCopyDest() got = %v, want %v", got, tt.want)
}
})
os.RemoveAll(testDir)
}
}
func TestResolveCopySource(t *testing.T) {
testDir := "/tmp/TestResolveCopySource/"
testContextDir := filepath.Join(testDir, "testContextDir")
type args struct {
isAdd bool
rawSources []string
dest string
contextDir string
}
tests := []struct {
name string
args args
prepareCmd string
want copyDetails
wantErr bool
}{
{
args: args{
isAdd: true,
rawSources: []string{"http://foo/bar", "https://foo/bar"},
dest: "/foo/bar",
},
want: copyDetails{
"/foo/bar": []string{"http://foo/bar", "https://foo/bar"},
},
},
{
args: args{
isAdd: false,
rawSources: []string{"http://foo/bar", "https://foo/bar"},
dest: "/foo/bar",
},
wantErr: true,
},
{
args: args{
isAdd: true,
rawSources: []string{"foo", "bar"},
dest: "/foo/bar/",
},
prepareCmd: "touch foo && touch bar",
want: copyDetails{
"/foo/bar/": []string{filepath.Join(testContextDir, "foo"), filepath.Join(testContextDir, "bar")},
},
wantErr: false,
},
{
args: args{
isAdd: true,
rawSources: []string{"foo", "bar"},
dest: "/foo/bar/",
},
prepareCmd: "touch foo && ln -sf foo bar",
want: copyDetails{
"/foo/bar/": []string{filepath.Join(testContextDir, "foo")},
"/foo/bar/bar": []string{filepath.Join(testContextDir, "foo")},
},
wantErr: false,
},
{
args: args{
isAdd: true,
rawSources: []string{"foo", "bar"},
dest: "/foo/bar",
},
prepareCmd: "touch foo && ln -sf foo bar",
want: copyDetails{
"/foo/bar": []string{filepath.Join(testContextDir, "foo"), filepath.Join(testContextDir, "foo")},
},
wantErr: false,
},
{
args: args{
isAdd: true,
rawSources: []string{"bar"},
dest: "/foo/bar/",
},
prepareCmd: "touch ../foo && ln -sf ../foo bar",
want: copyDetails{
"/foo/bar/bar": []string{filepath.Join(testContextDir, "foo")},
},
wantErr: false,
},
}
for _, tt := range tests {
err := os.MkdirAll(testContextDir, constant.DefaultRootDirMode)
assert.NilError(t, err)
t.Run(tt.name, func(t *testing.T) {
prepareResolveSourceEnvironment(tt.prepareCmd, testContextDir)
got, err := resolveCopySource(tt.args.isAdd, tt.args.rawSources, tt.args.dest, testContextDir)
if (err != nil) != tt.wantErr {
t.Errorf("resolveCopySource() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("resolveCopySource() got = %v, want %v", got, tt.want)
}
})
os.RemoveAll(testDir)
}
}
func TestAddFile(t *testing.T) {
realSrc := fmt.Sprintf("/tmp/test-%d", rand.Int())
dest := fmt.Sprintf("/tmp/test2-%d", rand.Int())
err := exec.Command("/bin/sh", "-c", "touch "+realSrc).Run()
assert.NilError(t, err)
opt := &addOptions{}
err = addFile(realSrc, realSrc, dest, opt)
assert.NilError(t, err)
_, err = os.Stat(dest)
assert.NilError(t, err)
err = os.Remove(realSrc)
assert.NilError(t, err)
err = os.Remove(dest)
assert.NilError(t, err)
tarFile := fmt.Sprintf("/tmp/a-%d.tar.gz", rand.Int())
srcFile1 := fmt.Sprintf("/tmp/test-%d", rand.Int())
srcFile2 := fmt.Sprintf("/tmp/test2-%d", rand.Int())
err = exec.Command("/bin/sh", "-c", "touch "+srcFile1+" "+srcFile2+
" && tar -czf "+tarFile+" "+srcFile1+" "+srcFile2).Run()
assert.NilError(t, err)
opt.extract = true
err = addFile(tarFile, tarFile, dest, opt)
assert.NilError(t, err)
fi, err := os.Stat(dest)
assert.NilError(t, err)
assert.Equal(t, fi.IsDir(), true)
fi, err = os.Stat(dest + srcFile1)
assert.NilError(t, err)
assert.Equal(t, fi.Name(), filepath.Base(srcFile1))
err = os.RemoveAll(dest)
assert.NilError(t, err)
err = os.Remove(srcFile1)
assert.NilError(t, err)
err = os.Remove(srcFile2)
assert.NilError(t, err)
err = os.Remove(tarFile)
assert.NilError(t, err)
}
func TestAdd(t *testing.T) {
ignores := []string{"a", "b"}
contextDir := fmt.Sprintf("/tmp/context-%d", rand.Int())
contextDir2 := fmt.Sprintf("/tmp/context-%d", rand.Int())
matcher, err := util.GetIgnorePatternMatcher(ignores, contextDir, "")
assert.NilError(t, err)
file1 := contextDir + "/a"
file2 := contextDir + "/b"
dir := contextDir + "/dir"
file3 := dir + "/c"
err = exec.Command("/bin/sh", "-c", "mkdir -p "+contextDir+" && touch "+file1+" "+file2).Run()
assert.NilError(t, err)
err = exec.Command("/bin/sh", "-c", "mkdir -p "+dir+" && touch "+file3).Run()
assert.NilError(t, err)
c := cmdBuilder{}
opt := &addOptions{
matcher: matcher,
chownPair: idtools.IDPair{UID: 1000, GID: 1001},
extract: true,
}
err = c.add(contextDir+"/*", contextDir2+"/", opt)
assert.NilError(t, err)
_, err = os.Stat(contextDir2 + "/" + filepath.Base(file1))
assert.Equal(t, os.IsNotExist(err), true)
_, err = os.Stat(contextDir2 + "/" + filepath.Base(file2))
assert.Equal(t, os.IsNotExist(err), true)
_, err = os.Stat(contextDir2 + "/" + filepath.Base(dir))
assert.Equal(t, os.IsNotExist(err), true)
_, err = os.Stat(contextDir2 + "/" + filepath.Base(file3))
assert.NilError(t, err)
err = os.RemoveAll(contextDir)
assert.NilError(t, err)
err = os.RemoveAll(contextDir2)
assert.NilError(t, err)
}
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Author: iSula Team
// Create: 2020-03-20
// Description: Builder related functions
package dockerfile
import (
"bytes"
"context"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"regexp"
"sort"
"strings"
"time"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
constant "isula.org/isula-build"
pb "isula.org/isula-build/api/services"
dockerfile "isula.org/isula-build/builder/dockerfile/parser"
"isula.org/isula-build/exporter"
"isula.org/isula-build/image"
"isula.org/isula-build/pkg/logger"
"isula.org/isula-build/pkg/parser"
"isula.org/isula-build/store"
"isula.org/isula-build/util"
)
// BuildOptions holds the options for building an image
type BuildOptions struct {
BuildArgs map[string]string
ContextDir string
File string
Iidfile string
Output []string
ProxyFlag bool
Tag string
}
// Builder is the object to build a Dockerfile
type Builder struct {
cliLog *logger.Logger
playbook *parser.PlayBook
buildID string
ctx context.Context
localStore store.Store
buildOpts BuildOptions
runtimePath string
dataDir string
runDir string
dockerfileDigest string
pipeWrapper *exporter.PipeWrapper
buildTime *time.Time
ignores []string
headingArgs map[string]string
reservedArgs map[string]string
unusedArgs map[string]string
stageBuilders []*stageBuilder
// stageAliasMap holds the index of each stage that has been given an alias
// e.g. FROM foo AS bar -> map[string]int{"bar":1}
stageAliasMap map[string]int
}
// NewBuilder initializes a Builder
func NewBuilder(ctx context.Context, store store.Store, req *pb.BuildRequest, runtimePath, buildDir, runDir string) (*Builder, error) {
b := &Builder{
ctx: ctx,
buildID: req.BuildID,
cliLog: logger.NewCliLogger(constant.CliLogBufferLen),
unusedArgs: make(map[string]string),
headingArgs: make(map[string]string),
reservedArgs: make(map[string]string),
localStore: store,
runtimePath: runtimePath,
dataDir: buildDir,
runDir: runDir,
}
args, err := b.parseBuildArgs(req.GetBuildArgs(), req.GetEncryptKey())
if err != nil {
return nil, errors.Wrap(err, "parse build-arg failed")
}
b.buildOpts = BuildOptions{
ContextDir: req.GetContextDir(),
File: req.GetFileContent(),
BuildArgs: args,
ProxyFlag: req.GetProxy(),
Iidfile: req.GetIidfile(),
}
b.parseStaticBuildOpts(req)
b.buildOpts.Tag = parseTag(req.Output)
// prepare workdirs for dockerfile builder
for _, dir := range []string{buildDir, runDir} {
if err = os.MkdirAll(dir, constant.DefaultRootDirMode); err != nil {
return nil, err
}
defer func(dir string) {
if err != nil {
if rerr := os.RemoveAll(dir); rerr != nil {
logrus.WithField(util.LogKeyBuildID, b.buildID).
Warnf("Removing dir in rollback failed: %v", rerr)
}
}
}(dir)
}
if err = b.parseOutput(req.Output); err != nil {
return nil, err
}
return b, nil
}
func (b *Builder) parseOutput(output string) error {
var (
pipeWrapper *exporter.PipeWrapper
err error
)
segments := strings.Split(output, ":")
expt := segments[0]
if util.IsClientExporter(expt) {
if pipeWrapper, err = exporter.NewPipeWrapper(b.runDir, expt); err != nil {
return err
}
// update the output path with pipe file to request
output = expt + ":" + pipeWrapper.PipeFile
if b.buildOpts.Tag != "" {
output = output + ":" + b.buildOpts.Tag
}
b.pipeWrapper = pipeWrapper
}
b.buildOpts.Output = []string{output}
return nil
}
// Logger adds the "buildID" attribute to build logs
func (b *Builder) Logger() *logrus.Entry {
return logrus.WithField(util.LogKeyBuildID, b.ctx.Value(util.LogFieldKey(util.LogKeyBuildID)))
}
func (b *Builder) parseBuildArgs(buildArgs []string, key string) (map[string]string, error) {
args := make(map[string]string, len(buildArgs))
for _, arg := range buildArgs {
if len(key) != 0 {
v, err := util.DecryptAES(arg, key)
if err != nil {
return nil, err
}
arg = v
}
kv := strings.SplitN(arg, "=", 2)
if len(kv) > 1 {
args[kv[0]] = kv[1]
}
}
return args, nil
}
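With an empty encryption key, the loop above simply splits each entry on the first "=". A small within-package sketch of the expected result (illustrative, not part of this commit):
```go
// Illustrative only: a snippet inside package dockerfile (not part of this
// commit) showing how build-args are split when no encryption key is set.
package dockerfile

import "fmt"

func demoParseBuildArgs() {
	b := &Builder{}
	args, _ := b.parseBuildArgs([]string{"HTTP_PROXY=http://proxy:3128", "VER=1.0=rc1", "DEBUG"}, "")
	fmt.Println(args["HTTP_PROXY"]) // "http://proxy:3128"
	fmt.Println(args["VER"])        // "1.0=rc1" - SplitN keeps everything after the first "="
	_, ok := args["DEBUG"]
	fmt.Println(ok) // false - entries without "=" are dropped by the len(kv) > 1 check
}
```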
func (b *Builder) parseStaticBuildOpts(req *pb.BuildRequest) {
if buildStatic := req.GetBuildStatic(); buildStatic != nil {
t := buildStatic.GetBuildTime()
if buildTime, err := time.Parse(time.RFC3339, t.String()); err == nil {
b.buildTime = &buildTime
}
}
}
func (b *Builder) parseFiles() error {
p, err := parser.NewParser(parser.DefaultParser)
if err != nil {
return errors.Wrap(err, "create parser failed")
}
srcHasher := digest.Canonical.Digester()
rc := bytes.NewBufferString(b.buildOpts.File)
reader := io.TeeReader(rc, srcHasher.Hash())
playbook, err := p.Parse(reader, false)
if err != nil {
return errors.Wrap(err, "parse dockerfile failed")
}
hash := srcHasher.Digest().String()
parts := strings.SplitN(hash, ":", 2)
b.dockerfileDigest = parts[1]
if playbook.Warnings != nil {
warn := fmt.Sprintf("Parse dockerfile got warnings: %v\n", playbook.Warnings)
b.Logger().Warnf(warn)
b.cliLog.Print(warn)
}
b.playbook = playbook
ignores, err := p.ParseIgnore(b.buildOpts.ContextDir)
if err != nil {
return errors.Wrap(err, "parse .dockerignore failed")
}
b.ignores = ignores
return nil
}
func (b *Builder) newStageBuilders() error {
var err error
// 1. analyze the ARGs before first FROM command
if err = b.usedHeadingArgs(); err != nil {
return errors.Wrapf(err, "resolve heading ARGs failed")
}
// 2. loop stages for analyzing FROM command and creating StageBuilders
b.stageAliasMap = make(map[string]int, len(b.playbook.Pages))
for stageIdx, stage := range b.playbook.Pages {
// new stage and analyze from command
sb := newStageBuilder(stageIdx, stage.Name)
if sb.fromImage, sb.fromStageIdx, err = analyzeFrom(stage.Lines[0], stageIdx, b.stageAliasMap, b.searchArg); err != nil {
return err
}
sb.rawStage = stage
sb.builder = b
sb.env = make(map[string]string)
sb.localStore = b.localStore
// get registry from "fromImage"
server, err := util.ParseServer(sb.fromImage)
if err != nil {
return err
}
sb.buildOpt.systemContext.DockerCertPath = filepath.Join(constant.DefaultCertRoot, server)
b.stageBuilders = append(b.stageBuilders, sb)
}
return nil
}
// usedHeadingArgs checks heading args against the inputted build-args:
// if a heading ARG has no default value and does not match any build-arg, it has no effect in this build;
// if a heading ARG has a default value and does not match any build-arg, it takes effect with its default value;
// if a heading ARG has a default value and matches a build-arg, it takes effect with the value specified by the build-arg
func (b *Builder) usedHeadingArgs() error {
var (
buildArgs = util.CopyMapStringString(b.buildOpts.BuildArgs)
headingArgs = make(map[string]string, len(b.playbook.HeadingArgs))
reserved = make(map[string]string, len(constant.ReservedArgs))
resolveArg = func(s string) string {
if v, ok := headingArgs[s]; ok {
return v
}
return ""
}
)
for _, s := range b.playbook.HeadingArgs {
kv := strings.Split(s, "=")
// try word expansion for k in headingArgs; after it is resolved, replace it with the new value
k, err := dockerfile.ResolveParam(kv[0], false, resolveArg)
if err != nil {
return errors.Wrapf(err, "word expansion for heading ARG %q failed", kv[0])
}
buildArg, inBuildArgs := buildArgs[k]
if inBuildArgs {
// if this heading arg is activated by --build-arg, assign it to headingArgs;
// since this build-arg is used in this build, delete it from buildArgs (those not deleted become unusedArgs)
headingArgs[k] = buildArg
delete(buildArgs, k)
} else {
if len(kv) < 2 {
// this heading ARG has no default value and is not activated by a build-arg, so it is not used for this build
continue
}
// try word expansion for v in headingArgs
v, err := dockerfile.ResolveParam(kv[1], false, resolveArg)
if err != nil {
return errors.Wrapf(err, "word expansion for heading ARG %q failed", s)
}
headingArgs[k] = v
}
}
for k, v := range buildArgs {
if constant.ReservedArgs[k] {
reserved[k] = v
delete(buildArgs, k)
}
}
b.unusedArgs = util.CopyMapStringString(buildArgs)
b.reservedArgs = reserved
b.headingArgs = headingArgs
return nil
}
func (b *Builder) searchArg(arg string) string {
// supports the standard bash modifiers ${variable:-word} and ${variable:+word}
if strings.Contains(arg, ":-") {
subs := strings.Split(arg, ":-")
if v, exist := b.headingArgs[subs[0]]; exist {
delete(b.unusedArgs, arg)
return v
}
if len(subs) < 2 {
return ""
}
return subs[1]
}
if strings.Contains(arg, ":+") {
subs := strings.Split(arg, ":+")
if _, exist := b.headingArgs[subs[0]]; !exist || len(subs) < 2 {
return ""
}
return subs[1]
}
// only accepts heading args when parsing params in FROM command
if v, exist := b.headingArgs[arg]; exist {
delete(b.unusedArgs, arg)
return v
}
return ""
}
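To make the two modifier branches concrete, here is a within-package sketch of the values searchArg is expected to return for a given set of heading args (illustrative, not part of this commit):
```go
// Illustrative only: expected behaviour of searchArg for the two supported
// bash-style modifiers, assuming a builder with one resolved heading arg.
package dockerfile

import "fmt"

func demoSearchArg() {
	b := &Builder{
		headingArgs: map[string]string{"VER": "1.0"},
		unusedArgs:  map[string]string{},
	}
	fmt.Println(b.searchArg("VER:-2.0"))  // "1.0"   - VER is defined, the default is ignored
	fmt.Println(b.searchArg("OS:-linux")) // "linux" - OS is undefined, the default is used
	fmt.Println(b.searchArg("VER:+new"))  // "new"   - VER is defined, the alternate value is used
	fmt.Println(b.searchArg("OS:+new"))   // ""      - OS is undefined, empty result
	fmt.Println(b.searchArg("VER"))       // "1.0"   - plain lookup in the heading args
}
```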
func analyzeFrom(line *parser.Line, stageIdx int, stageMap map[string]int, resolveArg func(string) string) (string, int, error) {
fromImage, err := image.ResolveImageName(line.Cells[0].Value, resolveArg)
if err != nil {
return "", 0, err
}
fromStageIdx := -1
if idx, exist := stageMap[fromImage]; exist {
fromStageIdx = idx
}
// if this command is of the form "FROM foo AS bar" (3 is the cell count without the command name FROM),
// which means this stage may be referenced later, so mark it
if len(line.Cells) == 3 {
stageName := line.Cells[2].Value
stageMap[stageName] = stageIdx
}
return fromImage, fromStageIdx, nil
}
func getFlagsAndArgs(line *parser.Line, allowFlags map[string]bool) (map[string]string, []string) {
args := make([]string, 0, len(line.Cells))
for _, c := range line.Cells {
args = append(args, c.Value)
}
flags := make(map[string]string, len(line.Flags))
for flag, value := range line.Flags {
if _, ok := allowFlags[flag]; ok {
flags[flag] = value
}
}
return flags, args
}
// Build makes the image
func (b *Builder) Build() (string, error) {
var (
executeTimer = b.cliLog.StartTimer("\nTotal")
err error
imageID string
)
// 1. parseFiles
if err = b.parseFiles(); err != nil {
return "", err
}
// 2. pre-handle Playbook
if err = b.newStageBuilders(); err != nil {
return "", err
}
// 6. defer cleanup (deferred here so it runs after steps 3-5)
defer func() {
b.cleanup()
}()
// 3. loop StageBuilders for building
for _, stage := range b.stageBuilders {
stageTimer := b.cliLog.StartTimer(fmt.Sprintf("Stage %d", stage.position))
// update FROM from name to imageID if it is based on previous stage
if idx := stage.fromStageIdx; idx != -1 {
stage.fromImage = b.stageBuilders[idx].imageID
}
imageID, err = stage.stageBuild(b.ctx)
b.cliLog.StopTimer(stageTimer)
b.Logger().Debugln(b.cliLog.GetCmdTime(stageTimer))
if err != nil {
b.Logger().Errorf("Builder[%s] build for stage[%s] failed for: %v", b.buildID, stage.name, err)
return "", errors.Wrapf(err, "building image for stage[%s] failed", stage.name)
}
}
// 4. export images
if err = b.export(imageID); err != nil {
return "", errors.Wrapf(err, "exporting images failed")
}
// 5. output imageID
if err = b.writeImageID(imageID); err != nil {
return "", errors.Wrapf(err, "writing image ID failed")
}
b.cliLog.StopTimer(executeTimer)
b.Logger().Debugf("Time Cost:\n%s", b.cliLog.Summary())
return imageID, nil
}
func (b *Builder) cleanup() {
// 1. warn the user about unused build-args, if any
if len(b.unusedArgs) != 0 {
var unused []string
for k := range b.unusedArgs {
unused = append(unused, k)
}
sort.Strings(unused)
b.cliLog.Print("[Warning] One or more build-args %v were not consumed\n", unused)
}
// 2. cleanup the stage resources
for _, stage := range b.stageBuilders {
if err := stage.delete(); err != nil {
b.Logger().Warnf("Failed to cleanup stage resources for stage %q: %v", stage.name, err)
}
}
// 3. close channel for status
b.cliLog.CloseContent()
}
func (b *Builder) export(imageID string) error {
exportTimer := b.cliLog.StartTimer("EXPORT")
if b.buildOpts.Tag != "" {
if serr := b.localStore.SetNames(imageID, []string{b.buildOpts.Tag}); serr != nil {
return errors.Wrap(serr, "set tag for image error")
}
}
var retErr error
for _, o := range b.buildOpts.Output {
exOpts := exporter.ExportOptions{
Ctx: b.ctx,
SystemContext: image.GetSystemContext(),
ReportWriter: b.cliLog,
}
if exErr := exporter.Export(imageID, o, exOpts, b.localStore); exErr != nil {
b.Logger().Errorf("Image %s output to %s failed with: %v", imageID, o, exErr)
retErr = exErr
continue
}
b.Logger().Infof("Image %s output to %s completed", imageID, o)
}
b.cliLog.StopTimer(exportTimer)
b.Logger().Debugln(b.cliLog.GetCmdTime(exportTimer))
return retErr
}
func (b *Builder) writeImageID(imageID string) error {
if b.buildOpts.Iidfile != "" {
if err := ioutil.WriteFile(b.buildOpts.Iidfile, []byte(imageID), constant.DefaultRootFileMode); err != nil {
b.Logger().Errorf("Write image ID [%s] to file [%s] failed: %v", imageID, b.buildOpts.Iidfile, err)
return errors.Wrapf(err, "write image ID to file %s failed", b.buildOpts.Iidfile)
}
b.cliLog.Print("Write image ID [%s] to file: %s\n", imageID, b.buildOpts.Iidfile)
} else {
b.cliLog.Print("Build success with image id: %s\n", imageID)
}
return nil
}
// StatusChan returns the channel that carries the build info of the builder
func (b *Builder) StatusChan() <-chan string {
return b.cliLog.GetContent()
}
// CleanResources removes the data dir and run dir of the builder, and returns the last removal error
func (b *Builder) CleanResources() error {
var err error
for _, dir := range []string{b.dataDir, b.runDir} {
if rerr := os.RemoveAll(dir); rerr != nil {
b.Logger().Errorf("Removing working dir %q failed: %v", dir, rerr)
err = rerr
}
}
return err
}
// OutputPipeWrapper returns the output pipe file path
func (b *Builder) OutputPipeWrapper() *exporter.PipeWrapper {
return b.pipeWrapper
}
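// A minimal caller-side sketch (an assumption about typical usage, not code from this commit):
//
//   go func() {
//       for msg := range b.StatusChan() { // stream build status messages to the client
//           fmt.Print(msg)
//       }
//   }()
//   imageID, err := b.Build() // run all stages, export the image and write the image ID
//   if err != nil {
//       fmt.Println("build failed:", err)
//   }
//   if cerr := b.CleanResources(); cerr != nil {
//       fmt.Println("clean resources failed:", cerr)
//   }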
func parseTag(output string) string {
outputFields := strings.Split(output, ":")
if (outputFields[0] == "docker-daemon" || outputFields[0] == "isulad") && len(outputFields) > 1 {
return strings.Join(outputFields[1:], ":")
}
const archiveOutputWithoutTagLen = 2
if outputFields[0] == "docker-archive" && len(outputFields) > archiveOutputWithoutTagLen {
if len(outputFields[archiveOutputWithoutTagLen:]) == 1 {
outputFields = append(outputFields, "latest")
}
return strings.Join(outputFields[archiveOutputWithoutTagLen:], ":")
}
if outputFields[0] == "docker" && len(outputFields) > 1 {
repoAndTag := strings.Join(outputFields[1:], ":")
// repo format regexp, "//registry.example.com/" for example
repo := regexp.MustCompile(`^\/\/[\w\.\-\:]+\/`).FindString(repoAndTag)
if repo == "" {
return ""
}
return repoAndTag[len(repo):]
}
return ""
}
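// Illustrative results of parseTag (a sketch derived from the logic above):
//   parseTag("docker-daemon:busybox:latest")               -> "busybox:latest"
//   parseTag("isulad:busybox:latest")                       -> "busybox:latest"
//   parseTag("docker-archive:/tmp/busybox.tar:busybox")     -> "busybox:latest" (tag defaults to latest)
//   parseTag("docker://registry.example.com/busybox:1.31")  -> "busybox:1.31"
//   parseTag("unknown-transport:whatever")                  -> ""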
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Author: iSula Team
// Create: 2020-03-20
// Description: cmdBuilder related functions
package dockerfile
import (
"context"
"fmt"
"os"
"path"
"sort"
"strconv"
"strings"
"time"
securejoin "github.com/cyphar/filepath-securejoin"
"github.com/pkg/errors"
constant "isula.org/isula-build"
dockerfile "isula.org/isula-build/builder/dockerfile/parser"
"isula.org/isula-build/pkg/docker"
"isula.org/isula-build/pkg/parser"
"isula.org/isula-build/util"
)
var (
cmdExecutors map[string]func(cb *cmdBuilder) error
)
func init() {
cmdExecutors = map[string]func(cb *cmdBuilder) error{
dockerfile.Add: executeAdd,
dockerfile.Arg: executeNoop,
dockerfile.Copy: executeCopy,
dockerfile.Cmd: executeCmd,
dockerfile.Entrypoint: executeEntrypoint,
dockerfile.Env: executeEnv,
dockerfile.Expose: executeExpose,
dockerfile.Healthcheck: executeHealthCheck,
dockerfile.Label: executeLabel,
dockerfile.Maintainer: executeMaintainer,
dockerfile.OnBuild: executeOnbuild,
dockerfile.Run: executeRun,
dockerfile.Shell: executeShell,
dockerfile.Volume: executeVolume,
dockerfile.StopSignal: executeStopSignal,
dockerfile.User: executeUser,
dockerfile.WorkDir: executeWorkDir,
}
}
type cmdBuilder struct {
// stageBuilder of this command
stage *stageBuilder
// line contents parsed by parser
line *parser.Line
// context of current builder
ctx context.Context
// envs is from stage builder envs
envs map[string]string
// args passed parameters from build-time
args map[string]string
// the args of the commands
cmdArgs []string
// flags for this command
cmdFlags map[string]string
}
// newCmdBuilder initializes a cmdBuilder
func newCmdBuilder(ctx context.Context, line *parser.Line, s *stageBuilder, stageArgs, stageEnvs map[string]string) *cmdBuilder {
return &cmdBuilder{
ctx: ctx,
stage: s,
args: util.CopyMapStringString(stageArgs),
envs: util.CopyMapStringString(stageEnvs),
line: line,
}
}
// allowWordExpand lists the commands that support word expansion.
// note: ARG and ENV are expanded earlier by the stage builder
var allowWordExpand = map[string]bool{
dockerfile.Add: true,
dockerfile.Copy: true,
dockerfile.Expose: true,
dockerfile.Label: true,
dockerfile.StopSignal: true,
dockerfile.User: true,
dockerfile.Volume: true,
dockerfile.WorkDir: true,
}
func (c *cmdBuilder) cmdExecutor() error {
var err error
if _, ok := cmdExecutors[c.line.Command]; !ok {
return errors.Errorf("command [%s %s] not supported for executing, please check!", c.line.Command, c.line.Raw)
}
cmdInfo := fmt.Sprintf("%s %s", c.line.Command, c.line.Raw)
logInfo := fmt.Sprintf("%s %d-%d", c.line.Command, c.line.Begin, c.line.End)
c.stage.builder.cliLog.StepPrint(cmdInfo)
logTimer := c.stage.builder.cliLog.StartTimer(logInfo)
if allowWordExpand[c.line.Command] {
if err = c.wordExpansion(); err != nil {
return err
}
}
c.stage.builder.Logger().Infof("Executing line %d command %s", c.line.Begin, c.line.Command)
err = cmdExecutors[c.line.Command](c)
c.stage.builder.cliLog.StopTimer(logTimer)
c.stage.builder.Logger().Debugln(c.stage.builder.cliLog.GetCmdTime(logTimer))
return err
}
func (c *cmdBuilder) wordExpansion() error {
resolveArg := func(s string) string {
c.stage.builder.Logger().Debugf("Resolve Param handling for %s", s)
if v, ok := c.envs[s]; ok {
return v
}
if v, ok := c.args[s]; ok {
return v
}
return ""
}
for i, cell := range c.line.Cells {
val, err := dockerfile.ResolveParam(cell.Value, false, resolveArg)
if err != nil {
c.stage.builder.Logger().
Errorf("Word expansion for line %d command %s failed: %v", c.line.Begin, c.line.Command, err)
return errors.Wrapf(err, "word expansion for %s at line %d failed", c.line.Command, c.line.Begin)
}
c.line.Cells[i].Value = val
}
return nil
}
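// Illustrative example (a sketch, assuming dockerfile.ResolveParam performs shell-style
// $VAR / ${VAR} expansion through the resolveArg callback above): with envs = {"APP_HOME": "/opt/app"},
// the line
//   WORKDIR ${APP_HOME}/bin
// has its cell value expanded to "/opt/app/bin" before the command executor runs.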
// FROM/ARG were already analyzed by the stage builder,
// so this is a no-op kept only for step printing
func executeNoop(cb *cmdBuilder) error {
return nil
}
func executeCopy(cb *cmdBuilder) error {
return cb.executeAddAndCopy(false)
}
func executeAdd(cb *cmdBuilder) error {
return cb.executeAddAndCopy(true)
}
func executeHealthCheck(cb *cmdBuilder) error {
// the default values are referenced from https://docs.docker.com/engine/reference/builder/#healthcheck
var (
allFlags = map[string]string{
dockerfile.HealthCheckStartPeriod: "0s",
dockerfile.HealthCheckInterval: "30s",
dockerfile.HealthCheckTimeout: "30s",
dockerfile.HealthCheckRetries: "3",
}
durationFlags = map[string]time.Duration{
dockerfile.HealthCheckStartPeriod: 0,
dockerfile.HealthCheckInterval: 30 * time.Second,
dockerfile.HealthCheckTimeout: 30 * time.Second,
}
)
// cb.cmdArgs has at least 1 arg, which has already been checked by the parser
checkType := cb.cmdArgs[0]
// process NONE type
if strings.ToUpper(checkType) == healthCheckTestDisable {
cb.stage.docker.Config.Healthcheck = &docker.HealthConfig{
Test: []string{healthCheckTestDisable},
}
return nil
}
const minCmdArgs = 2
if len(cb.cmdArgs) < minCmdArgs {
return errors.Errorf("args invalid: %v, HEALTHCHECK must have at least two args", cb.cmdArgs)
}
argv := cb.cmdArgs[1:]
healthcheck := docker.HealthConfig{}
if cb.line.IsJSONArgs() {
healthcheck.Test = append(healthcheck.Test, checkType)
healthcheck.Test = append(healthcheck.Test, argv...)
} else {
// if not json cmd, rewrite checkType to 'CMD-SHELL'
checkType = healthCheckTestTypeShell
healthcheck.Test = []string{checkType, strings.Join(argv, " ")}
}
for flag, defaultValue := range allFlags {
if _, ok := cb.cmdFlags[flag]; !ok {
cb.cmdFlags[flag] = defaultValue
}
}
for flag := range durationFlags {
value := cb.cmdFlags[flag]
d, err := time.ParseDuration(value)
if err != nil {
return err
}
durationFlags[flag] = d
}
healthcheck.StartPeriod = durationFlags["start-period"]
healthcheck.Interval = durationFlags["interval"]
healthcheck.Timeout = durationFlags["timeout"]
retries, err := strconv.Atoi(cb.cmdFlags["retries"])
if err != nil {
return err
}
healthcheck.Retries = retries
cb.stage.docker.Config.Healthcheck = &healthcheck
return nil
}
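// Illustrative mapping (a sketch derived from the logic above, not part of the original file):
//   HEALTHCHECK --interval=5m --timeout=3s CMD curl -f http://localhost/ || exit 1
// results in roughly:
//   docker.HealthConfig{
//       Test:     []string{"CMD-SHELL", "curl -f http://localhost/ || exit 1"},
//       Interval: 5 * time.Minute,
//       Timeout:  3 * time.Second,
//       Retries:  3, // default
//       // StartPeriod keeps its default of 0s
//   }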
func executeCmd(cb *cmdBuilder) error {
var cmdLine []string
if cb.line.IsJSONArgs() {
_, cmdLine = getFlagsAndArgs(cb.line, make(map[string]bool))
} else {
cmdLine = append(cb.stage.shellForm, cb.line.Cells[0].Value) // nolint:gocritic
}
cb.stage.docker.Config.Cmd = cmdLine
return nil
}
func executeShell(cb *cmdBuilder) error {
_, cmdLine := getFlagsAndArgs(cb.line, make(map[string]bool))
cb.stage.shellForm = cmdLine
cb.stage.docker.Config.Shell = cmdLine
return nil
}
func executeRun(cb *cmdBuilder) error {
var cmdLine []string
if cb.line.IsJSONArgs() {
_, cmdLine = getFlagsAndArgs(cb.line, make(map[string]bool))
} else {
cmdLine = append(cb.stage.shellForm, cb.line.Cells[0].Value) // nolint:gocritic
}
return cb.Run(cmdLine)
}
func executeEntrypoint(cb *cmdBuilder) error {
var entrypoint []string
if cb.line.IsJSONArgs() {
_, entrypoint = getFlagsAndArgs(cb.line, make(map[string]bool))
} else {
entrypoint = append(cb.stage.shellForm, cb.line.Cells[0].Value) // nolint:gocritic
}
cb.stage.docker.Config.Entrypoint = entrypoint
return nil
}
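// Illustrative example for CMD/RUN/ENTRYPOINT (a sketch, assuming the stage's shell form
// defaults to ["/bin/sh", "-c"]):
//   CMD ["nginx", "-g", "daemon off;"]  -> Config.Cmd = ["nginx", "-g", "daemon off;"]              (exec form)
//   CMD nginx -g 'daemon off;'          -> Config.Cmd = ["/bin/sh", "-c", "nginx -g 'daemon off;'"] (shell form)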
// ENV was already analyzed by the stage builder;
// here we just add it to the config
func executeEnv(cb *cmdBuilder) error {
var envs []string
for k, v := range cb.envs {
envs = append(envs, k+"="+v)
}
sort.Strings(envs)
cb.stage.docker.Config.Env = envs
return nil
}
// ONBUILD was already analyzed by the stage builder;
// here we just add it to the config
func executeOnbuild(cb *cmdBuilder) error {
cb.stage.docker.Config.OnBuild = append(cb.stage.docker.Config.OnBuild, cb.line.Raw)
return nil
}
func executeVolume(cb *cmdBuilder) error {
if cb.stage.docker.Config.Volumes == nil {
cb.stage.docker.Config.Volumes = make(map[string]struct{}, len(cb.line.Cells))
}
for _, cell := range cb.line.Cells {
if cell.Value != "" {
cb.stage.docker.Config.Volumes[cell.Value] = struct{}{}
}
}
if len(cb.stage.docker.Config.Volumes) == 0 {
return errors.New("no specified dirs in VOLUME")
}
return nil
}
func executeLabel(cb *cmdBuilder) error {
if cb.stage.docker.Config.Labels == nil {
cb.stage.docker.Config.Labels = make(map[string]string, len(cb.line.Cells))
}
for _, cell := range cb.line.Cells {
kv := strings.SplitN(cell.Value, "=", 2)
if len(kv) < 2 {
return errors.Errorf("%q is not a valid label", cell.Value)
}
cb.stage.docker.Config.Labels[kv[0]] = kv[1]
}
return nil
}
func executeWorkDir(cb *cmdBuilder) error {
var (
origDir = cb.line.Cells[0].Value
workDir = origDir
)
if !path.IsAbs(workDir) {
workDir = path.Join(string(os.PathSeparator), cb.stage.docker.Config.WorkingDir, workDir)
}
p, err := securejoin.SecureJoin(cb.stage.mountpoint, workDir)
if err != nil {
return errors.Wrapf(err, "failed to secure join workdir %q", origDir)
}
_, err = os.Stat(p)
if err != nil {
if !os.IsNotExist(err) {
return errors.Wrapf(err, "invalid container path %q", origDir)
}
// this workdir is created in rootfs, so the dir perm mode should be shared
if err = os.MkdirAll(p, constant.DefaultSharedDirMode); err != nil {
return errors.Wrapf(err, "failed to create container path %q", origDir)
}
}
cb.stage.docker.Config.WorkingDir = workDir
return nil
}
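// Illustrative example (a sketch derived from the logic above): with Config.WorkingDir currently
// set to "/app", the instruction
//   WORKDIR src
// resolves to "/app/src"; the directory is created under the stage mountpoint via SecureJoin
// if it does not exist yet, and Config.WorkingDir is updated to "/app/src".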
func executeMaintainer(cb *cmdBuilder) error {
maintainer := cb.line.Cells[0].Value
cb.stage.docker.Author = maintainer
return nil
}
func executeStopSignal(cb *cmdBuilder) error {
if _, err := util.ValidateSignal(cb.line.Cells[0].Value); err != nil {
return err
}
cb.stage.docker.Config.StopSignal = cb.line.Cells[0].Value
return nil
}
func executeUser(cb *cmdBuilder) error {
user := cb.line.Cells[0].Value
cb.stage.docker.Config.User = user
return nil
}
func executeExpose(cb *cmdBuilder) error {
if cb.stage.docker.Config.ExposedPorts == nil {
cb.stage.docker.Config.ExposedPorts = make(docker.PortSet, len(cb.line.Cells))
}
for _, cell := range cb.line.Cells {
p, err := util.PortSet(cell.Value)
if err != nil {
return err
}
cb.stage.docker.Config.ExposedPorts[docker.Port(p)] = struct{}{}
}
return nil
}
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Author: Feiyu Yang
// Create: 2020-03-20
// Description: commit related functions
package dockerfile
import (
"context"
"encoding/json"
"io"
"strings"
cp "github.com/containers/image/v5/copy"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/signature"
is "github.com/containers/image/v5/storage"
"github.com/containers/image/v5/transports"
"github.com/containers/storage/pkg/stringid"
"github.com/pkg/errors"
transc "isula.org/isula-build/builder/dockerfile/container"
"isula.org/isula-build/image"
"isula.org/isula-build/util"
)
func newImageCopyOptions(reportWriter io.Writer) *cp.Options {
return &cp.Options{
ReportWriter: reportWriter,
SourceCtx: image.GetSystemContext(),
DestinationCtx: image.GetSystemContext(),
}
}
func getPolicyContext() (*signature.PolicyContext, error) {
systemContext := image.GetSystemContext()
systemContext.DirForceCompress = true
commitPolicy, err := signature.DefaultPolicy(systemContext)
if err != nil {
return nil, errors.Wrap(err, "failed to get the default policy by new system context")
}
commitPolicy.Transports[is.Transport.Name()] = signature.PolicyTransportScopes{
"": []signature.PolicyRequirement{
signature.NewPRInsecureAcceptAnything(),
},
}
return signature.NewPolicyContext(commitPolicy)
}
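// Illustrative effect (a sketch, not part of the original source): the returned policy context
// behaves as if the default policy were extended with
//   {"transports": {"containers-storage": {"": [{"type": "insecureAcceptAnything"}]}}}
// so copying the committed container layers into local storage is accepted without signature verification.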
func (c *cmdBuilder) newContainerReference(exporting bool) (transc.Reference, error) {
var name reference.Named
container, err := c.stage.localStore.Container(c.stage.containerID)
if err != nil {
return transc.Reference{}, errors.Wrapf(err, "error locating container %q", c.stage.containerID)
}
if len(container.Names) > 0 {
if parsed, err2 := reference.ParseNamed(container.Names[0]); err2 == nil {
name = parsed
}
}
dconfig, err := json.Marshal(&c.stage.docker)
if err != nil {
return transc.Reference{}, errors.Wrapf(err, "error encoding docker-format image configuration %#v", c.stage.docker)
}
createdBy := strings.Join(util.CopyStrings(c.stage.docker.Config.Shell), " ")
if createdBy == "" {
createdBy = defaultShell
}
metadata := &transc.ReferenceMetadata{
Name: name,
CreatedBy: createdBy,
Dconfig: dconfig,
// the container ID stored in the image has no meaning here,
// so we fill it with the dockerfileDigest to distinguish whether an image
// was built from the same Dockerfile
ContainerID: c.stage.builder.dockerfileDigest,
BuildTime: c.stage.builder.buildTime,
LayerID: container.LayerID,
}
result := transc.NewContainerReference(c.stage.localStore, metadata, exporting)
return result, nil
}
func (c *cmdBuilder) isFromImageExist(storeT is.StoreTransport) bool {
fromImageID := c.stage.fromImageID
if fromImageID == "" {
return false
}
ref, err := storeT.ParseReference(fromImageID)
if ref == nil || err != nil {
return false
}
if img, err := storeT.GetImage(ref); img != nil && err == nil {
return true
}
return false
}
func (c *cmdBuilder) commit(ctx context.Context) (string, error) {
commitTimer := c.stage.builder.cliLog.StartTimer("COMMIT")
tmpName := stringid.GenerateRandomID() + "-commit-tmp"
dest, err := is.Transport.ParseStoreReference(c.stage.localStore, tmpName)
if err != nil {
return "", errors.Wrapf(err, "failed to create ref using %q", tmpName)
}
// if fromImage already exists in the store, set exporting to false to avoid storing it again
exporting := true
if storeTransport, ok := dest.Transport().(is.StoreTransport); ok {
exporting = !c.isFromImageExist(storeTransport)
}
policyContext, err := getPolicyContext()
if err != nil {
return "", err
}
c.stage.builder.Logger().Debugf("CmdBuilder commit %q gets CommitPolicyContext OK", tmpName)
defer func() {
if derr := policyContext.Destroy(); derr != nil {
c.stage.builder.Logger().Warningf("Destroy commit policy context failed: %v", derr)
}
}()
// Create a new container image ref for copying
srcContainerReference, err := c.newContainerReference(exporting)
if err != nil {
return "", errors.Wrapf(err, "failed to create container image ref for container %q", c.stage.containerID)
}
imageCopyOptions := newImageCopyOptions(c.stage.builder.cliLog)
if _, err = cp.Image(ctx, policyContext, dest, &srcContainerReference, imageCopyOptions); err != nil {
return "", errors.Wrapf(err, "error copying layers and metadata for container %q", c.stage.containerID)
}
img, err := is.Transport.GetStoreImage(c.stage.localStore, dest)
if err != nil {
return "", errors.Wrapf(err, "error locating image %q in local storage", transports.ImageName(dest))
}
// Remove tmp name
newNames := util.CopyStringsWithoutSpecificElem(img.Names, tmpName)
if err = c.stage.localStore.SetNames(img.ID, newNames); err != nil {
return img.ID, errors.Wrapf(err, "failed to prune temporary name from image %q", img.ID)
}
c.stage.builder.Logger().Debugf("Reassigned names %v to image %q", newNames, img.ID)
// Update the dest ref
_, err = is.Transport.ParseStoreReference(c.stage.localStore, "@"+img.ID)
if err != nil {
return img.ID, errors.Wrapf(err, "failed to create ref using %q", img.ID)
}
c.stage.builder.cliLog.StopTimer(commitTimer)
c.stage.builder.Logger().Debugln(c.stage.builder.cliLog.GetCmdTime(commitTimer))
c.stage.builder.cliLog.Print("Committed stage %s with ID: %s\n", c.stage.name, img.ID)
return img.ID, nil
}
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Author: iSula Team
// Create: 2020-03-20
// Description: dockerfile related constants
package dockerfile
const (
noBaseImage = "scratch"
defaultPathEnv = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
defaultShell = "/bin/sh"
// refer to pkg/docker/types.go, following are HealthConfig Test type options
// "NONE": disable healthcheck
// "CMD-SHELL": run command with system's default shell
healthCheckTestDisable = "NONE"
healthCheckTestTypeShell = "CMD-SHELL"
)
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Author: Zekun Liu
// Create: 2020-03-20
// Description: container image source related functions
package container
import (
"bytes"
"context"
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/containers/image/v5/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/archive"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
constant "isula.org/isula-build"
)
type containerImageSource struct {
ref *Reference
path string
containerID string
layerID string
manifestType string
config []byte
manifest []byte
store storage.Store
compression archive.Compression
configDigest digest.Digest
exporting bool
}
// Close removes the blob directory associated with the containerImageSource
func (i *containerImageSource) Close() error {
err := os.RemoveAll(i.path)
if err != nil {
return errors.Wrapf(err, "remove the layer's blob directory %q failed", i.path)
}
return nil
}
// Reference returns the reference used to set up this source
func (i *containerImageSource) Reference() types.ImageReference {
return i.ref
}
// GetSignatures is used to get the image's signatures, but containerImageSource does not
// support listing them
func (i *containerImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
if instanceDigest != nil {
return nil, errors.Errorf("containerImageSource does not support to list the signatures")
}
return nil, nil
}
// GetManifest returns the image's manifest along with its MIME type
func (i *containerImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil {
return nil, "", errors.Errorf("containerImageSource does not support list the manifest")
}
return i.manifest, i.manifestType, nil
}
// LayerInfosForCopy always returns nil here, meaning the values in the manifest are fine
func (i *containerImageSource) LayerInfosForCopy(ctx context.Context, instanceDigest *digest.Digest) ([]types.BlobInfo, error) {
return nil, nil
}
// HasThreadSafeGetBlob always returns false here, indicating that GetBlob cannot be executed concurrently
func (i *containerImageSource) HasThreadSafeGetBlob() bool {
return false
}
// GetBlob returns a stream for the specified blob, and the blob’s size
func (i *containerImageSource) GetBlob(ctx context.Context, blob types.BlobInfo, _ types.BlobInfoCache) (io.ReadCloser, int64, error) {
if blob.Digest == i.configDigest {
reader := bytes.NewReader(i.config)
return ioutil.NopCloser(reader), reader.Size(), nil
}
blobFile := filepath.Join(i.path, blob.Digest.String())
st, err := os.Stat(blobFile)
if err != nil {
if os.IsNotExist(err) {
return nil, -1, errors.Wrapf(err, "blob file %q does not exist", blobFile)
}
return nil, -1, errors.Wrapf(err, "stat blob file %q failed", blobFile)
}
layerFile, err := os.OpenFile(blobFile, os.O_RDONLY, constant.DefaultRootFileMode)
if err != nil {
return nil, -1, errors.Wrapf(err, "open the blob file %q failed", blobFile)
}
return layerFile, st.Size(), nil
}
// Copyright (c) Huawei Technologies Co., Ltd. 2020. All rights reserved.
// isula-build licensed under the Mulan PSL v2.
// You can use this software according to the terms and conditions of the Mulan PSL v2.
// You may obtain a copy of Mulan PSL v2 at:
// http://license.coscl.org.cn/MulanPSL2
// THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
// PURPOSE.
// See the Mulan PSL v2 for more details.
// Author: Zekun Liu
// Create: 2020-03-20
// Description: container image source related functions tests
package container
import (
"context"
"os"
"testing"
"github.com/containers/image/v5/docker/reference"
"github.com/containers/image/v5/manifest"
"github.com/containers/image/v5/types"
"github.com/opencontainers/go-digest"
"gotest.tools/assert"
"gotest.tools/fs"
)
func TestClose(t *testing.T) {
cis := containerImageSource{
path: fs.NewDir(t, "blob").Path(),
}
cis.Close()
_, err := os.Stat(cis.path)
assert.ErrorContains(t, err, "no such file or directory")
}
func TestReference(t *testing.T) {
var name reference.Named
metadata := &ReferenceMetadata{
Name: name,
CreatedBy: "isula",
Dconfig: []byte("isula-builder"),
ContainerID: "e6587b2dbfd56b5ce2e64dd7933ba04886bff86836dec5f09ce59d599df012fe",
LayerID: "dacfba0cd5c0d28f33d41fb9a9c8bf2b0c53689da136aeba6dfecf347125fa23",
}
imageRef := NewContainerReference(localStore, metadata, false)
cis := containerImageSource{
ref: &imageRef,
}
r := cis.Reference()
transport := r.StringWithinTransport()
assert.Equal(t, transport, "container")
}
func TestGetSignatures(t *testing.T) {
type testcase struct {
name string
digest *digest.Digest
manifest []byte
manifestType string
isErr bool
errStr string
}
d := digest.SHA256.FromString("isula")
var testcases = []testcase{
{
name: "with digest",
digest: &d,
isErr: true,
},
{
name: "with nil digest",
},
}
for _, tc := range testcases {
cis := containerImageSource{}
signature, err := cis.GetSignatures(context.TODO(), tc.digest)
assert.Equal(t, err != nil, tc.isErr, tc.name)
if err != nil {
assert.ErrorContains(t, err, "not support to list the signatures")
}
if err == nil {
assert.DeepEqual(t, signature, [][]uint8(nil))
}
}
}
func TestGetManifest(t *testing.T) {
type testcase struct {
name string
digest *digest.Digest
manifest []byte
manifestType string
isErr bool
errStr string
}
d := digest.SHA256.FromString("isula")
var testcases = []testcase{
{
name: "with digest",
digest: &d,
isErr: true,
}, {
name: "with nil digest",
},
}
for _, tc := range testcases {
cis := containerImageSource{
manifest: []byte("6d47a9873783f7bf23773f0cf60c67cef295d451f56b8b79fe3a1ea217a4bf98"),
manifestType: manifest.DockerV2Schema2MediaType,
}
manifest, manifestType, err := cis.GetManifest(context.TODO(), tc.digest)
assert.Equal(t, err != nil, tc.isErr, tc.name)
if err != nil {
assert.ErrorContains(t, err, "not support list the manifest")
}
if err == nil {
assert.Equal(t, string(manifest), string(cis.manifest))
assert.Equal(t, manifestType, cis.manifestType)
}
}
}
func TestLayerInfosForCopy(t *testing.T) {
cis := containerImageSource{
manifest: []byte("6d47a9873783f7bf23773f0cf60c67cef295d451f56b8b79fe3a1ea217a4bf98"),
manifestType: manifest.DockerV2Schema2MediaType,
}
info, err := cis.LayerInfosForCopy(context.TODO(), nil)
assert.NilError(t, err)
assert.DeepEqual(t, info, []types.BlobInfo(nil))
}
func TestHasThreadSafeGetBlob(t *testing.T) {
cis := containerImageSource{}
b := cis.HasThreadSafeGetBlob()
assert.Equal(t, b, false)
}
func TestGetBlob(t *testing.T) {
type testcase struct {
name string
digestStr string
hasBlobFile bool
isErr bool
errStr string
expectSize int64
}
var testcases = []testcase{
{
name: "digest equal",
digestStr: "digest equal",
expectSize: 12,
},
{
name: "digest is not equal and blob file not exist",
digestStr: "digest",
isErr: true,
errStr: "no such file or directory",
},
{
name: "has blob file",
digestStr: "digest",
hasBlobFile: true,
expectSize: 12,
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
d := digest.SHA256.FromString(tc.name)
cis := containerImageSource{
configDigest: d,
config: []byte(tc.name),
}
blob := types.BlobInfo{
Digest: digest.SHA256.FromString(tc.digestStr),
}
if tc.hasBlobFile {
dirCtx := fs.NewDir(t, t.Name(), fs.WithFile(blob.Digest.String(), "blob-content"))
cis.path = dirCtx.Path()
defer dirCtx.Remove()
}
_, size, err := cis.GetBlob(context.TODO(), blob, nil)
assert.Equal(t, err != nil, tc.isErr, tc.name)
if err != nil {
assert.ErrorContains(t, err, tc.errStr)
}
if err == nil {
assert.Equal(t, tc.expectSize, size, tc.name)
}
})
}
}
FROM image
# escape=`
LABEL maintainer foo@isula.com
# directive after comment
# escape=`
FROM image
LABEL maintainer foo@isula.com
# unknowndirective=value
# escape=`
FROM image
LABEL maintainer foo@isula.com
# Comment here. Should not be looking for the following parser directive.
# Hence the following line will be ignored, and the subsequent backslash
# continuation will be the default.
# escape = `
FROM image
LABEL maintainer foo@isula.com
# es \
cape=`
FROM image
LABEL maintainer foo@isula.com
# escape=`
# escape=`
FROM image
LABEL maintainer foo@isula.com
# escape=``
FROM image
LABEL maintainer foo@isula.com
# escape=\\
FROM image
LABEL maintainer foo@isula.com
# escape=isula
FROM image
LABEL maintainer foo@isula.com
# escape = ``
# There is no white space line after the directives. This still succeeds, but goes
# against best practices.
FROM image
LABEL maintainer foo@isula.com
# escape = `
FROM image
LABEL maintainer foo@isula.com
#escape = `
FROM image
LABEL maintainer foo@isula.com
# escape = \
# There is no white space line after the directives. This still succeeds, but goes
# against best practices.
FROM image
LABEL maintainer foo@isula.com
#escape=\\
FROM scratch
ADD busybox.tar.xz /
CMD ["sh"]