Unverified commit 1e26bb0c, authored by T Thomas Strömberg, committed by GitHub

Merge branch 'master' into download

# Minikube Release Notes
## Version 1.0.0 - 2019-03-27
* Update default Kubernetes version to v1.14.0 [#3967](https://github.com/kubernetes/minikube/pull/3967)
* NOTE: To avoid interaction issues, we also recommend updating kubectl to a recent release (v1.13+)
* Upgrade addon-manager to v9.0 for compatibility with Kubernetes v1.14 [#3984](https://github.com/kubernetes/minikube/pull/3984)
* Add --image-repository flag so that users can select an alternative repository mirror [#3714](https://github.com/kubernetes/minikube/pull/3714)
* Rename MINIKUBE_IN_COLOR to MINIKUBE_IN_STYLE [#3976](https://github.com/kubernetes/minikube/pull/3976)
* mount: Allow names to be passed in for gid/uid [#3989](https://github.com/kubernetes/minikube/pull/3989)
* mount: unmount on sigint/sigterm, add --options and --mode, improve UI [#3855](https://github.com/kubernetes/minikube/pull/3855)
* --extra-config now works for kubeadm as well [#3879](https://github.com/kubernetes/minikube/pull/3879)
* start: Set the default value of --cache to true [#3917](https://github.com/kubernetes/minikube/pull/3917)
* Remove the swap partition from minikube.iso [#3927](https://github.com/kubernetes/minikube/pull/3927)
* Add solution catalog to help users who run into known problems [#3931](https://github.com/kubernetes/minikube/pull/3931)
* Automatically propagate proxy environment variables to docker env [#3834](https://github.com/kubernetes/minikube/pull/3834)
* More reliable unmount w/ SIGINT, particularly on kvm2 [#3985](https://github.com/kubernetes/minikube/pull/3985)
* Remove arch suffixes in image names [#3942](https://github.com/kubernetes/minikube/pull/3942)
* Issue #3253, improve kubernetes-version error string [#3596](https://github.com/kubernetes/minikube/pull/3596)
* Update kubeadm bootstrap logic so it does not wait for addon-manager [#3958](https://github.com/kubernetes/minikube/pull/3958)
* Add explicit kvm2 flag for hidden KVM signature [#3947](https://github.com/kubernetes/minikube/pull/3947)
* Remove the rkt container runtime [#3944](https://github.com/kubernetes/minikube/pull/3944)
* Store the toolbox on the disk instead of rootfs [#3951](https://github.com/kubernetes/minikube/pull/3951)
* fix CHANGE_MINIKUBE_NONE_USER regression from recent changes [#3875](https://github.com/kubernetes/minikube/pull/3875)
* Do not wait for k8s-app pods when starting with CNI [#3896](https://github.com/kubernetes/minikube/pull/3896)
* Replace server name in updateKubeConfig if --apiserver-name exists #3878 [#3897](https://github.com/kubernetes/minikube/pull/3897)
* feature-gates via minikube config set [#3861](https://github.com/kubernetes/minikube/pull/3861)
* Upgrade crio to v1.13.1, skip install.tools target as it isn't necessary [#3919](https://github.com/kubernetes/minikube/pull/3919)
* Update Ingress-NGINX to 0.23 Release [#3877](https://github.com/kubernetes/minikube/pull/3877)
* Add addon-manager, dashboard, and storage-provisioner to minikube logs [#3982](https://github.com/kubernetes/minikube/pull/3982)
* logs: Add kube-proxy, dmesg, uptime, uname + newlines between log sources [#3872](https://github.com/kubernetes/minikube/pull/3872)
* Skip "pull" command if using Kubernetes 1.10, which does not support it. [#3832](https://github.com/kubernetes/minikube/pull/3832)
* Allow building minikube for any architecture [#3887](https://github.com/kubernetes/minikube/pull/3887)
* Windows installer using installation path for x64 applications [#3895](https://github.com/kubernetes/minikube/pull/3895)
* caching: Fix containerd, improve console messages, add integration tests [#3767](https://github.com/kubernetes/minikube/pull/3767)
* Fix `minikube addons open heapster` [#3826](https://github.com/kubernetes/minikube/pull/3826)
We couldn't have gotten here without the folks who contributed to this release:
- Anders F Björklund
- Andy Daniels
- Calin Don
- Cristian Măgherușan-Stanciu @magheru_san
- Dmitry Budaev
- Guang Ya Liu
- Igor Akkerman
- Joel Smith
- Marco Vito Moscaritolo
- Marcos Diez
- Martynas Pumputis
- RA489
- Sharif Elgamal
- Steven Davidovitz
- Thomas Strömberg
- Zhongcheng Lao
- flyingcircle
- jay vyas
- morvencao
- u5surf
We all stand on the shoulders of the giants who came before us. A special shout-out to all [813 people who have contributed to minikube](https://github.com/kubernetes/minikube/graphs/contributors), and especially our former maintainers who made minikube into what it is today:
- Matt Rickard
- Dan Lorenc
- Aaron Prindle
## Version 0.35.0 - 2019-03-06
* Update default Kubernetes version to v1.13.4 (latest stable) [#3807](https://github.com/kubernetes/minikube/pull/3807)
......
......@@ -505,6 +505,14 @@
pruneopts = "NUT"
revision = "4d0e916071f68db74f8a73926335f809396d6b42"
[[projects]]
digest = "1:0028cb19b2e4c3112225cd871870f2d9cf49b9b4276531f03438a88e94be86fe"
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
pruneopts = "NUT"
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
branch = "master"
digest = "1:1b6f62a965e4b2e004184bf2d38ef2915af240befa4d44e5f0e83925bcf89727"
......@@ -1008,6 +1016,7 @@
"github.com/pkg/browser",
"github.com/pkg/errors",
"github.com/pkg/profile",
"github.com/pmezard/go-difflib/difflib",
"github.com/r2d4/external-storage/lib/controller",
"github.com/sirupsen/logrus",
"github.com/spf13/cobra",
......
......@@ -13,8 +13,8 @@
# limitations under the License.
# Bump these on release - and please check ISO_VERSION for correctness.
VERSION_MAJOR ?= 0
VERSION_MINOR ?= 35
VERSION_MAJOR ?= 1
VERSION_MINOR ?= 0
VERSION_BUILD ?= 0
# Default to .0 for higher cache hit rates, as build increments typically don't require new ISO versions
ISO_VERSION ?= v$(VERSION_MAJOR).$(VERSION_MINOR).0
......
......@@ -15,17 +15,15 @@
minikube implements a local Kubernetes cluster on macOS, Linux, and Windows.
<img src="https://github.com/kubernetes/minikube/raw/master/images/start.jpg" width="800">
![screenshot](/images/start.jpg)
Our [goal](https://github.com/kubernetes/minikube/blob/master/docs/contributors/principles.md) is to enable fast local development and to support all Kubernetes features that fit. We hope you enjoy it!
Our [project goals](https://github.com/kubernetes/minikube/blob/master/docs/contributors/principles.md) are to enable fast local development and to support all Kubernetes features that fit. We hope you enjoy it!
## News
* 2019-03-27 - v1.0.0 released! [[download](https://github.com/kubernetes/minikube/releases/tag/v1.0.0)] [[release notes](https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md#version-1000---2019-03-27)]
* 2019-03-06 - v0.35.0 released! [[download](https://github.com/kubernetes/minikube/releases/tag/v0.35.0)] [[release notes](https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md#version-0350---2019-03-06)]
* 2019-02-16 - v0.34.1 released! [[download](https://github.com/kubernetes/minikube/releases/tag/v0.34.1)] [[release notes](https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md#version-0341---2019-02-16)]
* 2019-02-15 - v0.34.0 released! [[download](https://github.com/kubernetes/minikube/releases/tag/v0.34.0)] [[release notes](https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md#version-0340---2019-02-15)]
* 2019-01-18 - v0.33.1 released to address [CVE-2019-5736](https://www.openwall.com/lists/oss-security/2019/02/11/2) [[download](https://github.com/kubernetes/minikube/releases/tag/v0.33.1)] [[release notes](https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md#version-0331---2019-01-18)]
* 2019-01-17 - v0.33.0 released! [[download](https://github.com/kubernetes/minikube/releases/tag/v0.33.0)] [[release notes](https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md#version-0330---2019-01-17)]
## Features
......@@ -56,6 +54,8 @@ As well as developer-friendly features:
## Community
![Help Wanted!](/images/help_wanted.jpg)
minikube is a Kubernetes [#sig-cluster-lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle) project.
* [**#minikube on Kubernetes Slack**](https://kubernetes.slack.com) - Live chat with minikube developers!
......
......@@ -105,8 +105,10 @@ func GenerateBashCompletion(w io.Writer, cmd *cobra.Command) error {
// GenerateZshCompletion generates the completion for the zsh shell
func GenerateZshCompletion(out io.Writer, cmd *cobra.Command) error {
zshInitialization := `#compdef minikube
zshAutoloadTag := `#compdef minikube
`
zshInitialization := `
__minikube_bash_source() {
alias shopt=':'
alias _expand=_bash_expand
......@@ -239,7 +241,12 @@ __minikube_convert_bash_to_zsh() {
<<'BASH_COMPLETION_EOF'
`
_, err := out.Write([]byte(boilerPlate))
_, err := out.Write([]byte(zshAutoloadTag))
if err != nil {
return err
}
_, err = out.Write([]byte(boilerPlate))
if err != nil {
return err
}
......
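The change above splits the `#compdef minikube` autoload tag out of the initialization string and writes it ahead of the boilerplate, so zsh sees the tag on the first line of the generated script. A minimal usage sketch, assuming the `minikube completion` subcommand that these generators back:

```shell
# Load completion into the current bash session (GenerateBashCompletion).
source <(minikube completion bash)

# zsh: the generated script now begins with the '#compdef minikube' tag written above.
source <(minikube completion zsh)
```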
......@@ -45,8 +45,8 @@ var mountIP string
var mountVersion string
var mountType string
var isKill bool
var uid int
var gid int
var uid string
var gid string
var mSize int
var options []string
var mode uint
......@@ -98,6 +98,7 @@ var mountCmd = &cobra.Command{
}
defer api.Close()
host, err := api.Load(config.GetMachineName())
if err != nil {
exit.WithError("Error loading api", err)
}
......@@ -144,8 +145,8 @@ var mountCmd = &cobra.Command{
console.OutStyle("mounting", "Mounting host path %s into VM as %s ...", hostPath, vmPath)
console.OutStyle("mount-options", "Mount options:")
console.OutStyle("option", "Type: %s", cfg.Type)
console.OutStyle("option", "UID: %d", cfg.UID)
console.OutStyle("option", "GID: %d", cfg.GID)
console.OutStyle("option", "UID: %s", cfg.UID)
console.OutStyle("option", "GID: %s", cfg.GID)
console.OutStyle("option", "Version: %s", cfg.Version)
console.OutStyle("option", "MSize: %d", cfg.MSize)
console.OutStyle("option", "Mode: %o (%s)", cfg.Mode, cfg.Mode)
......@@ -163,22 +164,32 @@ var mountCmd = &cobra.Command{
go func() {
console.OutStyle("fileserver", "Userspace file server: ")
ufs.StartServer(net.JoinHostPort(ip.String(), strconv.Itoa(port)), debugVal, hostPath)
console.OutStyle("stopped", "Userspace file server is shutdown")
wg.Done()
}()
}
// Use CommandRunner, as the native docker ssh service dies when Ctrl-C is received.
runner, err := machine.CommandRunner(host)
if err != nil {
exit.WithError("Failed to get command runner", err)
}
// Unmount if Ctrl-C or kill request is received.
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
go func() {
for sig := range c {
console.OutStyle("unmount", "Unmounting %s ...", vmPath)
cluster.Unmount(host, vmPath)
err := cluster.Unmount(runner, vmPath)
if err != nil {
console.ErrStyle("failure", "Failed unmount: %v", err)
}
exit.WithCode(exit.Interrupted, "Exiting due to %s signal", sig)
}
}()
err = cluster.Mount(host, ip.String(), vmPath, cfg)
err = cluster.Mount(runner, ip.String(), vmPath, cfg)
if err != nil {
exit.WithError("mount failed", err)
}
......@@ -194,8 +205,8 @@ func init() {
mountCmd.Flags().StringVar(&mountType, "type", nineP, "Specify the mount filesystem type (supported types: 9p)")
mountCmd.Flags().StringVar(&mountVersion, "9p-version", constants.DefaultMountVersion, "Specify the 9p version that the mount should use")
mountCmd.Flags().BoolVar(&isKill, "kill", false, "Kill the mount process spawned by minikube start")
mountCmd.Flags().IntVar(&uid, "uid", 1001, "Default user id used for the mount")
mountCmd.Flags().IntVar(&gid, "gid", 1001, "Default group id used for the mount")
mountCmd.Flags().StringVar(&uid, "uid", "docker", "Default user id used for the mount")
mountCmd.Flags().StringVar(&gid, "gid", "docker", "Default group id used for the mount")
mountCmd.Flags().UintVar(&mode, "mode", 0755, "File permissions used for the mount")
mountCmd.Flags().StringSliceVar(&options, "options", []string{}, "Additional mount options, such as cache=fscache")
mountCmd.Flags().IntVar(&mSize, "msize", constants.DefaultMsize, "The number of bytes to use for 9p packet payload")
......
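With `--uid` and `--gid` now taken as strings, user and group names can be passed straight through to the 9p mount. A hedged usage sketch based on the flags registered above; the host and guest paths are placeholders:

```shell
# Mount ./src from the host into the VM at /host-src, owned by the 'docker'
# user and group (the new string defaults), passing a 9p cache option through.
minikube mount ./src:/host-src --uid=docker --gid=docker --mode=0755 --options=cache=fscache

# Ctrl-C (or SIGTERM) now triggers the unmount handler shown above before exiting.
```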
......@@ -136,8 +136,8 @@ func init() {
startCmd.Flags().String(networkPlugin, "", "The name of the network plugin")
startCmd.Flags().Bool(enableDefaultCNI, false, "Enable the default CNI plugin (/etc/cni/net.d/k8s.conf). Used in conjunction with \"--network-plugin=cni\"")
startCmd.Flags().String(featureGates, "", "A set of key=value pairs that describe feature gates for alpha/experimental features.")
startCmd.Flags().Bool(cacheImages, true, "If true, cache docker images for the current bootstrapper and load them into the machine.")
startCmd.Flags().Bool(downloadOnly, false, "If true, only download and cache files for later use - don't install or start anything.")
startCmd.Flags().Bool(cacheImages, true, "If true, cache docker images for the current bootstrapper and load them into the machine. Always false with --vm-driver=none.")
startCmd.Flags().Var(&extraOptions, "extra-config",
`A set of key=value pairs that describe configuration that may be passed to different components.
The key should be '.' separated, and the first part before the dot is the component to apply the configuration to.
......@@ -176,6 +176,11 @@ func runStart(cmd *cobra.Command, args []string) {
exit.WithError("Failed to generate config", err)
}
if viper.GetString(vmDriver) == constants.DriverNone {
// Optimization: images will be persistently loaded into the host's container runtime, so no need to duplicate work.
viper.Set(cacheImages, false)
}
var cacheGroup errgroup.Group
beginCacheImages(&cacheGroup, k8sVersion)
......@@ -219,13 +224,23 @@ func runStart(cmd *cobra.Command, args []string) {
}
cr := configureRuntimes(host, runner)
// prepareHostEnvironment uses the downloaded images, so we need to wait for background task completion.
if viper.GetBool(cacheImages) {
console.OutStyle("waiting", "Waiting for image downloads to complete ...")
if err := cacheGroup.Wait(); err != nil {
glog.Errorln("Error caching images: ", err)
}
}
bs := prepareHostEnvironment(m, config.KubernetesConfig)
waitCacheImages(&cacheGroup)
// The kube config update must come before bootstrapping, otherwise health checks may use a stale IP
kubeconfig := updateKubeConfig(host, &config)
bootstrapCluster(bs, cr, runner, config.KubernetesConfig, preexisting)
validateCluster(bs, cr, runner, ip)
apiserverPort := config.KubernetesConfig.NodePort
validateCluster(bs, cr, runner, ip, apiserverPort)
configureMounts()
if err = LoadCachedImagesInConfigFile(); err != nil {
console.Failure("Unable to load cached images from config file.")
......@@ -534,17 +549,6 @@ func configureRuntimes(h *host.Host, runner bootstrapper.CommandRunner) cruntime
return cr
}
// waitCacheImages blocks until the image cache jobs complete
func waitCacheImages(g *errgroup.Group) {
if !viper.GetBool(cacheImages) {
return
}
console.OutStyle("waiting", "Waiting for image downloads to complete ...")
if err := g.Wait(); err != nil {
glog.Errorln("Error caching images: ", err)
}
}
// bootstrapCluster starts Kubernetes using the chosen bootstrapper
func bootstrapCluster(bs bootstrapper.Bootstrapper, r cruntime.Manager, runner bootstrapper.CommandRunner, kc cfg.KubernetesConfig, preexisting bool) {
console.OutStyle("pulling", "Pulling images required by Kubernetes %s ...", kc.KubernetesVersion)
......@@ -569,7 +573,7 @@ func bootstrapCluster(bs bootstrapper.Bootstrapper, r cruntime.Manager, runner b
}
// validateCluster validates that the cluster is well-configured and healthy
func validateCluster(bs bootstrapper.Bootstrapper, r cruntime.Manager, runner bootstrapper.CommandRunner, ip string) {
func validateCluster(bs bootstrapper.Bootstrapper, r cruntime.Manager, runner bootstrapper.CommandRunner, ip string, apiserverPort int) {
console.OutStyle("verifying-noline", "Verifying component health ...")
k8sStat := func() (err error) {
st, err := bs.GetKubeletStatus()
......@@ -584,7 +588,7 @@ func validateCluster(bs bootstrapper.Bootstrapper, r cruntime.Manager, runner bo
exit.WithLogEntries("kubelet checks failed", err, logs.FindProblems(r, bs, runner))
}
aStat := func() (err error) {
st, err := bs.GetAPIServerStatus(net.ParseIP(ip))
st, err := bs.GetAPIServerStatus(net.ParseIP(ip), apiserverPort)
console.Out(".")
if err != nil || st != state.Running.String() {
return &pkgutil.RetriableError{Err: fmt.Errorf("apiserver status=%s err=%v", st, err)}
......
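A short sketch of how the start flags above combine, assuming the `--download-only` flag added on this branch and using this release's Kubernetes version for illustration:

```shell
# Only download and cache the ISO, images, and Kubernetes binaries;
# don't create a machine or start anything.
minikube start --download-only --kubernetes-version v1.14.0

# A later start reuses the cache; with --vm-driver=none, image caching is
# switched off automatically, as in the runStart() change above.
minikube start --kubernetes-version v1.14.0
```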
......@@ -92,7 +92,13 @@ var statusCmd = &cobra.Command{
glog.Errorln("Error host driver ip status:", err)
}
apiserverSt, err = clusterBootstrapper.GetAPIServerStatus(ip)
apiserverPort, err := pkgutil.GetPortFromKubeConfig(util.GetKubeConfigPath(), config.GetMachineName())
if err != nil {
// Fallback to presuming default apiserver port
apiserverPort = pkgutil.APIServerPort
}
apiserverSt, err = clusterBootstrapper.GetAPIServerStatus(ip, apiserverPort)
if err != nil {
glog.Errorln("Error apiserver status:", err)
} else if apiserverSt != state.Running.String() {
......
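The status command now derives the apiserver port from the kubeconfig rather than assuming the default. A rough shell equivalent of what `GetPortFromKubeConfig` reads, assuming the cluster entry is named `minikube`:

```shell
# Print the server URL (including port) recorded for the minikube cluster.
kubectl config view -o jsonpath='{.clusters[?(@.name=="minikube")].cluster.server}'
# e.g. https://192.168.99.100:8443
```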
......@@ -20,7 +20,6 @@ CONFIG_RT_GROUP_SCHED=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
......@@ -114,6 +113,7 @@ CONFIG_NETFILTER=y
CONFIG_NETFILTER_NETLINK_ACCT=y
CONFIG_NETFILTER_NETLINK_QUEUE=y
CONFIG_NF_CONNTRACK=m
CONFIG_NF_CONNTRACK_ZONES=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
......@@ -123,8 +123,6 @@ CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_SOCKET_IPV6=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
......@@ -229,6 +227,7 @@ CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m
CONFIG_IP_VS_NFCT=y
CONFIG_NF_CONNTRACK_IPV4=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_FILTER=y
......@@ -240,6 +239,7 @@ CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_RAW=m
CONFIG_NF_CONNTRACK_IPV6=m
CONFIG_NF_SOCKET_IPV6=m
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_IPV6HEADER=y
CONFIG_IP6_NF_FILTER=y
......@@ -277,6 +277,7 @@ CONFIG_NET_ACT_BPF=m
CONFIG_OPENVSWITCH=m
CONFIG_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_CGROUP_NET_PRIO=y
CONFIG_BPF_JIT=y
CONFIG_HAMRADIO=y
CONFIG_CFG80211=y
......@@ -428,7 +429,6 @@ CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_DMADEVICES=y
CONFIG_VIRT_DRIVERS=y
CONFIG_VBOXGUEST=m
CONFIG_VIRTIO_PCI=y
CONFIG_HYPERV=m
CONFIG_HYPERV_UTILS=m
......
......@@ -7,7 +7,7 @@ menu "System tools"
source "$BR2_EXTERNAL_MINIKUBE_PATH/package/docker-bin/Config.in"
source "$BR2_EXTERNAL_MINIKUBE_PATH/package/cni-bin/Config.in"
source "$BR2_EXTERNAL_MINIKUBE_PATH/package/cni-plugins-bin/Config.in"
source "$BR2_EXTERNAL_MINIKUBE_PATH/package/hv-kvp-daemon/Config.in"
source "$BR2_EXTERNAL_MINIKUBE_PATH/package/hyperv-daemons/Config.in"
source "$BR2_EXTERNAL_MINIKUBE_PATH/package/gluster/Config.in"
source "$BR2_EXTERNAL_MINIKUBE_PATH/package/vbox-guest/Config.in"
source "$BR2_EXTERNAL_MINIKUBE_PATH/package/containerd-bin/Config.in"
......
......@@ -7,3 +7,4 @@ sha256 692e1c72937f6214b1038def84463018d8e320c8eaf8530546c84c2f8f9c767d docker-
sha256 1270dce1bd7e1838d62ae21d2505d87f16efc1d9074645571daaefdfd0c14054 docker-17.12.1-ce.tgz
sha256 83be159cf0657df9e1a1a4a127d181725a982714a983b2bdcc0621244df93687 docker-18.06.1-ce.tgz
sha256 a979d9a952fae474886c7588da692ee00684cb2421d2c633c7ed415948cf0b10 docker-18.06.2-ce.tgz
sha256 346f9394393ee8db5f8bd1e229ee9d90e5b36931bdd754308b2ae68884dd6822 docker-18.06.3-ce.tgz
......@@ -4,7 +4,7 @@
#
################################################################################
DOCKER_BIN_VERSION = 18.06.2-ce
DOCKER_BIN_VERSION = 18.06.3-ce
DOCKER_BIN_SITE = https://download.docker.com/linux/static/stable/x86_64
DOCKER_BIN_SOURCE = docker-$(DOCKER_BIN_VERSION).tgz
......
################################################################################
#
# hv-kvp-daemon
#
################################################################################
HV_KVP_DAEMON_VERSION = 4.4.27
HV_KVP_DAEMON_SITE = https://www.kernel.org/pub/linux/kernel/v${HV_KVP_DAEMON_VERSION%%.*}.x
HV_KVP_DAEMON_SOURCE = linux-$(HV_KVP_DAEMON_VERSION).tar.xz
define HV_KVP_DAEMON_BUILD_CMDS
$(MAKE) $(TARGET_CONFIGURE_OPTS) -C $(@D)/tools/hv/
endef
define HV_KVP_DAEMON_INSTALL_TARGET_CMDS
$(INSTALL) -D -m 0755 \
$(@D)/tools/hv/hv_kvp_daemon \
$(TARGET_DIR)/usr/sbin/hv_kvp_daemon
endef
define HV_KVP_DAEMON_INSTALL_INIT_SYSTEMD
$(INSTALL) -D -m 644 \
$(BR2_EXTERNAL_MINIKUBE_PATH)/package/hv-kvp-daemon/hv_kvp_daemon.service \
$(TARGET_DIR)/usr/lib/systemd/system/hv_kvp_daemon.service
ln -fs /usr/lib/systemd/system/hv_kvp_daemon.service \
$(TARGET_DIR)/etc/systemd/system/multi-user.target.wants/hv_kvp_daemon.service
endef
$(eval $(generic-package))
SUBSYSTEM=="misc", KERNEL=="vmbus/hv_fcopy", TAG+="systemd", ENV{SYSTEMD_WANTS}+="hv_fcopy_daemon.service"
SUBSYSTEM=="misc", KERNEL=="vmbus/hv_kvp", TAG+="systemd", ENV{SYSTEMD_WANTS}+="hv_kvp_daemon.service"
SUBSYSTEM=="misc", KERNEL=="vmbus/hv_vss", TAG+="systemd", ENV{SYSTEMD_WANTS}+="hv_vss_daemon.service"
config BR2_PACKAGE_HV_KVP_DAEMON
bool "hv-kvp-daemon"
config BR2_PACKAGE_HYPERV_DAEMONS
bool "hyperv-daemons"
default y
depends on BR2_x86_64
[Unit]
Description=Hyper-V FCOPY Daemon
Documentation=https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/integration-services#hyper-v-guest-service-interface
BindsTo=sys-devices-virtual-misc-vmbus\x21hv_fcopy.device
[Service]
ExecStart=/usr/sbin/hv_fcopy_daemon -n
[Install]
WantedBy=multi-user.target
[Unit]
Description=Hyper-V Key Value Pair Daemon
Documentation=https://technet.microsoft.com/en-us/library/dn798287(v=ws.11).aspx
ConditionVirtualization=microsoft
Documentation=https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/integration-services#hyper-v-data-exchange-service-kvp
BindsTo=sys-devices-virtual-misc-vmbus\x21hv_kvp.device
[Service]
Type=simple
Restart=always
RestartSec=3
ExecStart=/usr/sbin/hv_kvp_daemon -n
[Install]
......
[Unit]
Description=Hyper-V VSS Daemon
Documentation=https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/integration-services#hyper-v-volume-shadow-copy-requestor
BindsTo=sys-devices-virtual-misc-vmbus\x21hv_vss.device
[Service]
ExecStart=/usr/sbin/hv_vss_daemon -n
[Install]
WantedBy=multi-user.target
################################################################################
#
# hyperv-daemons
#
################################################################################
HYPERV_DAEMONS_VERSION = 4.15.1
HYPERV_DAEMONS_SITE = https://www.kernel.org/pub/linux/kernel/v${HYPERV_DAEMONS_VERSION%%.*}.x
HYPERV_DAEMONS_SOURCE = linux-$(HYPERV_DAEMONS_VERSION).tar.xz
define HYPERV_DAEMONS_BUILD_CMDS
$(MAKE) $(TARGET_CONFIGURE_OPTS) -C $(@D)/tools/hv/
endef
define HYPERV_DAEMONS_INSTALL_TARGET_CMDS
$(INSTALL) -D -m 0755 \
$(@D)/tools/hv/hv_fcopy_daemon \
$(TARGET_DIR)/usr/sbin/hv_fcopy_daemon
$(INSTALL) -D -m 0755 \
$(@D)/tools/hv/hv_kvp_daemon \
$(TARGET_DIR)/usr/sbin/hv_kvp_daemon
$(INSTALL) -D -m 0755 \
$(@D)/tools/hv/hv_get_dhcp_info.sh \
$(TARGET_DIR)/usr/libexec/hypervkvpd/hv_get_dhcp_info
$(INSTALL) -D -m 0755 \
$(@D)/tools/hv/hv_get_dns_info.sh \
$(TARGET_DIR)/usr/libexec/hypervkvpd/hv_get_dns_info
$(INSTALL) -D -m 0755 \
$(@D)/tools/hv/hv_set_ifconfig.sh \
$(TARGET_DIR)/usr/libexec/hypervkvpd/hv_set_ifconfig
$(INSTALL) -D -m 0755 \
$(@D)/tools/hv/hv_vss_daemon \
$(TARGET_DIR)/usr/sbin/hv_vss_daemon
endef
define HYPERV_DAEMONS_INSTALL_INIT_SYSTEMD
$(INSTALL) -D -m 644 \
$(BR2_EXTERNAL_MINIKUBE_PATH)/package/hyperv-daemons/70-hv_fcopy.rules \
$(TARGET_DIR)/etc/udev/rules.d/70-hv_fcopy.rules
$(INSTALL) -D -m 644 \
$(BR2_EXTERNAL_MINIKUBE_PATH)/package/hyperv-daemons/70-hv_kvp.rules \
$(TARGET_DIR)/etc/udev/rules.d/70-hv_kvp.rules
$(INSTALL) -D -m 644 \
$(BR2_EXTERNAL_MINIKUBE_PATH)/package/hyperv-daemons/70-hv_vss.rules \
$(TARGET_DIR)/etc/udev/rules.d/70-hv_vss.rules
$(INSTALL) -D -m 644 \
$(BR2_EXTERNAL_MINIKUBE_PATH)/package/hyperv-daemons/hv_fcopy_daemon.service \
$(TARGET_DIR)/usr/lib/systemd/system/hv_fcopy_daemon.service
$(INSTALL) -D -m 644 \
$(BR2_EXTERNAL_MINIKUBE_PATH)/package/hyperv-daemons/hv_kvp_daemon.service \
$(TARGET_DIR)/usr/lib/systemd/system/hv_kvp_daemon.service
$(INSTALL) -D -m 644 \
$(BR2_EXTERNAL_MINIKUBE_PATH)/package/hyperv-daemons/hv_vss_daemon.service \
$(TARGET_DIR)/usr/lib/systemd/system/hv_vss_daemon.service
ln -fs /usr/lib/systemd/system/hv_fcopy_daemon.service \
$(TARGET_DIR)/etc/systemd/system/multi-user.target.wants/hv_fcopy_daemon.service
ln -fs /usr/lib/systemd/system/hv_kvp_daemon.service \
$(TARGET_DIR)/etc/systemd/system/multi-user.target.wants/hv_kvp_daemon.service
ln -fs /usr/lib/systemd/system/hv_vss_daemon.service \
$(TARGET_DIR)/etc/systemd/system/multi-user.target.wants/hv_vss_daemon.service
endef
$(eval $(generic-package))
[
{
"name": "v1.0.0",
"checksums": {
"darwin": "865bd3a13c1ad3b7732b2bea35b26fef150f2b3cbfc257c5d1835527d1b331e9",
"linux": "a315869f81aae782ecc6ff2a6de4d0ab3a17ca1840d1d8e6eea050a8dd05907f",
"windows": "a9e629911498ce774681504abe1797c1957e29d100d40c80c26ac54e22716a85"
}
},
{
"name": "v0.35.0",
"checksums": {
......
......@@ -41,3 +41,5 @@
* **Accessing etcd from inside the cluster** ([accessing_etcd.md](accessing_etcd.md))
* **Networking** ([networking.md](networking.md)): FAQ about networking between the host and minikube VM
* **Offline** ([offline.md](offline.md)): Details about using minikube offline
......@@ -78,6 +78,10 @@ This step uses the git tag to publish new binaries to GCS and create a github re
* For `ISO_SHA256`, run: `gsutil cat gs://minikube/iso/minikube-v<version>.iso.sha256`
* Click *Build*
## Check the release logs
Once the release completes, click "Console Output" to look for anything unusual. This is typically where you will see the brew automation fail, for instance.
## Check releases.json
This file is used for auto-update notifications, but is not active until releases.json is copied to GCS.
......@@ -93,6 +97,8 @@ These are downstream packages that are being maintained by others and how to upg
| Arch Linux AUR | <https://aur.archlinux.org/packages/minikube/> | "Flag as package out-of-date"
| Brew Cask | <https://github.com/Homebrew/homebrew-cask/blob/master/Casks/minikube.rb> | The release job creates a new PR in [Homebrew/homebrew-cask](https://github.com/Homebrew/homebrew-cask) with an updated version and SHA256, double check that it's created.
WARNING: The Brew cask automation is error-prone. Please ensure that a PR was created.
## Verification
Verify release checksums by running `make check-release`
......
......@@ -6,45 +6,45 @@ Please send a PR to suggest any improvements to it.
## (#1) User-friendly and accessible
- Creation of a user-centric minikube website for installation & documentation
- Localized output to 5+ written languages
- Make minikube usable in environments with challenging connectivity requirements
- Support lightweight deployment methods for environments where VM's are impractical
- Add offline support
- [ ] Creation of a user-centric minikube website for installation & documentation
- [ ] Localized output to 5+ written languages
- [ ] Make minikube usable in environments with challenging connectivity requirements
- [ ] Support lightweight deployment methods for environments where VM's are impractical
- [x] Add offline support
## (#2) Inclusive and community-driven
- Increase community involvement in planning and decision making
- Make the continuous integration and release infrastructure publicly available
- Double the number of active maintainers
- [x] Increase community involvement in planning and decision making
- [ ] Make the continuous integration and release infrastructure publicly available
- [x] Double the number of active maintainers
## (#3) Cross-platform
- Simplified installation process across all supported platforms
- Users should never need to separately install supporting binaries
- [ ] Simplified installation process across all supported platforms
- [ ] Users should never need to separately install supporting binaries
## (#4) Support all Kubernetes features
- Add multi-node support
- [ ] Add multi-node support
## (#5) High-fidelity
- Reduce guest VM overhead by 50%
- Disable swap in the guest VM
- [ ] Reduce guest VM overhead by 50%
- [x] Disable swap in the guest VM
## (#6) Compatible with all supported Kubernetes releases
- Continuous Integration testing across all supported Kubernetes releases
- Automatic PR generation for updating the default Kubernetes release minikube uses
- [x] Continuous Integration testing across all supported Kubernetes releases
- [ ] Automatic PR generation for updating the default Kubernetes release minikube uses
## (#7) Support for all Kubernetes-friendly container runtimes
- Run all integration tests across all supported container runtimes
- Support for Kata Containers (help wanted!)
- [x] Run all integration tests across all supported container runtimes
- [ ] Support for Kata Containers (help wanted!)
## (#8) Stable and easy to debug
- Pre-flight error checks for common connectivity and configuration errors
- Improve the `minikube status` command so that it can diagnose common issues
- Mark all features not covered by continuous integration as `experimental`
- Stabilize and improve profiles support (AKA multi-cluster)
- [ ] Pre-flight error checks for common connectivity and configuration errors
- [ ] Improve the `minikube status` command so that it can diagnose common issues
- [ ] Mark all features not covered by continuous integration as `experimental`
- [x] Stabilize and improve profiles support (AKA multi-cluster)
......@@ -95,30 +95,19 @@ minikube start
## Hyperkit driver
The Hyperkit driver will eventually replace the existing xhyve driver.
It is built from the minikube source tree, and uses [moby/hyperkit](http://github.com/moby/hyperkit) as a Go library.
To install the hyperkit driver via brew:
Install the [hyperkit](http://github.com/moby/hyperkit) VM manager using [brew](https://brew.sh):
```shell
brew install docker-machine-driver-hyperkit
# docker-machine-driver-hyperkit need root owner and uid
sudo chown root:wheel /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
sudo chmod u+s /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
brew install hyperkit
```
To install the hyperkit driver manually:
Then install the most recent version of minikube's fork of the hyperkit driver:
```shell
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit \
&& sudo install -o root -g wheel -m 4755 docker-machine-driver-hyperkit /usr/local/bin/
```
The hyperkit driver currently requires running as root to use the vmnet framework to set up networking.
If you encounter errors like `Could not find hyperkit executable`, you might need to install [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac).
If you are using [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html) in your setup and cluster creation fails (stuck at kube-dns initialization), you might need to add `listen-address=192.168.64.1` to `dnsmasq.conf`.
*Note: If `dnsmasq.conf` contains `listen-address=127.0.0.1`, Kubernetes discovers DNS at 127.0.0.1:53 and tries to reach it via the bridge IP address, but dnsmasq replies only to requests from 127.0.0.1.*
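A minimal sketch of that dnsmasq change, assuming a Homebrew-managed dnsmasq whose config lives at `/usr/local/etc/dnsmasq.conf` (adjust the path and restart step for your setup):

```shell
# Let dnsmasq answer queries arriving on the hyperkit vmnet bridge address.
echo 'listen-address=192.168.64.1' | sudo tee -a /usr/local/etc/dnsmasq.conf

# Restart dnsmasq so the new listen address takes effect.
sudo brew services restart dnsmasq
```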
......@@ -129,18 +118,12 @@ To use the driver:
minikube start --vm-driver hyperkit
```
or, to use hyperkit as a default driver:
or, to use hyperkit as a default driver for minikube:
```shell
minikube config set vm-driver hyperkit
```
and run minikube as usual:
```shell
minikube start
```
## HyperV driver
Hyper-V users may need to create a new external network switch as described [here](https://docs.docker.com/machine/drivers/hyper-v/). This step may prevent a problem in which `minikube start` hangs indefinitely, unable to ssh into the minikube virtual machine. If this happens, add the `--hyperv-virtual-switch=switch-name` argument to the `minikube start` command.
......
......@@ -13,7 +13,7 @@ Some features can only be accessed by environment variables, here is a list of t
* **MINIKUBE_HOME** - (string) sets the path for the .minikube directory that minikube uses for state/configuration
* **MINIKUBE_IN_COLOR** - (bool) manually sets whether or not emoji and colors should appear in minikube. Set to false or 0 to disable this feature, true or 1 to force it to be turned on.
* **MINIKUBE_IN_STYLE** - (bool) manually sets whether or not emoji and colors should appear in minikube. Set to false or 0 to disable this feature, true or 1 to force it to be turned on.
* **MINIKUBE_WANTUPDATENOTIFICATION** - (bool) sets whether the user wants an update notification for new minikube versions
......@@ -34,7 +34,7 @@ To make the exported variables permanent:
### Example: Disabling emoji
```shell
export MINIKUBE_IN_COLOR=false
export MINIKUBE_IN_STYLE=false
minikube start
```
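To make the preference permanent rather than per-session, the export can be added to a shell profile; a sketch assuming bash:

```shell
# Persist the style/emoji preference across shells.
echo 'export MINIKUBE_IN_STYLE=false' >> ~/.bashrc
```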
......
# Offline support in minikube
As of v1.0, `minikube start` is offline compatible out of the box. Here are some implementation details to help systems integrators:
## Requirements
* On the initial run for a given Kubernetes release, `minikube start` must have access to the internet, or a configured `--image-repository` to pull from.
## Cache location
* `~/.minikube/cache` - Top-level folder
* `~/.minikube/cache/iso` - VM ISO image. Typically updated once per major minikube release.
* `~/.minikube/cache/images` - Docker images used by Kubernetes.
* `~/.minikube/cache/<version>` - Kubernetes binaries, such as `kubeadm` and `kubelet`
## Sharing the minikube cache
For offline use on other hosts, one can copy the contents of `~/.minikube/cache`. As of the v1.0 release, this directory
contains 685MB of data:
```
cache/iso/minikube-v1.0.0.iso
cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
cache/images/k8s.gcr.io/kube-scheduler_v1.14.0
cache/images/k8s.gcr.io/coredns_1.3.1
cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0
cache/images/k8s.gcr.io/kube-apiserver_v1.14.0
cache/images/k8s.gcr.io/pause_3.1
cache/images/k8s.gcr.io/etcd_3.3.10
cache/images/k8s.gcr.io/kube-addon-manager_v9.0
cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
cache/images/k8s.gcr.io/kube-proxy_v1.14.0
cache/v1.14.0/kubeadm
cache/v1.14.0/kubelet
```
If any of these files exist, minikube will copy them into the VM directly rather than pulling them from the internet.
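A hedged sketch of sharing that cache with an offline machine, assuming SSH access and the default `~/.minikube` location on both hosts (`offline-host` is a placeholder):

```shell
# On a connected machine: populate the cache first (an initial `minikube start`,
# or the new --download-only flag, will do this).
minikube start --download-only

# Copy the cache to the offline host.
rsync -av ~/.minikube/cache/ offline-host:~/.minikube/cache/

# On the offline host: start without needing internet access.
minikube start
```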
......@@ -92,7 +92,7 @@ Some environment variables may be useful for using the `none` driver:
* **MINIKUBE_HOME**: Saves all files to this directory instead of $HOME
* **MINIKUBE_WANTUPDATENOTIFICATION**: Toggles the notification that your version of minikube is obsolete
* **MINIKUBE_WANTREPORTERRORPROMPT**: Toggles the error reporting prompt
* **MINIKUBE_IN_COLOR**: Toggles color output and emoji usage
* **MINIKUBE_IN_STYLE**: Toggles color output and emoji usage
## Known Issues
......@@ -101,5 +101,12 @@ Some environment variables may be useful for using the `none` driver:
* minikube with the `none` driver has a confusing permissions model, as some commands need to be run as root ("start"), and others by a regular user ("dashboard")
* CoreDNS detects resolver loop, goes into CrashloopBackoff - [#3511](https://github.com/kubernetes/minikube/issues/3511)
* Some versions of Linux have a version of docker that is newer than what Kubernetes expects. To override this, run minikube with the following parameters: `sudo -E minikube start --vm-driver=none --kubernetes-version v1.11.8 --extra-config kubeadm.ignore-preflight-errors=SystemVerification`
* On Ubuntu 18.04 (and probably others), because of how `systemd-resolve` is configured by default, one needs to bypass the default `resolv.conf` file and use a different one instead: `sudo -E minikube --vm-driver=none start --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf`
* On Ubuntu 18.04 (and probably others), because of how `systemd-resolve` is configured by default, one needs to bypass the default `resolv.conf` file and use a different one instead.
- In this case, you should use this file: `/run/systemd/resolve/resolv.conf`
- `sudo -E minikube --vm-driver=none start --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf`
- Apparently, though, if `resolv.conf` is too big (about 10 lines), one gets the following error: `Waiting for pods: apiserver proxy! Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition`
- This error happens in Kubernetes 1.11.x, 1.12.x and 1.13.x, but *not* in 1.14.x
- If that's your case, try this:
- `grep -E "^nameserver" /run/systemd/resolve/resolv.conf |head -n 3 > /tmp/resolv.conf && sudo -E minikube --vm-driver=none start --extra-config=kubelet.resolv-conf=/tmp/resolv.conf`
* [Full list of open 'none' driver issues](https://github.com/kubernetes/minikube/labels/co%2Fnone-driver)
#!/bin/sh
# Copyright 2019 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script executes the Kubernetes conformance tests in accordance with:
# https://github.com/cncf/k8s-conformance/blob/master/instructions.md
#
# Usage:
# conformance_tests.sh <path to minikube> <flags>
#
# Example:
# conformance_tests.sh ./out/minikube --vm-driver=hyperkit
set -ex -o pipefail
readonly PROFILE_NAME="k8sconformance"
readonly MINIKUBE=${1:-./out/minikube}
shift || true
readonly START_ARGS=$*
# Requires a fully running Kubernetes cluster.
"${MINIKUBE}" delete -p "${PROFILE_NAME}" || true
"${MINIKUBE}" start -p "${PROFILE_NAME}" $START_ARGS
"${MINIKUBE}" status -p "${PROFILE_NAME}"
kubectl get pods --all-namespaces
go get -u -v github.com/heptio/sonobuoy
sonobuoy run --wait
outdir="$(mktemp -d)"
sonobuoy retrieve "${outdir}"
cwd=$(pwd)
cd "${outdir}"
mkdir ./results; tar xzf *.tar.gz -C ./results
version=$(${MINIKUBE} version | cut -d" " -f3)
mkdir minikube-${version}
cd minikube-${version}
cat <<EOF >PRODUCT.yaml
vendor: minikube
name: minikube
version: ${version}
website_url: https://github.com/kubernetes/minikube
repo_url: https://github.com/kubernetes/minikube
documentation_url: https://github.com/kubernetes/minikube/blob/master/docs/README.md
product_logo_url: https://raw.githubusercontent.com/kubernetes/minikube/master/images/logo/logo.svg
type: installer
description: minikube runs a local Kubernetes cluster on macOS, Linux, and Windows.
EOF
cat <<EOF >README.md
./hack/conformance_tests.sh $MINIKUBE $START_ARGS
EOF
cp ../results/plugins/e2e/results/* .
cd ..
cp -r minikube-${version} ${cwd}
......@@ -63,10 +63,10 @@ curl -Lo minikube https://storage.googleapis.com/minikube/releases/${TAGNAME}/mi
Feel free to leave off \`\`\`sudo cp minikube /usr/local/bin/ && rm minikube\`\`\` if you would like to add minikube to your path manually.
### Debian Package (.deb) [Experimental]
Download the \`minikube_${DEB_VERSION}.deb\` file, and install it using \`sudo dpkg -i minikube_$(DEB_VERSION).deb\`
Download the \`minikube_${DEB_VERSION}.deb\` file, and install it using \`sudo dpkg -i minikube_${DEB_VERSION}.deb\`
### RPM Package (.rpm) [Experimental]
Download the \`minikube-${RPM_VERSION}.rpm\` file, and install it using \`sudo rpm -i minikube-$(RPM_VERSION).rpm\`
Download the \`minikube-${RPM_VERSION}.rpm\` file, and install it using \`sudo rpm -i minikube-${RPM_VERSION}.rpm\`
### Windows [Experimental]
Download the \`minikube-windows-amd64.exe\` file, rename it to \`minikube.exe\` and add it to your path.
......
images/start.jpg: updated screenshot (66.6 KB → 59.6 KB)
......@@ -43,7 +43,7 @@ type Bootstrapper interface {
LogCommands(LogOptions) map[string]string
SetupCerts(cfg config.KubernetesConfig) error
GetKubeletStatus() (string, error)
GetAPIServerStatus(net.IP) (string, error)
GetAPIServerStatus(net.IP, int) (string, error)
}
const (
......
......@@ -118,8 +118,8 @@ func (k *Bootstrapper) GetKubeletStatus() (string, error) {
}
// GetAPIServerStatus returns the api-server status
func (k *Bootstrapper) GetAPIServerStatus(ip net.IP) (string, error) {
url := fmt.Sprintf("https://%s:%d/healthz", ip, util.APIServerPort)
func (k *Bootstrapper) GetAPIServerStatus(ip net.IP, apiserverPort int) (string, error) {
url := fmt.Sprintf("https://%s:%d/healthz", ip, apiserverPort)
// To avoid: x509: certificate signed by unknown authority
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
......
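The check that `GetAPIServerStatus` performs can be reproduced by hand; a sketch assuming the default port 8443 and skipping certificate verification, mirroring the `InsecureSkipVerify` transport above:

```shell
# Query the apiserver health endpoint directly, as the bootstrapper does.
curl -sk "https://$(minikube ip):8443/healthz"
# Prints "ok" when the apiserver is healthy.
```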
......@@ -17,458 +17,209 @@ limitations under the License.
package kubeadm
import (
"strings"
"fmt"
"io/ioutil"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/pmezard/go-difflib/difflib"
"k8s.io/minikube/pkg/minikube/config"
"k8s.io/minikube/pkg/minikube/constants"
"k8s.io/minikube/pkg/minikube/cruntime"
"k8s.io/minikube/pkg/util"
)
const (
newMajor = "v1.14.0"
recentMajor = "v1.13.0"
oldMajor = "v1.12.0"
obsoleteMajor = "v1.10.0"
)
func TestGenerateKubeletConfig(t *testing.T) {
tests := []struct {
description string
cfg config.KubernetesConfig
expectedCfg string
expected string
shouldErr bool
}{
{
description: "docker runtime",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.100",
KubernetesVersion: "v1.1.0",
KubernetesVersion: recentMajor,
NodeName: "minikube",
ContainerRuntime: "docker",
},
expectedCfg: `
expected: `
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cadvisor-port=0 --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --require-kubeconfig=true
ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests
[Install]
`,
},
{
description: "cri runtime",
description: "newest cri runtime",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.100",
KubernetesVersion: "v1.1.0",
KubernetesVersion: constants.NewestKubernetesVersion,
NodeName: "minikube",
ContainerRuntime: "cri-o",
},
expectedCfg: `
expected: `
[Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cadvisor-port=0 --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --require-kubeconfig=true --runtime-request-timeout=15m
ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --fail-swap-on=false --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --runtime-request-timeout=15m
[Install]
`,
},
{
description: "docker runtime with custom image repository",
description: "docker with custom image repository",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.100",
KubernetesVersion: "v1.1.0",
KubernetesVersion: constants.DefaultKubernetesVersion,
NodeName: "minikube",
ContainerRuntime: "docker",
ImageRepository: "docker-proxy-image.io/google_containers",
},
expectedCfg: `
expected: `
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cadvisor-port=0 --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-infra-container-image=docker-proxy-image.io/google_containers/pause:3.0 --pod-manifest-path=/etc/kubernetes/manifests --require-kubeconfig=true
ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-infra-container-image=docker-proxy-image.io/google_containers/pause:3.1 --pod-manifest-path=/etc/kubernetes/manifests
[Install]
`,
},
}
for _, test := range tests {
t.Run(test.description, func(t *testing.T) {
runtime, err := cruntime.New(cruntime.Config{Type: test.cfg.ContainerRuntime})
for _, tc := range tests {
t.Run(tc.description, func(t *testing.T) {
runtime, err := cruntime.New(cruntime.Config{Type: tc.cfg.ContainerRuntime})
if err != nil {
t.Fatalf("runtime: %v", err)
}
actualCfg, err := NewKubeletConfig(test.cfg, runtime)
if err != nil && !test.shouldErr {
got, err := NewKubeletConfig(tc.cfg, runtime)
if err != nil && !tc.shouldErr {
t.Errorf("got unexpected error generating config: %v", err)
return
}
if err == nil && test.shouldErr {
t.Errorf("expected error but got none, config: %s", actualCfg)
if err == nil && tc.shouldErr {
t.Errorf("expected error but got none, config: %s", got)
return
}
if diff := cmp.Diff(test.expectedCfg, actualCfg); diff != "" {
t.Errorf("actual config does not match expected. (-want +got)\n%s", diff)
diff, err := difflib.GetUnifiedDiffString(difflib.UnifiedDiff{
A: difflib.SplitLines(tc.expected),
B: difflib.SplitLines(got),
FromFile: "Expected",
ToFile: "Got",
Context: 1,
})
if err != nil {
t.Fatalf("diff error: %v", err)
}
if diff != "" {
t.Errorf("unexpected diff:\n%s", diff)
}
})
}
}
func TestGenerateConfig(t *testing.T) {
tests := []struct {
description string
cfg config.KubernetesConfig
expectedCfg string
shouldErr bool
}{
{
description: "no extra args",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.100",
KubernetesVersion: "v1.10.0",
NodeName: "minikube",
},
expectedCfg: `apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 192.168.1.100
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: minikube
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
`,
},
{
description: "extra args all components",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.101",
KubernetesVersion: "v1.10.0-alpha.0",
NodeName: "extra-args-minikube",
ExtraOptions: util.ExtraOptionSlice{
util.ExtraOption{
Component: Apiserver,
Key: "fail-no-swap",
Value: "true",
},
util.ExtraOption{
Component: ControllerManager,
Key: "kube-api-burst",
Value: "32",
},
util.ExtraOption{
Component: Scheduler,
Key: "scheduler-name",
Value: "mini-scheduler",
},
},
},
expectedCfg: `apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 192.168.1.101
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0-alpha.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: extra-args-minikube
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
controllerManagerExtraArgs:
kube-api-burst: "32"
schedulerExtraArgs:
scheduler-name: "mini-scheduler"
`,
extraOpts := util.ExtraOptionSlice{
util.ExtraOption{
Component: Apiserver,
Key: "fail-no-swap",
Value: "true",
},
{
description: "extra args, v1.14.0",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.101",
KubernetesVersion: "v1.14.0-beta1",
NodeName: "extra-args-minikube-114",
ExtraOptions: util.ExtraOptionSlice{
util.ExtraOption{
Component: Apiserver,
Key: "fail-no-swap",
Value: "true",
},
util.ExtraOption{
Component: ControllerManager,
Key: "kube-api-burst",
Value: "32",
},
},
},
expectedCfg: `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.1.101
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: extra-args-minikube-114
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"fail-no-swap: "true"
controllerManager:
extraArgs:
kube-api-burst: "32"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0-beta1
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
`,
}, {
description: "two extra args for one component",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.101",
KubernetesVersion: "v1.10.0-alpha.0",
NodeName: "extra-args-minikube",
ExtraOptions: util.ExtraOptionSlice{
util.ExtraOption{
Component: Apiserver,
Key: "fail-no-swap",
Value: "true",
},
util.ExtraOption{
Component: Apiserver,
Key: "kube-api-burst",
Value: "32",
},
},
},
expectedCfg: `apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 192.168.1.101
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0-alpha.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: extra-args-minikube
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
kube-api-burst: "32"
`,
util.ExtraOption{
Component: ControllerManager,
Key: "kube-api-burst",
Value: "32",
},
{
description: "enable feature gates",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.101",
KubernetesVersion: "v1.10.0-alpha.0",
NodeName: "extra-args-minikube",
FeatureGates: "HugePages=true,OtherFeature=false",
},
expectedCfg: `apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 192.168.1.101
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0-alpha.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: extra-args-minikube
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
feature-gates: "HugePages=true,OtherFeature=false"
controllerManagerExtraArgs:
feature-gates: "HugePages=true,OtherFeature=false"
kubeadmExtraArgs:
feature-gates: "HugePages=true,OtherFeature=false"
schedulerExtraArgs:
feature-gates: "HugePages=true,OtherFeature=false"
`,
},
{
description: "enable feature gates and extra config",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.101",
KubernetesVersion: "v1.10.0-alpha.0",
NodeName: "extra-args-minikube",
FeatureGates: "HugePages=true,OtherFeature=false",
ExtraOptions: util.ExtraOptionSlice{
util.ExtraOption{
Component: Apiserver,
Key: "fail-no-swap",
Value: "true",
},
},
},
expectedCfg: `apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 192.168.1.101
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0-alpha.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: extra-args-minikube
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
feature-gates: "HugePages=true,OtherFeature=false"
controllerManagerExtraArgs:
feature-gates: "HugePages=true,OtherFeature=false"
kubeadmExtraArgs:
feature-gates: "HugePages=true,OtherFeature=false"
schedulerExtraArgs:
feature-gates: "HugePages=true,OtherFeature=false"
`,
},
{
// Unknown components should fail silently
description: "unknown component",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.101",
KubernetesVersion: "v1.8.0-alpha.0",
NodeName: "extra-args-minikube",
ExtraOptions: util.ExtraOptionSlice{
util.ExtraOption{
Component: "not-a-real-component",
Key: "killswitch",
Value: "true",
},
},
},
shouldErr: true,
},
{
description: "custom api server port",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.100",
NodePort: 18443,
KubernetesVersion: "v1.10.0",
NodeName: "minikube",
},
expectedCfg: `apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 192.168.1.100
bindPort: 18443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: minikube
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
`,
},
{
description: "custom image repository",
cfg: config.KubernetesConfig{
NodeIP: "192.168.1.100",
KubernetesVersion: "v1.10.0",
NodeName: "minikube",
ImageRepository: "docker-proxy-image.io/google_containers",
},
expectedCfg: `apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 192.168.1.100
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: minikube
imageRepository: docker-proxy-image.io/google_containers
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
`,
util.ExtraOption{
Component: Scheduler,
Key: "scheduler-name",
Value: "mini-scheduler",
},
}
for _, test := range tests {
runtime, err := cruntime.New(cruntime.Config{Type: "docker"})
if err != nil {
t.Fatalf("runtime: %v", err)
}
t.Run(test.description, func(t *testing.T) {
gotCfg, err := generateConfig(test.cfg, runtime)
if err != nil && !test.shouldErr {
t.Errorf("got unexpected error generating config: %v", err)
return
}
if err == nil && test.shouldErr {
t.Errorf("expected error but got none, config: %s", gotCfg)
return
}
// Test version policy: Last 4 major releases (slightly looser than our general policy)
versions := map[string]string{
"default": constants.DefaultKubernetesVersion,
"new": newMajor,
"recent": recentMajor,
"old": oldMajor,
"obsolete": obsoleteMajor,
}
// cmp.Diff doesn't present diffs of multi-line text well
gotSplit := strings.Split(gotCfg, "\n")
wantSplit := strings.Split(test.expectedCfg, "\n")
if diff := cmp.Diff(wantSplit, gotSplit); diff != "" {
t.Errorf("unexpected diff: (-want +got)\n%s\ngot: %s\n", diff, gotCfg)
tests := []struct {
name string
runtime string
shouldErr bool
cfg config.KubernetesConfig
}{
{"default", "docker", false, config.KubernetesConfig{}},
{"containerd", "containerd", false, config.KubernetesConfig{}},
{"crio", "crio", false, config.KubernetesConfig{}},
{"options", "docker", false, config.KubernetesConfig{ExtraOptions: extraOpts}},
{"crio-options-gates", "crio", false, config.KubernetesConfig{ExtraOptions: extraOpts, FeatureGates: "a=b"}},
{"unknown-omponent", "docker", true, config.KubernetesConfig{ExtraOptions: util.ExtraOptionSlice{util.ExtraOption{Component: "not-a-real-component", Key: "killswitch", Value: "true"}}}},
{"containerd-api-port", "containerd", false, config.KubernetesConfig{NodePort: 12345}},
{"image-repository", "docker", false, config.KubernetesConfig{ImageRepository: "test/repo"}},
}
for vname, version := range versions {
for _, tc := range tests {
runtime, err := cruntime.New(cruntime.Config{Type: tc.runtime})
if err != nil {
t.Fatalf("runtime: %v", err)
}
tname := tc.name + "__" + vname
t.Run(tname, func(t *testing.T) {
cfg := tc.cfg
cfg.NodeIP = "1.1.1.1"
cfg.NodeName = "mk"
cfg.KubernetesVersion = version
got, err := generateConfig(cfg, runtime)
if err != nil && !tc.shouldErr {
t.Fatalf("got unexpected error generating config: %v", err)
}
if err == nil && tc.shouldErr {
t.Fatalf("expected error but got none, config: %s", got)
}
if tc.shouldErr {
return
}
expected, err := ioutil.ReadFile(fmt.Sprintf("testdata/%s.yaml", tname))
if err != nil {
t.Fatalf("unable to read testdata: %v", err)
}
diff, err := difflib.GetUnifiedDiffString(difflib.UnifiedDiff{
A: difflib.SplitLines(string(expected)),
B: difflib.SplitLines(got),
FromFile: "Expected",
ToFile: "Got",
Context: 1,
})
if err != nil {
t.Fatalf("diff error: %v", err)
}
if diff != "" {
t.Errorf("unexpected diff:\n%s\n===== [RAW OUTPUT] =====\n%s", diff, got)
}
})
}
}
}
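For orientation, the table above is keyed by test case name plus a version label, and each combination maps to one golden file. A minimal sketch of how a single path is derived, assuming an fmt import and using names that appear in the tables shown here (the full matrix also covers the new/recent/old/obsolete labels):

func exampleGoldenPath() string {
    // tname = tc.name + "__" + vname, as in the loop above
    tname := "image-repository" + "__" + "default"
    return fmt.Sprintf("testdata/%s.yaml", tname) // => testdata/image-repository__default.yaml
}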
......@@ -56,12 +56,12 @@ apiEndpoint:
advertiseAddress: {{.AdvertiseAddress}}
bindPort: {{.APIServerPort}}
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: {{if .CRISocket}}{{.CRISocket}}{{else}}/var/run/dockershim.sock{{end}}
name: {{.NodeName}}
......@@ -72,9 +72,10 @@ kind: ClusterConfiguration
{{if .ImageRepository}}imageRepository: {{.ImageRepository}}
{{end}}{{range .ExtraArgs}}{{.Component}}ExtraArgs:{{range $i, $val := printMapInOrder .Options ": " }}
{{$val}}{{end}}
{{end}}{{if .FeatureArgs}}featureGates: {{range $i, $val := .FeatureArgs}}
{{end -}}
{{if .FeatureArgs}}featureGates: {{range $i, $val := .FeatureArgs}}
{{$i}}: {{$val}}{{end}}
{{end}}
{{end -}}
certificatesDir: {{.CertDir}}
clusterName: kubernetes
controlPlaneEndpoint: localhost:{{.APIServerPort}}
......@@ -104,12 +105,12 @@ localAPIEndpoint:
advertiseAddress: {{.AdvertiseAddress}}
bindPort: {{.APIServerPort}}
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: {{if .CRISocket}}{{.CRISocket}}{{else}}/var/run/dockershim.sock{{end}}
name: {{.NodeName}}
......@@ -117,13 +118,16 @@ nodeRegistration:
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
{{if .ImageRepository}}imageRepository: {{.ImageRepository}}
{{ if .ImageRepository}}imageRepository: {{.ImageRepository}}
{{end}}{{range .ExtraArgs}}{{.Component}}:
extraArgs:
{{range $i, $val := printMapInOrder .Options ": " }}{{$val}}{{end}}
{{end}}{{if .FeatureArgs}}featureGates: {{range $i, $val := .FeatureArgs}}
{{$i}}: {{$val}}{{end}}
{{- range $i, $val := printMapInOrder .Options ": " }}
{{$val}}
{{- end}}
{{end -}}
{{if .FeatureArgs}}featureGates:
{{range $i, $val := .FeatureArgs}}{{$i}}: {{$val}}
{{end -}}{{end -}}
certificatesDir: {{.CertDir}}
clusterName: kubernetes
controlPlaneEndpoint: localhost:{{.APIServerPort}}
......
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 12345
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:12345
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 12345
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:12345
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 1.1.1.1
bindPort: 12345
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: mk
criSocket: /run/containerd/containerd.sock
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 12345
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:12345
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.12.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 12345
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:12345
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.13.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 1.1.1.1
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: mk
criSocket: /run/containerd/containerd.sock
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.12.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.13.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/crio/crio.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
feature-gates: "a=b"
controllerManager:
extraArgs:
feature-gates: "a=b"
kube-api-burst: "32"
kubeadm:
extraArgs:
feature-gates: "a=b"
scheduler:
extraArgs:
feature-gates: "a=b"
scheduler-name: "mini-scheduler"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/crio/crio.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
feature-gates: "a=b"
controllerManager:
extraArgs:
feature-gates: "a=b"
kube-api-burst: "32"
kubeadm:
extraArgs:
feature-gates: "a=b"
scheduler:
extraArgs:
feature-gates: "a=b"
scheduler-name: "mini-scheduler"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 1.1.1.1
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: mk
criSocket: /var/run/crio/crio.sock
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
feature-gates: "a=b"
controllerManagerExtraArgs:
feature-gates: "a=b"
kube-api-burst: "32"
kubeadmExtraArgs:
feature-gates: "a=b"
schedulerExtraArgs:
feature-gates: "a=b"
scheduler-name: "mini-scheduler"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/crio/crio.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
feature-gates: "a=b"
controllerManagerExtraArgs:
feature-gates: "a=b"
kube-api-burst: "32"
kubeadmExtraArgs:
feature-gates: "a=b"
schedulerExtraArgs:
feature-gates: "a=b"
scheduler-name: "mini-scheduler"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.12.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/crio/crio.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
feature-gates: "a=b"
controllerManagerExtraArgs:
feature-gates: "a=b"
kube-api-burst: "32"
kubeadmExtraArgs:
feature-gates: "a=b"
schedulerExtraArgs:
feature-gates: "a=b"
scheduler-name: "mini-scheduler"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.13.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/crio/crio.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/crio/crio.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 1.1.1.1
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: mk
criSocket: /var/run/crio/crio.sock
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/crio/crio.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.12.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/crio/crio.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.13.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 1.1.1.1
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: mk
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.12.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.13.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: test/repo
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: test/repo
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 1.1.1.1
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: mk
imageRepository: test/repo
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
imageRepository: test/repo
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.12.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
imageRepository: test/repo
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.13.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
controllerManager:
extraArgs:
kube-api-burst: "32"
scheduler:
extraArgs:
scheduler-name: "mini-scheduler"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
controllerManager:
extraArgs:
kube-api-burst: "32"
scheduler:
extraArgs:
scheduler-name: "mini-scheduler"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.14.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
noTaintMaster: true
api:
advertiseAddress: 1.1.1.1
bindPort: 8443
controlPlaneEndpoint: localhost
kubernetesVersion: v1.10.0
certificatesDir: /var/lib/minikube/certs/
networking:
serviceSubnet: 10.96.0.0/12
etcd:
dataDir: /data/minikube
nodeName: mk
apiServerExtraArgs:
admission-control: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
controllerManagerExtraArgs:
kube-api-burst: "32"
schedulerExtraArgs:
scheduler-name: "mini-scheduler"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
controllerManagerExtraArgs:
kube-api-burst: "32"
schedulerExtraArgs:
scheduler-name: "mini-scheduler"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.12.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: 1.1.1.1
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: mk
taints: []
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
enable-admission-plugins: "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
fail-no-swap: "true"
controllerManagerExtraArgs:
kube-api-burst: "32"
schedulerExtraArgs:
scheduler-name: "mini-scheduler"
certificatesDir: /var/lib/minikube/certs/
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
etcd:
local:
dataDir: /data/minikube
kubernetesVersion: v1.13.0
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
......@@ -23,6 +23,7 @@ import (
"strconv"
"strings"
"github.com/golang/glog"
"github.com/pkg/errors"
)
......@@ -31,9 +32,9 @@ type MountConfig struct {
// Type is the filesystem type (Typically 9p)
Type string
// UID is the user ID or username which this path will be mounted as
UID int
UID string
// GID is the group ID or group name which this path will be mounted as
GID int
GID string
// Version is the 9P protocol version. Valid options: 9p2000, 9p2000.u, 9p2000.L
Version string
// MSize is the number of bytes to use for 9p packet payload
......@@ -46,30 +47,59 @@ type MountConfig struct {
Options map[string]string
}
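Since UID and GID are now strings, callers can pass either numeric IDs or names. A minimal sketch of such a config, assuming this package's MountConfig type and an os import; the concrete values (user "docker", mode 0755, the cache option) are illustrative assumptions, not taken from this change:

func exampleMountConfig() *MountConfig {
    return &MountConfig{
        Type:    "9p",
        Mode:    os.FileMode(0755), // assumed mode, for illustration only
        UID:     "docker",          // a username; resolved on the guest via `id -u docker`
        GID:     "docker",          // a group name; resolved from /etc/group (getent is not on the ISO)
        Version: "9p2000.L",
        Options: map[string]string{"cache": "fscache"},
    }
}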
// hostRunner is the subset of host.Host used for mounting
type hostRunner interface {
RunSSHCommand(cmd string) (string, error)
// mountRunner is the subset of CommandRunner used for mounting
type mountRunner interface {
CombinedOutput(string) (string, error)
}
// Mount runs the mount command from the 9p client on the VM to the 9p server on the host
func Mount(h hostRunner, source string, target string, c *MountConfig) error {
if err := Unmount(h, target); err != nil {
func Mount(r mountRunner, source string, target string, c *MountConfig) error {
if err := Unmount(r, target); err != nil {
return errors.Wrap(err, "umount")
}
cmd := fmt.Sprintf("sudo mkdir -m %o -p %s && %s", c.Mode, target, mntCmd(source, target, c))
out, err := h.RunSSHCommand(cmd)
glog.Infof("Will run: %s", cmd)
out, err := r.CombinedOutput(cmd)
glog.Infof("mount err=%s, out=%s", err, out)
if err != nil {
return errors.Wrap(err, out)
}
return nil
}
// resolveUID returns either a raw UID number, or the subshell command that resolves it.
func resolveUID(id string) string {
_, err := strconv.ParseInt(id, 10, 64)
if err == nil {
return id
}
// Preserve behavior where unset ID == 0
if id == "" {
return "0"
}
return fmt.Sprintf(`$(id -u %s)`, id)
}
// resolveGID returns either a raw GID number, or the subshell command that resolves it.
func resolveGID(id string) string {
_, err := strconv.ParseInt(id, 10, 64)
if err == nil {
return id
}
// Preserve behavior where unset ID == 0
if id == "" {
return "0"
}
// Because `getent` isn't part of our ISO
return fmt.Sprintf(`$(grep ^%s: /etc/group | cut -d: -f3)`, id)
}
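To make the behavior of the two resolvers concrete, a small sketch (same package assumed, plus an fmt import) of the shell fragments they produce:

func resolveExamples() {
    fmt.Println(resolveUID("0"))      // "0" - already numeric, passed through
    fmt.Println(resolveUID(""))       // "0" - unset still defaults to root
    fmt.Println(resolveUID("docker")) // "$(id -u docker)"
    fmt.Println(resolveGID("docker")) // "$(grep ^docker: /etc/group | cut -d: -f3)"
}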
// mntCmd returns a mount command based on a config.
func mntCmd(source string, target string, c *MountConfig) string {
options := map[string]string{
"dfltgid": strconv.Itoa(c.GID),
"dfltuid": strconv.Itoa(c.UID),
"dfltgid": resolveGID(c.GID),
"dfltuid": resolveUID(c.UID),
}
if c.Port != 0 {
options["port"] = strconv.Itoa(c.Port)
......@@ -100,9 +130,31 @@ func mntCmd(source string, target string, c *MountConfig) string {
return fmt.Sprintf("sudo mount -t %s -o %s %s %s", c.Type, strings.Join(opts, ","), source, target)
}
// umountCmd returns a command for unmounting
func umountCmd(target string, force bool) string {
flag := ""
if force {
flag = "-f "
}
// grep because findmnt will also display the parent!
return fmt.Sprintf("findmnt -T %s | grep %s && sudo umount %s%s || true", target, target, flag, target)
}
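For reference, a sketch (same package, fmt imported) of the commands this helper emits for a /mnt target, matching the updated expectations in mount_test.go later in this diff:

func umountCmdExamples() {
    fmt.Println(umountCmd("/mnt", false)) // findmnt -T /mnt | grep /mnt && sudo umount /mnt || true
    fmt.Println(umountCmd("/mnt", true))  // findmnt -T /mnt | grep /mnt && sudo umount -f /mnt || true
}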
// Unmount unmounts a path
func Unmount(h hostRunner, target string) error {
out, err := h.RunSSHCommand(fmt.Sprintf("findmnt -T %s && sudo umount %s || true", target, target))
func Unmount(r mountRunner, target string) error {
cmd := umountCmd(target, false)
glog.Infof("Will run: %s", cmd)
out, err := r.CombinedOutput(cmd)
if err == nil {
return nil
}
glog.Warningf("initial unmount error: %v, out: %s", err, out)
// Try again, using force if needed.
cmd = umountCmd(target, true)
glog.Infof("Will run: %s", cmd)
out, err = r.CombinedOutput(cmd)
glog.Infof("unmount force err=%v, out=%s", err, out)
if err != nil {
return errors.Wrap(err, out)
}
......
......@@ -23,19 +23,19 @@ import (
"github.com/google/go-cmp/cmp"
)
type mockMountHost struct {
type mockMountRunner struct {
cmds []string
T *testing.T
}
func newMockMountHost(t *testing.T) *mockMountHost {
return &mockMountHost{
func newMockMountRunner(t *testing.T) *mockMountRunner {
return &mockMountRunner{
T: t,
cmds: []string{},
}
}
func (m *mockMountHost) RunSSHCommand(cmd string) (string, error) {
func (m *mockMountRunner) CombinedOutput(cmd string) (string, error) {
m.cmds = append(m.cmds, cmd)
return "", nil
}
......@@ -54,20 +54,30 @@ func TestMount(t *testing.T) {
target: "target",
cfg: &MountConfig{Type: "9p", Mode: os.FileMode(0700)},
want: []string{
"findmnt -T target && sudo umount target || true",
"findmnt -T target | grep target && sudo umount target || true",
"sudo mkdir -m 700 -p target && sudo mount -t 9p -o dfltgid=0,dfltuid=0 src target",
},
},
{
name: "named uid",
source: "src",
target: "target",
cfg: &MountConfig{Type: "9p", Mode: os.FileMode(0700), UID: "docker", GID: "docker"},
want: []string{
"findmnt -T target | grep target && sudo umount target || true",
"sudo mkdir -m 700 -p target && sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker) src target",
},
},
{
name: "everything",
source: "10.0.0.1",
target: "/target",
cfg: &MountConfig{Type: "9p", Mode: os.FileMode(0777), UID: 82, GID: 72, Version: "9p2000.u", Options: map[string]string{
cfg: &MountConfig{Type: "9p", Mode: os.FileMode(0777), UID: "82", GID: "72", Version: "9p2000.u", Options: map[string]string{
"noextend": "",
"cache": "fscache",
}},
want: []string{
"findmnt -T /target && sudo umount /target || true",
"findmnt -T /target | grep /target && sudo umount /target || true",
"sudo mkdir -m 777 -p /target && sudo mount -t 9p -o cache=fscache,dfltgid=72,dfltuid=82,noextend,version=9p2000.u 10.0.0.1 /target",
},
},
......@@ -79,19 +89,19 @@ func TestMount(t *testing.T) {
"version": "9p2000.L",
}},
want: []string{
"findmnt -T tgt && sudo umount tgt || true",
"findmnt -T tgt | grep tgt && sudo umount tgt || true",
"sudo mkdir -m 700 -p tgt && sudo mount -t 9p -o dfltgid=0,dfltuid=0,version=9p2000.L src tgt",
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
h := newMockMountHost(t)
err := Mount(h, tc.source, tc.target, tc.cfg)
r := newMockMountRunner(t)
err := Mount(r, tc.source, tc.target, tc.cfg)
if err != nil {
t.Fatalf("Mount(%s, %s, %+v): %v", tc.source, tc.target, tc.cfg, err)
}
if diff := cmp.Diff(h.cmds, tc.want); diff != "" {
if diff := cmp.Diff(r.cmds, tc.want); diff != "" {
t.Errorf("command diff (-want +got): %s", diff)
}
})
......@@ -99,14 +109,14 @@ func TestMount(t *testing.T) {
}
func TestUnmount(t *testing.T) {
h := newMockMountHost(t)
err := Unmount(h, "/mnt")
r := newMockMountRunner(t)
err := Unmount(r, "/mnt")
if err != nil {
t.Fatalf("Unmount(/mnt): %v", err)
}
want := []string{"findmnt -T /mnt && sudo umount /mnt || true"}
if diff := cmp.Diff(h.cmds, want); diff != "" {
want := []string{"findmnt -T /mnt | grep /mnt && sudo umount /mnt || true"}
if diff := cmp.Diff(r.cmds, want); diff != "" {
t.Errorf("command diff (-want +got): %s", diff)
}
}
......@@ -40,7 +40,7 @@ import (
// console.SetErrFile(os.Stderr)
// console.Fatal("Oh no, everything failed.")
// NOTE: If you do not want colorized output, set MINIKUBE_IN_COLOR=false in your environment.
// NOTE: If you do not want colorized output, set MINIKUBE_IN_STYLE=false in your environment.
var (
// outFile is where Out* functions send output to. Set using SetOutFile()
......@@ -54,7 +54,7 @@ var (
// useColor is whether or not color output should be used, updated by Set*Writer.
useColor = false
// OverrideEnv is the environment variable used to override color/emoji usage
OverrideEnv = "MINIKUBE_IN_COLOR"
OverrideEnv = "MINIKUBE_IN_STYLE"
)
// fdWriter is the subset of file.File that implements io.Writer and Fd()
......@@ -70,7 +70,7 @@ func HasStyle(style string) bool {
// OutStyle writes a stylized and formatted message to stdout
func OutStyle(style, format string, a ...interface{}) error {
OutStyle, err := applyStyle(style, useColor, fmt.Sprintf(format, a...))
outStyled, err := applyStyle(style, useColor, format, a...)
if err != nil {
glog.Errorf("applyStyle(%s): %v", style, err)
if oerr := OutLn(format, a...); oerr != nil {
......@@ -78,7 +78,12 @@ func OutStyle(style, format string, a ...interface{}) error {
}
return err
}
return Out(OutStyle)
// escape any outstanding '%' signs so that they don't get interpreted
// as a formatting directive down the line
outStyled = strings.Replace(outStyled, "%", "%%", -1)
return Out(outStyled)
}
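The escaping matters because the styled string has already had its arguments substituted, so any literal '%' that arrived via those arguments would otherwise be re-interpreted by the next Printf-style call. A standalone sketch of the failure mode (not minikube code):

package main

import (
    "fmt"
    "strings"
)

func main() {
    arg := "100%s" // a user-supplied value that happens to contain a verb
    styled := fmt.Sprintf("usage: %s", arg)
    fmt.Printf(styled + "\n") // prints "usage: 100%!s(MISSING)" - the stray verb is misread
    escaped := strings.Replace(styled, "%", "%%", -1)
    fmt.Printf(escaped + "\n") // prints "usage: 100%s" - doubling keeps it literal
}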
// Out writes a basic formatted string to stdout
......@@ -101,7 +106,7 @@ func OutLn(format string, a ...interface{}) error {
// ErrStyle writes a stylized and formatted error message to stderr
func ErrStyle(style, format string, a ...interface{}) error {
format, err := applyStyle(style, useColor, fmt.Sprintf(format, a...))
format, err := applyStyle(style, useColor, format, a...)
if err != nil {
glog.Errorf("applyStyle(%s): %v", style, err)
if oerr := ErrLn(format, a...); oerr != nil {
......@@ -109,6 +114,11 @@ func ErrStyle(style, format string, a ...interface{}) error {
}
return err
}
// escape any outstanding '%' signs so that they don't get interpreted
// as a formatting directive down the line
format = strings.Replace(format, "%", "%%", -1)
return Err(format)
}
......@@ -192,8 +202,8 @@ func SetErrFile(w fdWriter) {
func wantsColor(fd uintptr) bool {
// First process the environment: we allow users to force colors on or off.
//
// MINIKUBE_IN_COLOR=[1, T, true, TRUE]
// MINIKUBE_IN_COLOR=[0, f, false, FALSE]
// MINIKUBE_IN_STYLE=[1, T, true, TRUE]
// MINIKUBE_IN_STYLE=[0, f, false, FALSE]
//
// If unset, we try to automatically determine suitability from the environment.
val := os.Getenv(OverrideEnv)
......
......@@ -51,22 +51,37 @@ func TestOutStyle(t *testing.T) {
style string
envValue string
message string
params []interface{}
want string
}{
{"happy", "true", "This is happy.", "😄 This is happy.\n"},
{"Docker", "true", "This is Docker.", "🐳 This is Docker.\n"},
{"option", "true", "This is option.", " ▪ This is option.\n"},
{"happy", "false", "This is happy.", "o This is happy.\n"},
{"Docker", "false", "This is Docker.", "- This is Docker.\n"},
{"option", "false", "This is option.", " - This is option.\n"},
{"happy", "true", "This is happy.", nil, "😄 This is happy.\n"},
{"Docker", "true", "This is Docker.", nil, "🐳 This is Docker.\n"},
{"option", "true", "This is option.", nil, " ▪ This is option.\n"},
{
"option",
"true",
"Message with params: %s %s",
[]interface{}{"encode '%' signs", "%s%%%d"},
" ▪ Message with params: encode '%' signs %s%%%d\n",
},
{"happy", "false", "This is happy.", nil, "o This is happy.\n"},
{"Docker", "false", "This is Docker.", nil, "- This is Docker.\n"},
{"option", "false", "This is option.", nil, " - This is option.\n"},
{
"option",
"false",
"Message with params: %s %s",
[]interface{}{"encode '%' signs", "%s%%%d"},
" - Message with params: encode '%' signs %s%%%d\n",
},
}
for _, tc := range tests {
t.Run(tc.style+"-"+tc.envValue, func(t *testing.T) {
os.Setenv(OverrideEnv, tc.envValue)
f := newFakeFile()
SetOutFile(f)
if err := OutStyle(tc.style, tc.message); err != nil {
if err := OutStyle(tc.style, tc.message, tc.params...); err != nil {
t.Errorf("unexpected error: %q", err)
}
got := f.String()
......@@ -94,6 +109,7 @@ func TestOut(t *testing.T) {
{format: "xyz123", want: "xyz123"},
{format: "Installing Kubernetes version %s ...", lang: language.Arabic, arg: "v1.13", want: "... v1.13 تثبيت Kubernetes الإصدار"},
{format: "Installing Kubernetes version %s ...", lang: language.AmericanEnglish, arg: "v1.13", want: "Installing Kubernetes version v1.13 ..."},
{format: "Parameter encoding: %s", arg: "%s%%%d", want: "Parameter encoding: %s%%%d"},
}
for _, tc := range tests {
t.Run(tc.format, func(t *testing.T) {
......@@ -116,13 +132,13 @@ func TestErr(t *testing.T) {
os.Setenv(OverrideEnv, "0")
f := newFakeFile()
SetErrFile(f)
if err := Err("xyz123\n"); err != nil {
if err := Err("xyz123 %s\n", "%s%%%d"); err != nil {
t.Errorf("unexpected error: %q", err)
}
OutLn("unrelated message")
got := f.String()
want := "xyz123\n"
want := "xyz123 %s%%%d\n"
if got != want {
t.Errorf("Err() = %q, want %q", got, want)
......@@ -133,11 +149,11 @@ func TestErrStyle(t *testing.T) {
os.Setenv(OverrideEnv, "1")
f := newFakeFile()
SetErrFile(f)
if err := ErrStyle("fatal", "It broke"); err != nil {
if err := ErrStyle("fatal", "error: %s", "%s%%%d"); err != nil {
t.Errorf("unexpected error: %q", err)
}
got := f.String()
want := "💣 It broke\n"
want := "💣 error: %s%%%d\n"
if got != want {
t.Errorf("ErrStyle() = %q, want %q", got, want)
}
......
......@@ -66,8 +66,8 @@ var styles = map[string]style{
"log-entry": {Prefix: " "}, // Indent
"crushed": {Prefix: "💔 "},
"url": {Prefix: "👉 "},
"documentation": {Prefix: "🗎 "},
"issues": {Prefix: "📚 "},
"documentation": {Prefix: "📘 "},
"issues": {Prefix: "⁉️ "},
"issue": {Prefix: " ▪ "}, // Indented bullet
"check": {Prefix: "✔️ "},
......
......@@ -154,7 +154,13 @@ var DefaultISOURL = fmt.Sprintf("https://storage.googleapis.com/%s/minikube-%s.i
var DefaultISOSHAURL = DefaultISOURL + SHASuffix
// DefaultKubernetesVersion is the default kubernetes version
var DefaultKubernetesVersion = "v1.13.4"
var DefaultKubernetesVersion = "v1.14.0"
// NewestKubernetesVersion is the newest Kubernetes version to test against
var NewestKubernetesVersion = "v1.14.0"
// OldestKubernetesVersion is the oldest Kubernetes version to test against
var OldestKubernetesVersion = "v1.10.13"
// ConfigFilePath is the path of the config directory
var ConfigFilePath = MakeMiniPath("config")
......
......@@ -445,9 +445,6 @@ func TestContainerFunctions(t *testing.T) {
"fgh1": prefix + "coredns",
"xyz2": prefix + "storage",
}
if tc.runtime == "docker" {
runner.containers["zzz"] = "unrelated"
}
cr, err := New(Config{Type: tc.runtime, Runner: runner})
if err != nil {
t.Fatalf("New(%s): %v", tc.runtime, err)
......
......@@ -24,9 +24,6 @@ import (
"github.com/golang/glog"
)
// KubernetesContainerPrefix is the prefix of each kubernetes container
const KubernetesContainerPrefix = "k8s_"
// Docker contains Docker runtime state
type Docker struct {
Socket string
......@@ -99,7 +96,6 @@ func (r *Docker) KubeletOptions() map[string]string {
// ListContainers returns a list of containers
func (r *Docker) ListContainers(filter string) ([]string, error) {
filter = KubernetesContainerPrefix + filter
content, err := r.Runner.CombinedOutput(fmt.Sprintf(`docker ps -a --filter="name=%s" --format="{{.ID}}"`, filter))
if err != nil {
return nil, err
......
......@@ -73,7 +73,7 @@ func WithProblem(msg string, p *problem.Problem) {
console.Fatal(msg)
p.Display()
console.Err("\n")
console.ErrStyle("sad", "If the advice does not help, please let us know: ")
console.ErrStyle("sad", "If the above advice does not help, please let us know: ")
console.ErrStyle("url", "https://github.com/kubernetes/minikube/issues/new")
os.Exit(Config)
}
......
......@@ -35,12 +35,18 @@ import (
// rootCauseRe is a regular expression that matches known failure root causes
var rootCauseRe = regexp.MustCompile(`^error: |eviction manager: pods.* evicted|unknown flag: --|forbidden.*no providers available|eviction manager:.*evicted`)
// ignoreCauseRe is a regular expression that matches spurious errors that should not be surfaced
var ignoreCauseRe = regexp.MustCompile("error: no objects passed to apply")
// importantPods are a list of pods to retrieve logs for, in addition to the bootstrapper logs.
var importantPods = []string{
"kube-apiserver",
"coredns",
"kube-scheduler",
"kube-proxy",
"kube-addon-manager",
"kubernetes-dashboard",
"storage-provisioner",
}
// lookbackwardsCount is how far back to look in a log for problems. This should be large enough to
......@@ -59,7 +65,7 @@ func Follow(r cruntime.Manager, bs bootstrapper.Bootstrapper, runner bootstrappe
// IsProblem returns whether this line matches a known problem
func IsProblem(line string) bool {
return rootCauseRe.MatchString(line)
return rootCauseRe.MatchString(line) && !ignoreCauseRe.MatchString(line)
}
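A quick sketch (fmt imported) of how the combined check classifies two lines taken from the test table in this change:

func isProblemExamples() {
    fmt.Println(IsProblem("error: unknown flag: --GenericServerRunOptions.AdmissionControl")) // true: matches rootCauseRe
    fmt.Println(IsProblem("error: no objects passed to apply"))                               // false: filtered out by ignoreCauseRe
}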
// FindProblems finds possible root causes among the logs
......
......@@ -33,6 +33,7 @@ func TestIsProblem(t *testing.T) {
{"apiserver-auth-mode #2852", true, `{"log":"Error: unknown flag: --Authorization.Mode\n","stream":"stderr","time":"2018-06-17T22:16:35.134161966Z"}`},
{"apiserver-admission #3524", true, "error: unknown flag: --GenericServerRunOptions.AdmissionControl"},
{"no-providers-available #3818", true, ` kubelet.go:1662] Failed creating a mirror pod for "kube-apiserver-minikube_kube-system(c7d572aebd3d33b17fa78ae6395b6d0a)": pods "kube-apiserver-minikube" is forbidden: no providers available to validate pod request`},
{"no-objects-passed-to-apply #4010", false, "error: no objects passed to apply"},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
......
......@@ -97,7 +97,8 @@ func LoadImages(cmd bootstrapper.CommandRunner, images []string, cacheDir string
g.Go(func() error {
src := filepath.Join(cacheDir, image)
src = sanitizeCacheDir(src)
if err := LoadFromCacheBlocking(cmd, cc.KubernetesConfig, src); err != nil {
if err := loadImageFromCache(cmd, cc.KubernetesConfig, src); err != nil {
glog.Warningf("Failed to load %s: %v", src, err)
return errors.Wrapf(err, "loading image %s", src)
}
return nil
......@@ -198,14 +199,12 @@ func getWindowsVolumeNameCmd(d string) (string, error) {
return vname, nil
}
// LoadFromCacheBlocking loads images from cache, blocking until loaded
func LoadFromCacheBlocking(cr bootstrapper.CommandRunner, k8s config.KubernetesConfig, src string) error {
glog.Infoln("Loading image from cache at ", src)
// loadImageFromCache loads a single image from the cache
func loadImageFromCache(cr bootstrapper.CommandRunner, k8s config.KubernetesConfig, src string) error {
glog.Infof("Loading image from cache: %s", src)
filename := filepath.Base(src)
for {
if _, err := os.Stat(src); err == nil {
break
}
if _, err := os.Stat(src); err != nil {
return err
}
dst := path.Join(tempLoadDir, filename)
f, err := assets.NewFileAsset(src, tempLoadDir, filename, "0777")
......
......@@ -43,7 +43,7 @@ var vmProblems = map[string]match{
},
"KVM2_NO_IP": {
Regexp: re(`Error starting stopped host: Machine didn't return an IP after 120 seconds`),
Advice: "The KVM driver is unable to ressurect this old VM. Please run `minikube delete` to delete it and try again.",
Advice: "The KVM driver is unable to resurrect this old VM. Please run `minikube delete` to delete it and try again.",
Issues: []int{3901, 3566, 3434},
},
"VM_DOES_NOT_EXIST": {
......
......@@ -23,7 +23,7 @@ import (
"k8s.io/minikube/pkg/minikube/console"
)
const issueBase = "https://github.com/kubernetes/minikube/issue"
const issueBase = "https://github.com/kubernetes/minikube/issues"
// Problem represents a known problem in minikube.
type Problem struct {
......
......@@ -170,29 +170,36 @@ func printURLsForService(c corev1.CoreV1Interface, ip, service, namespace string
if err != nil {
return nil, errors.Wrapf(err, "service '%s' could not be found running", service)
}
var nodePorts []int32
if len(svc.Spec.Ports) > 0 {
for _, port := range svc.Spec.Ports {
if port.NodePort > 0 {
nodePorts = append(nodePorts, port.NodePort)
e := c.Endpoints(namespace)
endpoints, err := e.Get(service, metav1.GetOptions{})
m := make(map[int32]string)
if endpoints != nil && len(endpoints.Subsets) > 0 {
for _, ept := range endpoints.Subsets {
for _, p := range ept.Ports {
m[int32(p.Port)] = p.Name
}
}
}
urls := []string{}
for _, port := range nodePorts {
var doc bytes.Buffer
err = t.Execute(&doc, struct {
IP string
Port int32
}{
ip,
port,
})
if err != nil {
return nil, err
for _, port := range svc.Spec.Ports {
if port.NodePort > 0 {
var doc bytes.Buffer
err = t.Execute(&doc, struct {
IP string
Port int32
Name string
}{
ip,
port.NodePort,
m[port.TargetPort.IntVal],
})
if err != nil {
return nil, err
}
urls = append(urls, doc.String())
}
urls = append(urls, doc.String())
}
return urls, nil
}
......
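The rewrite above builds a map from each endpoint target port to its port name and passes that name alongside the IP and NodePort into the URL template, so templates may now reference `{{.Name}}` in addition to `{{.IP}}` and `{{.Port}}` (the test below exercises `{{.Name}}={{.IP}}:{{.Port}}`). A minimal sketch of the template side, with illustrative values only:

```go
// Minimal sketch of the data the rewritten printURLsForService feeds into the
// template: IP and Port come from the service's NodePort, Name from the
// matching endpoint port. The literal values below are illustrative only.
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

func main() {
	t := template.Must(template.New("svc").Parse("{{.Name}}={{.IP}}:{{.Port}}"))

	var doc bytes.Buffer
	err := t.Execute(&doc, struct {
		IP   string
		Port int32
		Name string
	}{"127.0.0.1", 1111, "port1"})
	if err != nil {
		panic(err)
	}
	fmt.Println(doc.String()) // port1=127.0.0.1:1111
}
```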
......@@ -27,6 +27,7 @@ import (
"github.com/pkg/errors"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/client-go/kubernetes"
corev1 "k8s.io/client-go/kubernetes/typed/core/v1"
"k8s.io/client-go/kubernetes/typed/core/v1/fake"
......@@ -36,12 +37,14 @@ import (
)
type MockClientGetter struct {
servicesMap map[string]corev1.ServiceInterface
servicesMap map[string]corev1.ServiceInterface
endpointsMap map[string]corev1.EndpointsInterface
}
func (m *MockClientGetter) GetCoreClient() (corev1.CoreV1Interface, error) {
return &MockCoreClient{
servicesMap: m.servicesMap,
servicesMap: m.servicesMap,
endpointsMap: m.endpointsMap,
}, nil
}
......@@ -51,7 +54,8 @@ func (m *MockClientGetter) GetClientset(timeout time.Duration) (*kubernetes.Clie
type MockCoreClient struct {
fake.FakeCoreV1
servicesMap map[string]corev1.ServiceInterface
servicesMap map[string]corev1.ServiceInterface
endpointsMap map[string]corev1.EndpointsInterface
}
var serviceNamespaces = map[string]corev1.ServiceInterface{
......@@ -68,8 +72,18 @@ var defaultNamespaceServiceInterface = &MockServiceInterface{
},
Spec: v1.ServiceSpec{
Ports: []v1.ServicePort{
{NodePort: int32(1111)},
{NodePort: int32(2222)},
{
NodePort: int32(1111),
TargetPort: intstr.IntOrString{
IntVal: int32(11111),
},
},
{
NodePort: int32(2222),
TargetPort: intstr.IntOrString{
IntVal: int32(22222),
},
},
},
},
},
......@@ -86,8 +100,14 @@ var defaultNamespaceServiceInterface = &MockServiceInterface{
},
}
var endpointNamespaces = map[string]corev1.EndpointsInterface{
"default": defaultNamespaceEndpointInterface,
}
var defaultNamespaceEndpointInterface = &MockEndpointsInterface{}
func (m *MockCoreClient) Endpoints(namespace string) corev1.EndpointsInterface {
return &MockEndpointsInterface{}
return m.endpointsMap[namespace]
}
func (m *MockCoreClient) Services(namespace string) corev1.ServiceInterface {
......@@ -124,6 +144,22 @@ var endpointMap = map[string]*v1.Endpoints{
},
},
},
"mock-dashboard": {
Subsets: []v1.EndpointSubset{
{
Ports: []v1.EndpointPort{
{
Name: "port1",
Port: int32(11111),
},
{
Name: "port2",
Port: int32(22222),
},
},
},
},
},
}
func (e MockEndpointsInterface) Get(name string, _ metav1.GetOptions) (*v1.Endpoints, error) {
......@@ -195,7 +231,8 @@ func TestGetServiceListFromServicesByLabel(t *testing.T) {
func TestPrintURLsForService(t *testing.T) {
defaultTemplate := template.Must(template.New("svc-template").Parse("http://{{.IP}}:{{.Port}}"))
client := &MockCoreClient{
servicesMap: serviceNamespaces,
servicesMap: serviceNamespaces,
endpointsMap: endpointNamespaces,
}
var tests = []struct {
description string
......@@ -219,6 +256,13 @@ func TestPrintURLsForService(t *testing.T) {
tmpl: template.Must(template.New("svc-arbitrary-template").Parse("{{.IP}}:{{.Port}}")),
expectedOutput: []string{"127.0.0.1:1111", "127.0.0.1:2222"},
},
{
description: "should get the name of all target ports with arbitrary format",
serviceName: "mock-dashboard",
namespace: "default",
tmpl: template.Must(template.New("svc-arbitrary-template").Parse("{{.Name}}={{.IP}}:{{.Port}}")),
expectedOutput: []string{"port1=127.0.0.1:1111", "port2=127.0.0.1:2222"},
},
{
description: "empty slice for no node ports",
serviceName: "mock-dashboard-no-ports",
......@@ -361,7 +405,8 @@ func TestGetServiceURLs(t *testing.T) {
t.Parallel()
K8s = &MockClientGetter{
servicesMap: serviceNamespaces,
servicesMap: serviceNamespaces,
endpointsMap: endpointNamespaces,
}
urls, err := GetServiceURLs(test.api, test.namespace, defaultTemplate)
if err != nil && !test.err {
......@@ -428,7 +473,8 @@ func TestGetServiceURLsForService(t *testing.T) {
t.Run(test.description, func(t *testing.T) {
t.Parallel()
K8s = &MockClientGetter{
servicesMap: serviceNamespaces,
servicesMap: serviceNamespaces,
endpointsMap: endpointNamespaces,
}
urls, err := GetServiceURLsForService(test.api, test.namespace, test.service, defaultTemplate)
if err != nil && !test.err {
......
......@@ -21,6 +21,7 @@ import (
"fmt"
"path"
"path/filepath"
"strings"
"text/template"
"time"
......@@ -47,6 +48,9 @@ type BuildrootProvisioner struct {
provision.SystemdProvisioner
}
// for escaping systemd template specifiers (e.g. '%i'), which are not supported by minikube
var systemdSpecifierEscaper = strings.NewReplacer("%", "%%")
func init() {
provision.Register("Buildroot", &provision.RegisteredProvisioner{
New: NewBuildrootProvisioner,
......@@ -64,6 +68,17 @@ func (p *BuildrootProvisioner) String() string {
return "buildroot"
}
// escapeSystemdDirectives escapes special characters in the input variables used to create the
// systemd unit file, which would otherwise be interpreted as systemd directives. An example
// is template specifiers (e.g. '%i'), which are predefined variables that get evaluated dynamically
// (see the systemd man pages for more info). These are not supported by minikube and thus need to be escaped.
func escapeSystemdDirectives(engineConfigContext *provision.EngineConfigContext) {
// escape '%' in Environment option so that it does not evaluate into a template specifier
engineConfigContext.EngineOptions.Env = util.ReplaceChars(engineConfigContext.EngineOptions.Env, systemdSpecifierEscaper)
// the input might contain whitespace, so wrap each value in quotes
engineConfigContext.EngineOptions.Env = util.ConcatStrings(engineConfigContext.EngineOptions.Env, "\"", "\"")
}
// GenerateDockerOptions generates the *provision.DockerOptions for this provisioner
func (p *BuildrootProvisioner) GenerateDockerOptions(dockerPort int) (*provision.DockerOptions, error) {
var engineCfg bytes.Buffer
......@@ -127,6 +142,8 @@ WantedBy=multi-user.target
EngineOptions: p.EngineOptions,
}
escapeSystemdDirectives(&engineConfigContext)
if err := t.Execute(&engineCfg, engineConfigContext); err != nil {
return nil, err
}
......
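`escapeSystemdDirectives` above runs the engine's Environment values through `util.ReplaceChars` (doubling every '%' so systemd does not interpret it as a template specifier) and `util.ConcatStrings` (quoting values that may contain whitespace) before the unit-file template is rendered. A self-contained sketch of the same transformation, inlining those two helpers; the sample environment values are illustrative, not taken from a real configuration:

```go
// Minimal sketch of the escaping applied before the systemd unit template is
// rendered: '%' is doubled to '%%' and each value is wrapped in quotes.
package main

import (
	"fmt"
	"strings"
)

var systemdSpecifierEscaper = strings.NewReplacer("%", "%%")

func main() {
	env := []string{"HTTP_PROXY=http://proxy:3128", "FOO=50%i"} // illustrative values

	// Like util.ReplaceChars: apply the replacer to every element.
	escaped := make([]string, len(env))
	for i, s := range env {
		escaped[i] = systemdSpecifierEscaper.Replace(s)
	}

	// Like util.ConcatStrings: wrap each element in quotes.
	for i, s := range escaped {
		escaped[i] = "\"" + s + "\""
	}

	fmt.Println(escaped) // ["HTTP_PROXY=http://proxy:3128" "FOO=50%%i"]
}
```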
......@@ -251,7 +251,7 @@ func UpdateKubeconfigIP(ip net.IP, filename string, machineName string) (bool, e
if kip.Equal(ip) {
return false, nil
}
kport, err := getPortFromKubeConfig(filename, machineName)
kport, err := GetPortFromKubeConfig(filename, machineName)
if err != nil {
return false, err
}
......@@ -291,8 +291,8 @@ func getIPFromKubeConfig(filename, machineName string) (net.IP, error) {
return ip, nil
}
// getPortFromKubeConfig returns the Port number stored for minikube in the kubeconfig specified
func getPortFromKubeConfig(filename, machineName string) (int, error) {
// GetPortFromKubeConfig returns the Port number stored for minikube in the kubeconfig specified
func GetPortFromKubeConfig(filename, machineName string) (int, error) {
con, err := ReadConfigOrNew(filename)
if err != nil {
return 0, errors.Wrap(err, "Error getting kubeconfig status")
......
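The hunk above exports the former `getPortFromKubeConfig` as `GetPortFromKubeConfig` so it can be called from outside the package, and `UpdateKubeconfigIP` is updated to use the new name. A hedged sketch of an external caller follows; the import path and the "minikube" context name are assumptions for illustration, not confirmed by the diff:

```go
// Hedged sketch of calling the newly exported helper from another package.
// The import path and machine name below are assumptions; adjust them to the
// actual package location and context name in your tree.
package main

import (
	"fmt"
	"log"

	kubeconfig "k8s.io/minikube/pkg/util/kubeconfig"
)

func main() {
	port, err := kubeconfig.GetPortFromKubeConfig("/home/user/.kube/config", "minikube")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver port for minikube: %d\n", port)
}
```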
......@@ -250,3 +250,26 @@ func TeePrefix(prefix string, r io.Reader, w io.Writer, logger func(format strin
}
return nil
}
// ReplaceChars returns a copy of the src slice with each string modified by the replacer
func ReplaceChars(src []string, replacer *strings.Replacer) []string {
ret := make([]string, len(src))
for i, s := range src {
ret[i] = replacer.Replace(s)
}
return ret
}
// ConcatStrings concatenates each string in the src slice with prefix and postfix and returns a new slice
func ConcatStrings(src []string, prefix string, postfix string) []string {
var buf bytes.Buffer
ret := make([]string, len(src))
for i, s := range src {
buf.WriteString(prefix)
buf.WriteString(s)
buf.WriteString(postfix)
ret[i] = buf.String()
buf.Reset()
}
return ret
}
......@@ -197,3 +197,43 @@ func TestTeePrefix(t *testing.T) {
t.Errorf("log=%q, want: %q", gotLog, wantLog)
}
}
func TestReplaceChars(t *testing.T) {
testData := []struct {
src []string
replacer *strings.Replacer
expectedRes []string
}{
{[]string{"abc%def", "%Y%"}, strings.NewReplacer("%", "X"), []string{"abcXdef", "XYX"}},
}
for _, tt := range testData {
res := ReplaceChars(tt.src, tt.replacer)
for i, val := range res {
if val != tt.expectedRes[i] {
t.Fatalf("Expected '%s' but got '%s'", tt.expectedRes, res)
}
}
}
}
func TestConcatStrings(t *testing.T) {
testData := []struct {
src []string
prefix string
postfix string
expectedRes []string
}{
{[]string{"abc", ""}, "xx", "yy", []string{"xxabcyy", "xxyy"}},
{[]string{"abc", ""}, "", "", []string{"abc", ""}},
}
for _, tt := range testData {
res := ConcatStrings(tt.src, tt.prefix, tt.postfix)
for i, val := range res {
if val != tt.expectedRes[i] {
t.Fatalf("Expected '%s' but got '%s'", tt.expectedRes, res)
}
}
}
}
......@@ -56,6 +56,8 @@ func testMounting(t *testing.T) {
} else {
mountCmd = fmt.Sprintf("mount %s:/mount-9p", tempDir)
}
t.Logf("Starting mount: %s", mountCmd)
cmd, _, _ := minikubeRunner.RunDaemon2(mountCmd)
defer func() {
err := cmd.Process.Kill()
......@@ -81,12 +83,14 @@ func testMounting(t *testing.T) {
// Create the pods we need outside the main test loop.
setupTest := func() error {
t.Logf("Deploying pod from: %s", podPath)
if _, err := kubectlRunner.RunCommand([]string{"create", "-f", podPath}); err != nil {
return err
}
return nil
}
defer func() {
t.Logf("Deleting pod from: %s", podPath)
if out, err := kubectlRunner.RunCommand([]string{"delete", "-f", podPath}); err != nil {
t.Logf("delete -f %s failed: %v\noutput: %s\n", podPath, err, out)
}
......@@ -104,6 +108,7 @@ func testMounting(t *testing.T) {
if err := pkgutil.WaitForPodsWithLabelRunning(client, "default", selector); err != nil {
t.Fatalf("Error waiting for busybox mount pod to be up: %v", err)
}
t.Logf("Pods appear to be running")
mountTest := func() error {
path := filepath.Join(tempDir, "frompod")
......@@ -161,5 +166,4 @@ func testMounting(t *testing.T) {
if err := util.Retry(t, mountTest, 5*time.Second, 40); err != nil {
t.Fatalf("mountTest failed with error: %v", err)
}
}
......@@ -19,12 +19,14 @@ limitations under the License.
package integration
import (
"fmt"
"net"
"strings"
"testing"
"time"
"github.com/docker/machine/libmachine/state"
"k8s.io/minikube/pkg/minikube/constants"
"k8s.io/minikube/test/integration/util"
)
......@@ -33,10 +35,27 @@ func TestStartStop(t *testing.T) {
name string
args []string
}{
{"docker+cache", []string{"--container-runtime=docker", "--cache-images"}},
{"docker+cache+ignore_verifications", []string{"--container-runtime=docker", "--cache-images", "--extra-config", "kubeadm.ignore-preflight-errors=SystemVerification"}},
{"containerd+cache", []string{"--container-runtime=containerd", "--docker-opt containerd=/var/run/containerd/containerd.sock", "--cache-images"}},
{"crio+cache", []string{"--container-runtime=crio", "--cache-images"}},
{"nocache_oldest", []string{
"--cache-images=false",
fmt.Sprintf("--kubernetes-version=%s", constants.OldestKubernetesVersion),
}},
{"feature_gates_newest_cni", []string{
"--feature-gates",
"ServerSideApply=true",
"--network-plugin=cni",
"--extra-config=kubelet.network-plugin=cni",
fmt.Sprintf("--kubernetes-version=%s", constants.NewestKubernetesVersion),
}},
{"containerd_and_non_default_apiserver_port", []string{
"--container-runtime=containerd",
"--docker-opt containerd=/var/run/containerd/containerd.sock",
"--apiserver-port=8444",
}},
{"crio_ignore_preflights", []string{
"--container-runtime=crio",
"--extra-config",
"kubeadm.ignore-preflight-errors=SystemVerification",
}},
}
for _, test := range tests {
......
Copyright (c) 2013, Patrick Mezard
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
The names of its contributors may not be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.