diff --git a/.markdownlint.json b/.markdownlint.json index a317bea90b322ef3d70b619e98d9ce50ff299aeb..3e6170a22564308b044ee79acbe5a14e6566def8 100644 --- a/.markdownlint.json +++ b/.markdownlint.json @@ -1,6 +1,8 @@ { "no-inline-html": false, "no-trailing-punctuation": false, + "blanks-around-fences": false, + "commands-show-output": false, "ul-style": false, - "line_length": false + "line-length": false } diff --git a/docs/README.md b/docs/README.md index b41fc305f5541dd415e1aae98ef1a3fd2f23dda2..33fe2ce50372b589162365b87e2ada6eb84f377a 100644 --- a/docs/README.md +++ b/docs/README.md @@ -1,6 +1,6 @@ -## Advanced Topics and Tutorials +# Advanced Topics and Tutorials -### Cluster Configuration +## Cluster Configuration * **Alternative Runtimes** ([alternative_runtimes.md](alternative_runtimes.md)): How to run minikube with rkt as the container runtime diff --git a/docs/accessing_etcd.md b/docs/accessing_etcd.md index 0eae3e5597cd2d321c752d1ad0dff7f91258253f..80bd17f5842f5abf6e2df1d361a5b03c3bd42896 100644 --- a/docs/accessing_etcd.md +++ b/docs/accessing_etcd.md @@ -1,6 +1,9 @@ -## Accessing Host Resources From Inside A Pod -### When you have a VirtualBox driver +# Accessing Host Resources From Inside A Pod + +## When you have a VirtualBox driver + In order to access host resources from inside a pod, run the following command to determine the host IP you can use: + ```shell ip addr ``` diff --git a/docs/addons.md b/docs/addons.md index 7c83db3ab41d4247ea10e34562709927ce7026f3..e8430beaea11452cc3e578dc9fccd35bb34c5981 100644 --- a/docs/addons.md +++ b/docs/addons.md @@ -1,6 +1,7 @@ -## Add-ons +# Add-ons Minikube has a set of built in addons that can be used enabled, disabled, and opened inside of the local k8s environment. Below is an example of this functionality for the `heapster` addon: + ```shell $ minikube addons list - registry: disabled @@ -26,6 +27,7 @@ Waiting, endpoint for service is not ready yet... Waiting, endpoint for service is not ready yet... Created new window in existing browser session. ``` + The currently supported addons include: * [Kubernetes Dashboard](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard) diff --git a/docs/alternative_runtimes.md b/docs/alternative_runtimes.md index 45e40212e3404d81138cb6ce37ce3344ad9c56d2..4771fe0278a35569a42eecaa8deb9e1d827024db 100644 --- a/docs/alternative_runtimes.md +++ b/docs/alternative_runtimes.md @@ -1,4 +1,6 @@ -### Using rkt container engine +# Alternative runtimes + +## Using rkt container engine To use [rkt](https://github.com/coreos/rkt) as the container runtime run: @@ -6,8 +8,7 @@ To use [rkt](https://github.com/coreos/rkt) as the container runtime run: $ minikube start --container-runtime=rkt ``` - -### Using CRI-O +## Using CRI-O To use [CRI-O](https://github.com/kubernetes-sigs/cri-o) as the container runtime, run: @@ -27,7 +28,7 @@ $ minikube start --container-runtime=cri-o \ --extra-config=kubelet.image-service-endpoint=unix:///var/run/crio/crio.sock ``` -### Using containerd +## Using containerd To use [containerd](https://github.com/containerd/containerd) as the container runtime, run: diff --git a/docs/cache.md b/docs/cache.md index 6026c52574f16e72725712af494dcd851ce65f8a..f0efcc2950614754a1d5c0163b5c1ab6b3ffdf0b 100644 --- a/docs/cache.md +++ b/docs/cache.md @@ -1,4 +1,4 @@ -## Caching Images +# Caching Images Minikube supports caching non-minikube images using the `minikube cache` command. 
Images can be added to the cache by running `minikube cache add `, and deleted by running `minikube cache delete `. diff --git a/docs/configuring_kubernetes.md b/docs/configuring_kubernetes.md index 60d9761f934e0aeeed332f2147e31374e5204b0f..fead924028903ac5598abd71f5872ddec296b683 100644 --- a/docs/configuring_kubernetes.md +++ b/docs/configuring_kubernetes.md @@ -1,11 +1,11 @@ -## Configuring Kubernetes +# Configuring Kubernetes Minikube has a "configurator" feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the `--extra-config` flag on the `minikube start` command. This flag is repeated, so you can pass it several times with several different values to set multiple options. -### Kubeadm bootstrapper +## Kubeadm bootstrapper The kubeadm bootstrapper can be configured by the `--extra-config` flag on the `minikube start` command. It takes a string of the form `component.key=value` where `component` is one of the strings diff --git a/docs/contributors/README.md b/docs/contributors/README.md index 74138c75196939dcf0385ee4ca0d89b242a1aba6..0e17ea28d71ffc86750c6891119b780f8d8cf54c 100644 --- a/docs/contributors/README.md +++ b/docs/contributors/README.md @@ -1,10 +1,11 @@ -## Contributing +# Contributing * **New contributors** ([contributors.md](https://github.com/kubernetes/minikube/blob/master/CONTRIBUTING.md)): Process for new contributors, CLA instructions * **Roadmap** ([roadmap.md](roadmap.md)): The roadmap for future minikube development ## New Features and Dependencies + * **Adding a dependency** ([adding_a_dependency.md](adding_a_dependency.md)): How to add or update vendored code * **Adding a new addon** ([adding_an_addon.md](adding_an_addon.md)): How to add a new addon to minikube for `minikube addons` @@ -12,6 +13,7 @@ * **Adding a new driver** ([adding_driver.md](adding_driver.md)): How to add a new driver to minikube for `minikube create --vm-driver=` ## Building and Releasing + * **Build Guide** ([build_guide.md](build_guide.md)): How to build minikube from source * **ISO Build Guide** ([minikube_iso.md](minikube_iso.md)): How to build and hack on the ISO image that minikube uses diff --git a/docs/contributors/adding_a_dependency.md b/docs/contributors/adding_a_dependency.md index d758b16d6c5f486848b8d25efaf4ccc27bea3e01..39ce272405671d17bbf66a798379dcab3b8bfb55 100644 --- a/docs/contributors/adding_a_dependency.md +++ b/docs/contributors/adding_a_dependency.md @@ -1,4 +1,5 @@ -#### Adding a New Dependency +# Adding a New Dependency + Minikube uses `dep` to manage vendored dependencies. See the `dep` [documentation](https://golang.github.io/dep/docs/introduction.html) for installation and usage instructions. 
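A minimal sketch of the typical `dep` workflow described above (the package path `github.com/pkg/errors` is only a placeholder, not an actual minikube dependency change):

```shell
# Add a new dependency to Gopkg.toml and vendor/ (package path is just an example)
dep ensure -add github.com/pkg/errors

# Re-sync vendor/ after editing Gopkg.toml or the project's imports
dep ensure

# Review the current state of all vendored dependencies
dep status
```

Commit the resulting `Gopkg.toml`, `Gopkg.lock`, and `vendor/` changes together with the code that uses the new dependency.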
diff --git a/docs/contributors/adding_an_addon.md b/docs/contributors/adding_an_addon.md index 0010e343248e17df4f465efe0f997a4f7b2e27fc..fff6bc005b666a55a829e84fd1075d271ba0dfe0 100644 --- a/docs/contributors/adding_an_addon.md +++ b/docs/contributors/adding_an_addon.md @@ -1,4 +1,5 @@ -#### Adding a New Addon +# Adding a New Addon + To add a new addon to minikube the following steps are required: * For the new addon's .yaml file(s): @@ -15,12 +16,12 @@ To add a new addon to minikube the following steps are required: var settings = []Setting{ ..., // add other addon setting - { - name: "efk", - set: SetBool, - validations: []setFn{IsValidAddon}, - callbacks: []setFn{EnableOrDisableAddon}, - }, + { + name: "efk", + set: SetBool, + validations: []setFn{IsValidAddon}, + callbacks: []setFn{EnableOrDisableAddon}, + }, } ``` @@ -32,22 +33,22 @@ To add a new addon to minikube the following steps are required: ..., // add other addon asset "efk": NewAddon([]*BinDataAsset{ - NewBinDataAsset( - "deploy/addons/efk/efk-configmap.yaml", - constants.AddonsPath, - "efk-configmap.yaml", - "0640"), - NewBinDataAsset( - "deploy/addons/efk/efk-rc.yaml", - constants.AddonsPath, - "efk-rc.yaml", - "0640"), - NewBinDataAsset( - "deploy/addons/efk/efk-svc.yaml", - constants.AddonsPath, - "efk-svc.yaml", - "0640"), - }, false, "efk"), + NewBinDataAsset( + "deploy/addons/efk/efk-configmap.yaml", + constants.AddonsPath, + "efk-configmap.yaml", + "0640"), + NewBinDataAsset( + "deploy/addons/efk/efk-rc.yaml", + constants.AddonsPath, + "efk-rc.yaml", + "0640"), + NewBinDataAsset( + "deploy/addons/efk/efk-svc.yaml", + constants.AddonsPath, + "efk-svc.yaml", + "0640"), + }, false, "efk"), } ``` diff --git a/docs/contributors/adding_driver.md b/docs/contributors/adding_driver.md index b06318e708c9c7c06bcb62c9633ec4b34f4b8a95..3d9b0d5f668923f7fd7de02d4685b04c332b4bde 100644 --- a/docs/contributors/adding_driver.md +++ b/docs/contributors/adding_driver.md @@ -1,6 +1,6 @@ # Adding new driver (Deprecated) -New drivers should be added into https://github.com/machine-drivers +New drivers should be added into Minikube relies on docker machine drivers to manage machines. This document talks about how to add an existing docker machine driver into minikube registry, so that minikube can use the driver @@ -26,7 +26,7 @@ Registry is what minikube uses to register all the supported drivers. The driver their drivers in registry, and minikube runtime will look at the registry to find a driver and use the driver metadata to determine what workflow to apply while those drivers are being used. -The godoc of registry is available here: https://godoc.org/k8s.io/minikube/pkg/minikube/registry +The godoc of registry is available here: [DriverDef](https://godoc.org/k8s.io/minikube/pkg/minikube/registry#DriverDef) is the main struct to define a driver metadata. Essentially, you need to define 4 things at most, which is @@ -52,33 +52,33 @@ All drivers are located in `k8s.io/minikube/pkg/minikube/drivers`. 
Take `vmwaref package vmwarefusion import ( - "github.com/docker/machine/drivers/vmwarefusion" - "github.com/docker/machine/libmachine/drivers" - cfg "k8s.io/minikube/pkg/minikube/config" - "k8s.io/minikube/pkg/minikube/constants" - "k8s.io/minikube/pkg/minikube/registry" + "github.com/docker/machine/drivers/vmwarefusion" + "github.com/docker/machine/libmachine/drivers" + cfg "k8s.io/minikube/pkg/minikube/config" + "k8s.io/minikube/pkg/minikube/constants" + "k8s.io/minikube/pkg/minikube/registry" ) func init() { - registry.Register(registry.DriverDef{ - Name: "vmwarefusion", - Builtin: true, - ConfigCreator: createVMwareFusionHost, - DriverCreator: func() drivers.Driver { - return vmwarefusion.NewDriver("", "") - }, - }) + registry.Register(registry.DriverDef{ + Name: "vmwarefusion", + Builtin: true, + ConfigCreator: createVMwareFusionHost, + DriverCreator: func() drivers.Driver { + return vmwarefusion.NewDriver("", "") + }, + }) } func createVMwareFusionHost(config cfg.MachineConfig) interface{} { - d := vmwarefusion.NewDriver(cfg.GetMachineName(), constants.GetMinipath()).(*vmwarefusion.Driver) - d.Boot2DockerURL = config.Downloader.GetISOFileURI(config.MinikubeISO) - d.Memory = config.Memory - d.CPU = config.CPUs - d.DiskSize = config.DiskSize - d.SSHPort = 22 - d.ISO = d.ResolveStorePath("boot2docker.iso") - return d + d := vmwarefusion.NewDriver(cfg.GetMachineName(), constants.GetMinipath()).(*vmwarefusion.Driver) + d.Boot2DockerURL = config.Downloader.GetISOFileURI(config.MinikubeISO) + d.Memory = config.Memory + d.CPU = config.CPUs + d.DiskSize = config.DiskSize + d.SSHPort = 22 + d.ISO = d.ResolveStorePath("boot2docker.iso") + return d } ``` @@ -98,4 +98,3 @@ In summary, the process includes the following steps: 2. Add import in `pkg/minikube/cluster/default_drivers.go` Any Questions: please ping your friend [@anfernee](https://github.com/anfernee) - diff --git a/docs/contributors/build_guide.md b/docs/contributors/build_guide.md index b80068358e9b07afd05acdf8d944419b05fc5302..67a12be71a97477a9d4103745b0283cb1a0460f1 100644 --- a/docs/contributors/build_guide.md +++ b/docs/contributors/build_guide.md @@ -1,20 +1,23 @@ -### Build Requirements +# Build Guide + +## Build Requirements + * A recent Go distribution (>=1.12) * If you're not on Linux, you'll need a Docker installation * minikube requires at least 4GB of RAM to compile, which can be problematic when using docker-machine -#### Prerequisites for different GNU/Linux distributions +### Prerequisites for different GNU/Linux distributions -##### Fedora -On Fedora you need to install _glibc-static_ +#### Fedora +On Fedora you need to install _glibc-static_ ```shell $ sudo dnf install -y glibc-static ``` ### Building from Source -Clone minikube into your go path under `$GOPATH/src/k8s.io` +Clone minikube into your go path under `$GOPATH/src/k8s.io` ```shell $ git clone https://github.com/kubernetes/minikube.git $GOPATH/src/k8s.io/minikube $ cd $GOPATH/src/k8s.io/minikube @@ -25,20 +28,23 @@ Note: Make sure that you uninstall any previous versions of minikube before buil from the source. 
### Building from Source in Docker (using Debian stretch image with golang)
+
Clone minikube:
```shell
$ git clone https://github.com/kubernetes/minikube.git
```
+
Build (cross compile for linux / OS X and Windows) using make:
```shell
$ cd minikube
$ docker run --rm -v "$PWD":/go/src/k8s.io/minikube -w /go/src/k8s.io/minikube golang:stretch make cross
```
+
Check "out" directory:
```shell
$ ls out/
-docker-machine-driver-hyperkit.d minikube minikube.d test.d
-docker-machine-driver-kvm2.d minikube-linux-amd64 storage-provisioner.d
+docker-machine-driver-hyperkit.d minikube minikube.d test.d
+docker-machine-driver-kvm2.d minikube-linux-amd64 storage-provisioner.d
```

### Run Instructions

@@ -77,11 +83,14 @@ You can run these against minikube by following these steps:
* Run `make quick-release` in the k8s repo.
* Start up a minikube cluster with: `minikube start`.
* Set following two environment variables:
+
```shell
export KUBECONFIG=$HOME/.kube/config
export KUBERNETES_CONFORMANCE_TEST=y
```
+
* Run the tests (from the k8s repo):
+
```shell
go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]" --check-version-skew=false
```
diff --git a/docs/contributors/ci_builds.md b/docs/contributors/ci_builds.md
index 0bfe2645b80af965487a7fe716fd38ad289bc6d0..07304aea754745fcba76940743f625773ddcca21 100644
--- a/docs/contributors/ci_builds.md
+++ b/docs/contributors/ci_builds.md
@@ -1,9 +1,11 @@
-### CI Builds
+# CI Builds

We publish CI builds of minikube, built at every Pull Request. Builds are available at (substitute in the relevant PR number):
-- https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-darwin-amd64
-- https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-linux-amd64
-- https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-windows-amd64.exe
+
+- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-darwin-amd64>
+- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-linux-amd64>
+- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-windows-amd64.exe>

We also publish CI builds of minikube-iso, built at every Pull Request that touches deploy/iso/minikube-iso. Builds are available at:
-- https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-testing.iso
+
+- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-testing.iso>
diff --git a/docs/contributors/minikube_iso.md b/docs/contributors/minikube_iso.md
index a4d2fc6782407c0d35729a4d2664a01f4e658b6f..6b4c1dd4038a95d45629bb221ec44ed7ec6f1555 100644
--- a/docs/contributors/minikube_iso.md
+++ b/docs/contributors/minikube_iso.md
@@ -1,8 +1,9 @@
-## minikube ISO image
+# minikube ISO image

This includes the configuration for an alternative bootable ISO image meant to be used in conjunction with minikube.

It includes:
+
- systemd as the init system
- rkt
- docker
@@ -13,9 +14,10 @@ It includes:

### Requirements

* Linux
+
```shell
sudo apt-get install build-essential gnupg2 p7zip-full git wget cpio python \
-    unzip bc gcc-multilib automake libtool locales
+    unzip bc gcc-multilib automake libtool locales
```

Either import your private key or generate a sign-only key using `gpg2 --gen-key`.
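For illustration, a minimal sketch of preparing the signing key mentioned above (assuming GnuPG 2.x is installed; the key file path is only a placeholder):

```shell
# Import an existing private key...
gpg2 --import /path/to/private.key

# ...or generate a new key interactively
gpg2 --gen-key

# Confirm a secret key is now available for signing
gpg2 --list-secret-keys
```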
@@ -69,7 +71,6 @@ $ git status ### Saving buildroot/kernel configuration changes - To make any kernel configuration changes and save them, execute: ```shell diff --git a/docs/contributors/principles.md b/docs/contributors/principles.md index 6a84b5a41fbb3da071adf8dd291cc47851bbcb93..f740678d6b7882fa2bf34121a71536191e92da5c 100644 --- a/docs/contributors/principles.md +++ b/docs/contributors/principles.md @@ -22,5 +22,4 @@ Here are some specific minikube features that align with our goal: ## Non-Goals * Simplifying Kubernetes production deployment experience - * Supporting all possible deployment configurations of Kubernetes like various types of storage, networking, etc. - +* Supporting all possible deployment configurations of Kubernetes like various types of storage, networking, etc. diff --git a/docs/contributors/releasing_minikube.md b/docs/contributors/releasing_minikube.md index 3a1ab83c3956ff8e799451c9ed95f35c2c73ed9a..c17fb388ef2dce044a5b8800273e4989a7d4cf9b 100644 --- a/docs/contributors/releasing_minikube.md +++ b/docs/contributors/releasing_minikube.md @@ -7,16 +7,16 @@ ## Build a new ISO -Major releases always get a new ISO. Minor bugfixes may or may not require it: check for changes in the `deploy/iso` folder. +Major releases always get a new ISO. Minor bugfixes may or may not require it: check for changes in the `deploy/iso` folder. Note: you can build the ISO using the `hack/jenkins/build_iso.sh` script locally. - * navigate to the minikube ISO jenkins job - * Ensure that you are logged in (top right) - * Click "▶️ Build with Parameters" (left) - * For `ISO_VERSION`, type in the intended release version (same as the minikube binary's version) - * For `ISO_BUCKET`, type in `minikube/iso` - * Click *Build* +* navigate to the minikube ISO jenkins job +* Ensure that you are logged in (top right) +* Click "▶️ Build with Parameters" (left) +* For `ISO_VERSION`, type in the intended release version (same as the minikube binary's version) +* For `ISO_BUCKET`, type in `minikube/iso` +* Click *Build* The build will take roughly 50 minutes. @@ -47,7 +47,7 @@ env BUILD_IN_DOCKER=y make cross checksum Once submitted, HEAD will use the new ISO. Please pay attention to test failures, as this is our integration test across platforms. If there are known acceptable failures, please add a PR comment linking to the appropriate issue. -## Update Release Notes +## Update Release Notes Run the following script to update the release notes: @@ -59,11 +59,11 @@ Merge the output into CHANGELOG.md. See [PR#3175](https://github.com/kubernetes/ ## Tag the Release -NOTE: Confirm that all release-related PR's have been submitted before doing this step. +NOTE: Confirm that all release-related PR's have been submitted before doing this step. 
Do this in a direct clone of the upstream kubernetes/minikube repository (not your fork!):

-```
+```shell
version=
git fetch
git checkout master
@@ -76,12 +76,12 @@ git push origin v$version

This step uses the git tag to publish new binaries to GCS and create a github release:

- * navigate to the minikube "Release" jenkins job
- * Ensure that you are logged in (top right)
- * Click "▶️ Build with Parameters" (left)
- * `VERSION_MAJOR`, `VERSION_MINOR`, and `VERSION_BUILD` should reflect the values in your Makefile
- * For `ISO_SHA256`, run: `gsutil cat gs://minikube/iso/minikube-v.iso.sha256`
- * Click *Build*
+* navigate to the minikube "Release" jenkins job
+* Ensure that you are logged in (top right)
+* Click "▶️ Build with Parameters" (left)
+* `VERSION_MAJOR`, `VERSION_MINOR`, and `VERSION_BUILD` should reflect the values in your Makefile
+* For `ISO_SHA256`, run: `gsutil cat gs://minikube/iso/minikube-v.iso.sha256`
+* Click *Build*

## Check releases.json

@@ -95,8 +95,8 @@ These are downstream packages that are being maintained by others and how to upg

| Package Manager | URL | TODO |
| --- | --- | --- |
-| Arch Linux AUR | https://aur.archlinux.org/packages/minikube/ | "Flag as package out-of-date"
-| Brew Cask | https://github.com/Homebrew/homebrew-cask/blob/master/Casks/minikube.rb | The release job creates a new PR in [Homebrew/homebrew-cask](https://github.com/Homebrew/homebrew-cask) with an updated version and SHA256, double check that it's created.
+| Arch Linux AUR | <https://aur.archlinux.org/packages/minikube/> | "Flag as package out-of-date"
+| Brew Cask | <https://github.com/Homebrew/homebrew-cask/blob/master/Casks/minikube.rb> | The release job creates a new PR in [Homebrew/homebrew-cask](https://github.com/Homebrew/homebrew-cask) with an updated version and SHA256, double check that it's created.

## Verification

@@ -104,7 +104,7 @@ Verify release checksums by running`make check-release`

## Update docs

-If there are major changes, please send a PR to update https://kubernetes.io/docs/setup/minikube/
+If there are major changes, please send a PR to update <https://kubernetes.io/docs/setup/minikube/>

## Announce!
diff --git a/docs/debugging.md b/docs/debugging.md
index 903cf8329f9887dbf5b7dc1b6d5031a6f5b87789..cb02133ffac8c0c184b6b6d0c1d072bedfedb03d 100644
--- a/docs/debugging.md
+++ b/docs/debugging.md
@@ -1,5 +1,7 @@
-### Debugging Issues With Minikube
+# Debugging Issues With Minikube
+
To debug issues with minikube (not *Kubernetes* but **minikube** itself), you can use the `-v` flag to see debug level info. The specified values for `-v` will do the following (the values are all encompassing in that higher values will give you all lower value outputs as well):
+
* `--v=0` will output **INFO** level logs
* `--v=1` will output **WARNING** level logs
* `--v=2` will output **ERROR** level logs
@@ -11,6 +13,5 @@ Example:

If you need to access additional tools for debugging, minikube also includes the [CoreOS toolbox](https://github.com/coreos/toolbox)
-
You can ssh into the toolbox and access these additional commands using:
`minikube ssh toolbox`
diff --git a/docs/drivers.md b/docs/drivers.md
index 2f3590de18483e305d7fe113a059d1636f466363..2d0bdd88d3116727b7d8f3f947dcc3e68838ffe0 100644
--- a/docs/drivers.md
+++ b/docs/drivers.md
@@ -14,7 +14,7 @@ the host PATH:
* [HyperV](#hyperv-driver)
* [VMware](#vmware-unified-driver)

-#### KVM2 driver
+## KVM2 driver

To install the KVM2 driver, first install and configure the prereqs:

@@ -36,14 +36,14 @@ sudo apt install libvirt-bin libvirt-daemon-system qemu-kvm
sudo yum install libvirt-daemon-kvm qemu-kvm
```

-Enable,start, and verify the libvirtd service has started.
+Enable, start, and verify the libvirtd service has started.
+
```shell
sudo systemctl enable libvirtd.service
sudo systemctl start libvirtd.service
sudo systemctl status libvirtd.service
```
-
Then you will need to add yourself to libvirt group (older distributions may use libvirtd instead)
`sudo usermod -a -G libvirt $(whoami)`

@@ -59,8 +59,7 @@ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-
&& sudo install docker-machine-driver-kvm2 /usr/local/bin/
```
-
-NOTE: Ubuntu users on a release older than 18.04, or anyone experiencing [#3206: Error creating new host: dial tcp: missing address.](https://github.com/kubernetes/minikube/issues/3206) you will need to build your own driver until [#3689](https://github.com/kubernetes/minikube/issues/3689) is resolved. Building this binary will require [Go v1.11](https://golang.org/dl/) or newer to be installed.
+NOTE: Ubuntu users on a release older than 18.04, or anyone experiencing [#3206: Error creating new host: dial tcp: missing address.](https://github.com/kubernetes/minikube/issues/3206), will need to build their own driver until [#3689](https://github.com/kubernetes/minikube/issues/3689) is resolved. Building this binary will require [Go v1.11](https://golang.org/dl/) or newer to be installed.

```shell
sudo apt install libvirt-dev
@@ -90,18 +89,17 @@ and run minikube as usual:
minikube start
```

-#### Hyperkit driver
+## Hyperkit driver

The Hyperkit driver will eventually replace the existing xhyve driver.
It is built from the minikube source tree, and uses [moby/hyperkit](http://github.com/moby/hyperkit) as a Go library.

To install the hyperkit driver via brew:
-
```shell
brew install docker-machine-driver-hyperkit
-# docker-machine-driver-hyperkit need root owner and uid
+# docker-machine-driver-hyperkit needs root owner and uid
sudo chown root:wheel /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
sudo chmod u+s /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
```
@@ -139,7 +137,7 @@ and run minikube as usual:
minikube start
```

-#### HyperV driver
+## HyperV driver

Hyper-v users may need to create a new external network switch as described [here](https://docs.docker.com/machine/drivers/hyper-v/). This step may prevent a problem in which `minikube start` hangs indefinitely, unable to ssh into the minikube virtual machine. In this case, add the `--hyperv-virtual-switch=switch-name` argument to the `minikube start` command.

@@ -150,6 +148,7 @@ To use the driver:
```shell
minikube start --vm-driver hyperv --hyperv-virtual-switch=switch-name
```
+
or, to use hyperv as a default driver:

```shell
@@ -162,12 +161,12 @@ and run minikube as usual:
minikube start
```

-#### VMware unified driver
+## VMware unified driver

The VMware unified driver will eventually replace the existing vmwarefusion driver.
The new unified driver supports both VMware Fusion (on macOS) and VMware Workstation (on Linux and Windows)

-To install the vmware unified driver, head over at https://github.com/machine-drivers/docker-machine-driver-vmware/releases and download the release for your operating system.
+To install the vmware unified driver, head over to <https://github.com/machine-drivers/docker-machine-driver-vmware/releases> and download the release for your operating system.
The driver must be:

@@ -201,4 +200,3 @@ and run minikube as usual:
```shell
minikube start
```
-
diff --git a/docs/env_vars.md b/docs/env_vars.md
index 3afe898697b2cc736088e2b58e61fada2fffedde..f531d4615e6e8ae8f4b9b474c0b713e748f8f3b0 100644
--- a/docs/env_vars.md
+++ b/docs/env_vars.md
@@ -3,7 +3,7 @@

## Config option variables

-minikube supports passing environment variables instead of flags for every value listed in `minikube config list`. This is done by passing an environment variable with the prefix `MINIKUBE_`.
+minikube supports passing environment variables instead of flags for every value listed in `minikube config list`. This is done by passing an environment variable with the prefix `MINIKUBE_`.

For example the `minikube start --iso-url="$ISO_URL"` flag can also be set by setting the `MINIKUBE_ISO_URL="$ISO_URL"` environment variable.

@@ -46,7 +46,7 @@ MINIKUBE_ENABLE_PROFILING=1 minikube start

Output:

-```
+``` text
2017/01/09 13:18:00 profile: cpu profiling enabled, /tmp/profile933201292/cpu.pprof
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
diff --git a/docs/gpu.md b/docs/gpu.md
index ab3b00c2e69fdc3d06640aa2c05ddff957a30a9d..1d2faa2dd1234fc6d1c2a05af3d24121cca23f55 100644
--- a/docs/gpu.md
+++ b/docs/gpu.md
@@ -32,35 +32,38 @@ host to the minikube VM. Doing so has a few prerequisites:

- Once you reboot the system after doing the above, you should be ready to use GPUs with kvm2. Run the following command to start minikube:
-  ```
+  ```shell
  minikube start --vm-driver kvm2 --gpu
  ```
+
  This command will check if all the above conditions are satisfied and passthrough spare GPUs found on the host to the VM.

  If this succeeded, run the following commands:
-  ```
+  ```shell
  minikube addons enable nvidia-gpu-device-plugin
  minikube addons enable nvidia-driver-installer
  ```
+
  This will install the NVIDIA driver (that works for GeForce/Quadro cards) on the VM.
-
  If everything succeeded, you should be able to see `nvidia.com/gpu` in the capacity:
-  ```
+  ```shell
  kubectl get nodes -ojson | jq .items[].status.capacity
  ```

### Where can I learn more about GPU passthrough?
+
See the excellent documentation at
-https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
+<https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF>

### Why are so many manual steps required to use GPUs with kvm2 on minikube?
+
These steps require elevated privileges which minikube doesn't run with and they are disruptive to the host, so we decided to not do them automatically.
-
## Using NVIDIA GPU on minikube on Linux with `--vm-driver=none`

NOTE: This approach used to expose GPUs here is different than the approach used
@@ -70,26 +73,28 @@ to expose GPUs with `--vm-driver=kvm2`. Please don't mix these instructions.

- Install the nvidia driver, nvidia-docker and configure docker with nvidia as the default runtime. See instructions at
-  https://github.com/NVIDIA/nvidia-docker
+  <https://github.com/NVIDIA/nvidia-docker>

- Start minikube:
-  ```
+  ```shell
  minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost
  ```

- Install NVIDIA's device plugin:
-  ```
+  ```shell
  kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.10/nvidia-device-plugin.yml
  ```
-
## Why does minikube not support NVIDIA GPUs on macOS?
+ VM drivers supported by minikube for macOS doesn't support GPU passthrough: + - [mist64/xhyve#108](https://github.com/mist64/xhyve/issues/108) - [moby/hyperkit#159](https://github.com/moby/hyperkit/issues/159) - [VirtualBox docs](http://www.virtualbox.org/manual/ch09.html#pcipassthrough) Also: + - For quite a while, all Mac hardware (both laptops and desktops) have come with Intel or AMD GPUs (and not with NVIDIA GPUs). Recently, Apple added [support for eGPUs](https://support.apple.com/en-us/HT208544), but even then all the @@ -98,8 +103,8 @@ Also: - nvidia-docker [doesn't support macOS](https://github.com/NVIDIA/nvidia-docker/issues/101) either. - ## Why does minikube not support NVIDIA GPUs on Windows? + minikube supports Windows host through Hyper-V or VirtualBox. - VirtualBox doesn't support PCI passthrough for [Windows diff --git a/docs/host_folder_mount.md b/docs/host_folder_mount.md index d19de3e827c38080810e87afdbefa26a9f27afab..92f454cbc1e2c43bb33df4d6143aeb526987bd4d 100644 --- a/docs/host_folder_mount.md +++ b/docs/host_folder_mount.md @@ -1,5 +1,5 @@ +# Mounting Host Folders -## Mounting Host Folders `minikube mount /path/to/dir/to/mount:/vm-mount-path` is the recommended way to mount directories into minikube so that they can be used in your local Kubernetes cluster. The command works on all supported platforms. Below is an example workflow for using `minikube mount`: ```shell diff --git a/docs/http_proxy.md b/docs/http_proxy.md index cbd513916486a19d24b1bc19fd853d080507e7e5..88e7a1a5589be92469c3c52b487a4d2eff78cd15 100644 --- a/docs/http_proxy.md +++ b/docs/http_proxy.md @@ -1,4 +1,4 @@ -## Using Minikube with an HTTP Proxy +# Using Minikube with an HTTP Proxy minikube requires access to the internet via HTTP, HTTPS, and DNS protocols. If a HTTP proxy is required to access the internet, you may need to pass the proxy connection information to both minikube and Docker using environment variables: @@ -17,7 +17,7 @@ One important note: If NO_PROXY is required by non-Kubernetes applications, such ### macOS and Linux -``` +```shell export HTTP_PROXY=http:// export HTTPS_PROXY=https:// export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.1/24 @@ -30,7 +30,7 @@ To make the exported variables permanent, consider adding the declarations to ~/ ### Windows -``` +```shell set HTTP_PROXY=http:// set HTTPS_PROXY=https:// set NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.1/24 @@ -45,7 +45,7 @@ To set these environment variables permanently, consider adding these to your [s ### unable to cache ISO... 
connection refused -``` +```text Unable to start VM: unable to cache ISO: https://storage.googleapis.com/minikube/iso/minikube.iso: failed to download: failed to download to temp file: download failed: 5 error(s) occurred: @@ -57,8 +57,8 @@ This error indicates that the host:port combination defined by HTTPS_PROXY or HT ## Unable to pull images..Client.Timeout exceeded while awaiting headers -``` -Unable to pull images, which may be OK: +```text +Unable to pull images, which may be OK: failed to pull image "k8s.gcr.io/kube-apiserver:v1.13.3": output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection @@ -69,7 +69,7 @@ This error indicates that the container runtime running within the VM does not h ## x509: certificate signed by unknown authority -``` +```text [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority @@ -83,7 +83,6 @@ Ask your IT department for the appropriate PEM file, and add it to: Then run `minikube delete` and `minikube start`. - ## Additional Information -- [Configure Docker to use a proxy server](https://docs.docker.com/network/proxy/) +* [Configure Docker to use a proxy server](https://docs.docker.com/network/proxy/) diff --git a/docs/insecure_registry.md b/docs/insecure_registry.md index 448ecdb4ecf5d3f4f0b3ab7ed4339d41da9dd7cf..fa02d603d322c4011ff184e04a5b41d6ae348f19 100644 --- a/docs/insecure_registry.md +++ b/docs/insecure_registry.md @@ -1,4 +1,4 @@ -## Enabling Docker Insecure Registry +# Enabling Docker Insecure Registry Minikube allows users to configure the docker engine's `--insecure-registry` flag. You can use the `--insecure-registry` flag on the `minikube start` command to enable insecure communication between the docker engine and registries listening to requests from the CIDR range. @@ -8,7 +8,9 @@ with TLS certificates. Because the default service cluster IP is known to be ava deployed inside the cluster by creating the cluster with `minikube start --insecure-registry "10.0.0.0/24"`. ## Private Container Registries + **GCR/ECR/Docker**: Minikube has an addon, `registry-creds` which maps credentials into Minikube to support pulling from Google Container Registry (GCR), Amazon's EC2 Container Registry (ECR), and Private Docker registries. You will need to run `minikube addons configure registry-creds` and `minikube addons enable registry-creds` to get up and running. An example of this is below: + ```shell $ minikube addons configure registry-creds Do you want to enable AWS Elastic Container Registry? [y/n]: n diff --git a/docs/networking.md b/docs/networking.md index 5efff8bf5fc88b87b4ca7b93c4eaeaeda99c2818..3e0bbef346926852943152892c25c8443b136398 100644 --- a/docs/networking.md +++ b/docs/networking.md @@ -1,4 +1,4 @@ -## Networking +# Networking The minikube VM is exposed to the host system via a host-only IP address, that can be obtained with the `minikube ip` command. Any services of type `NodePort` can be accessed over that IP address, on the NodePort. @@ -11,52 +11,52 @@ We also have a shortcut for fetching the minikube IP and a service's `NodePort`: `minikube service --url $SERVICE` -### LoadBalancer emulation (`minikube tunnel`) +## LoadBalancer emulation (`minikube tunnel`) -Services of type `LoadBalancer` can be exposed via the `minikube tunnel` command. +Services of type `LoadBalancer` can be exposed via the `minikube tunnel` command. 
````shell minikube tunnel -```` +```` Will output: -``` +```text out/minikube tunnel Password: ***** -Status: +Status: machine: minikube pid: 59088 route: 10.96.0.0/12 -> 192.168.99.101 minikube: Running services: [] - errors: + errors: minikube: no errors router: no errors loadbalancer emulator: no errors -```` +```` Tunnel might ask you for password for creating and deleting network routes. - -# Cleaning up orphaned routes + +## Cleaning up orphaned routes If the `minikube tunnel` shuts down in an unclean way, it might leave a network route around. -This case the ~/.minikube/tunnels.json file will contain an entry for that tunnel. -To cleanup orphaned routes, run: -```` +This case the ~/.minikube/tunnels.json file will contain an entry for that tunnel. +To cleanup orphaned routes, run: + +````shell minikube tunnel --cleanup ```` -# (Advanced) Running tunnel as root to avoid entering password multiple times +## (Advanced) Running tunnel as root to avoid entering password multiple times -`minikube tunnel` runs as a separate daemon, creates a network route on the host to the service CIDR of the cluster using the cluster's IP address as a gateway. -Adding a route requires root privileges for the user, and thus there are differences in how to run `minikube tunnel` depending on the OS. +`minikube tunnel` runs as a separate daemon, creates a network route on the host to the service CIDR of the cluster using the cluster's IP address as a gateway. +Adding a route requires root privileges for the user, and thus there are differences in how to run `minikube tunnel` depending on the OS. -Recommended way to use on Linux with KVM2 driver and MacOSX with Hyperkit driver: +Recommended way to use on Linux with KVM2 driver and MacOSX with Hyperkit driver: `sudo -E minikube tunnel` -Using VirtualBox on Windows, Mac and Linux _both_ `minikube start` and `minikube tunnel` needs to be started from the same Administrator user session otherwise [VBoxManage can't recognize the created VM](https://forums.virtualbox.org/viewtopic.php?f=6&t=81551). - +Using VirtualBox on Windows, Mac and Linux _both_ `minikube start` and `minikube tunnel` needs to be started from the same Administrator user session otherwise [VBoxManage can't recognize the created VM](https://forums.virtualbox.org/viewtopic.php?f=6&t=81551). diff --git a/docs/openid_connect_auth.md b/docs/openid_connect_auth.md index 27345e0a61aa550021e7b2bf90a5e33abcf05fcc..aeb016c6a511e89d545a57240c5ff51c555b4c01 100644 --- a/docs/openid_connect_auth.md +++ b/docs/openid_connect_auth.md @@ -2,8 +2,7 @@ Minikube `kube-apiserver` can be configured to support OpenID Connect Authentication. 
-Read more about OpenID Connect Authentication for Kubernetes here: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
-
+Read more about OpenID Connect Authentication for Kubernetes here: <https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens>

## Configuring the API Server

@@ -21,7 +20,7 @@ minikube start \

## Configuring kubectl

-You can use the kubectl `oidc` authenticator to create a kubeconfig as shown in the Kubernetes docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#option-1-oidc-authenticator
+You can use the kubectl `oidc` authenticator to create a kubeconfig as shown in the Kubernetes docs: <https://kubernetes.io/docs/reference/access-authn-authz/authentication/#option-1-oidc-authenticator>

`minikube start` already creates a kubeconfig that includes a `cluster`, in order to use it with your `oidc` authenticator kubeconfig, you can run:
diff --git a/docs/persistent_volumes.md b/docs/persistent_volumes.md
index ff811e7a5c73eff63f2b60dea3fcd38cb3ac0e28..0f734ee6169f408482d3a152315b987027c56480 100644
--- a/docs/persistent_volumes.md
+++ b/docs/persistent_volumes.md
@@ -1,7 +1,8 @@
-## Persistent Volumes
+# Persistent Volumes
+
Minikube supports [PersistentVolumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) of type `hostPath` out of the box. These PersistentVolumes are mapped to a directory inside the running Minikube instance (usually a VM, unless you use `--vm-driver=none`). For more information on how this works, read the Dynamic Provisioning section below.

-### A note on mounts, persistence, and Minikube hosts
+## A note on mounts, persistence, and Minikube hosts

Minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
diff --git a/docs/reusing_the_docker_daemon.md b/docs/reusing_the_docker_daemon.md
index 1d39b02cf0f10e9710e883ab8e5218d8129a4b44..53aeb730353b2c88a5f7ee0cec8e887c47ee0f67 100644
--- a/docs/reusing_the_docker_daemon.md
+++ b/docs/reusing_the_docker_daemon.md
@@ -1,4 +1,4 @@
-### Reusing the Docker daemon
+# Reusing the Docker daemon

When using a single VM of Kubernetes it's really handy to reuse the Docker daemon inside the VM; as this means you don't have to build on your host machine and push the image into a docker registry - you can just build inside the same docker daemon as minikube which speeds up local experiments.

@@ -7,7 +7,9 @@ To be able to work with the docker daemon on your mac/linux host use the docker-
```shell
eval $(minikube docker-env)
```
+
You should now be able to use docker on the command line on your host mac/linux machine talking to the docker daemon inside the minikube VM:
+
```shell
docker ps
```
diff --git a/docs/tunnel.md b/docs/tunnel.md
index 30aae4c99308da3ee14baa31afca87372c9d1124..ba83e7bf82013c56e3957606f179c97ef57490d1 100644
--- a/docs/tunnel.md
+++ b/docs/tunnel.md
@@ -57,7 +57,7 @@ and deleted via:
sudo ip route delete 10.0.0.0/8
```

-The routing table can be queried with `netstat -nr -f inet`
+The routing table can be queried with `netstat -nr -f inet`

### OSX

@@ -87,41 +87,41 @@ route ADD 10.0.0.0 MASK 255.0.0.0
and deleted via:

```shell
-route DELETE 10.0.0.0
+route DELETE 10.0.0.0
```

The routing table can be queried with `route print -4`

-### Handling unclean shutdowns
+### Handling unclean shutdowns
+
+Unclean shutdowns of the tunnel process can result in a partially executed cleanup process, leaving network routes in the routing table.
+We will keep track of the routes created by each tunnel in a centralized location in the main minikube config directory. +This list serves as a registry for tunnels containing information about -Unclean shutdowns of the tunnel process can result in partially executed cleanup process, leaving network routes in the routing table. -We will keep track of the routes created by each tunnel in a centralized location in the main minikube config directory. -This list serves as a registry for tunnels containing information about -- machine profile -- process ID +- machine profile +- process ID - and the route that was created -The cleanup command cleans the routes from both the routing table and the registry for tunnels that are not running: - -``` +The cleanup command cleans the routes from both the routing table and the registry for tunnels that are not running: + +```shell minikube tunnel --cleanup ``` -Updating the tunnel registry and the routing table is an atomic transaction: +Updating the tunnel registry and the routing table is an atomic transaction: -- create route in the routing table + create registry entry if both are successful, otherwise rollback -- delete route in the routing table + remove registry entry if both are successful, otherwise rollback +- create route in the routing table + create registry entry if both are successful, otherwise rollback +- delete route in the routing table + remove registry entry if both are successful, otherwise rollback -*Note*: because we don't support currently real multi cluster setup (due to overlapping CIDRs), the handling of running/not-running processes is not strictly required however it is forward looking. +*Note*: because we don't support currently real multi cluster setup (due to overlapping CIDRs), the handling of running/not-running processes is not strictly required however it is forward looking. -### Handling routing table conflicts +### Handling routing table conflicts -A routing table conflict happens when a destination CIDR of the route required by the tunnel overlaps with an existing route. -Minikube tunnel will warn the user if this happens and should not create the rule. +A routing table conflict happens when a destination CIDR of the route required by the tunnel overlaps with an existing route. +Minikube tunnel will warn the user if this happens and should not create the rule. There should not be any automated removal of conflicting routes. -*Note*: If the user removes the minikube config directory, this might leave conflicting rules in the network routing table that will have to be cleaned up manually. - +*Note*: If the user removes the minikube config directory, this might leave conflicting rules in the network routing table that will have to be cleaned up manually. ## Load Balancer Controller @@ -138,7 +138,7 @@ sleep Note that the Minikube ClusterIP can change over time (during system reboots) and this loop should also handle reconciliation of those changes. -## Handling multiple clusters +## Handling multiple clusters -Multiple clusters are currently not supported due to our inability to specify ServiceCIDR. +Multiple clusters are currently not supported due to our inability to specify ServiceCIDR. This causes conflicting routes having the same destination CIDR. 
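A minimal sketch of checking for leftover or conflicting routes before starting a new tunnel (assuming a Linux host; the 10.96.0.0/12 service CIDR is just the example used earlier in this document):

```shell
# Inspect the routing table for entries overlapping the cluster's service CIDR (e.g. 10.96.0.0/12)
ip route show

# Remove routes and tunnel registry entries left behind by tunnels that are no longer running
minikube tunnel --cleanup
```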
diff --git a/docs/vmdriver-none.md b/docs/vmdriver-none.md index 5ea0bfa6e27568304757910a081b94015648059c..0aeec11374d44d9362bb68074223d30ed61da6a6 100644 --- a/docs/vmdriver-none.md +++ b/docs/vmdriver-none.md @@ -14,7 +14,7 @@ The `none` driver supports releases of Debian, Ubuntu, and Fedora that are less Most continuous integration environments are already running inside a VM, and may not supported nested virtualization. The `none` driver was designed for this use case. Here is an example, that runs minikube from a non-root user, and ensures that the latest stable kubectl is installed: -``` +```shell curl -Lo minikube \ https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \ && sudo install minikube /usr/local/bin/