Commit 8e6dbf80 authored by Anders F Björklund

Run markdownlint on all the md files in docs

Parent fbd3da22
{
"no-inline-html": false,
"no-trailing-punctuation": false,
"blanks-around-fences": false,
"commands-show-output": false,
"ul-style": false,
"line_length": false
"line-length": false
}
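With this configuration in place, the docs can be checked locally; a minimal sketch, assuming the Node.js `markdownlint-cli` package (which provides the `markdownlint` command) is installed:
```shell
# Assumes: npm install -g markdownlint-cli
# Lint every markdown file under docs/ using the repository's config.
markdownlint --config .markdownlint.json docs/*.md
```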
# Advanced Topics and Tutorials
## Cluster Configuration
* **Alternative Runtimes** ([alternative_runtimes.md](alternative_runtimes.md)): How to run minikube with rkt as the container runtime
......
# Accessing Host Resources From Inside A Pod
## When you have a VirtualBox driver
In order to access host resources from inside a pod, run the following command to determine the host IP you can use:
```shell
ip addr
```
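With the VirtualBox driver, the usable address typically lives on a `vboxnet*` host-only interface; a sketch for narrowing the output (the interface name is an assumption and may differ on your system):
```shell
# Show just the VirtualBox host-only interface(s) and their addresses.
ip addr show | grep -A2 vboxnet
```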
......
# Add-ons
Minikube has a set of built-in addons that can be enabled, disabled, and opened inside of the local k8s environment. Below is an example of this functionality for the `heapster` addon:
```shell
$ minikube addons list
- registry: disabled
......@@ -26,6 +27,7 @@ Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...
Created new window in existing browser session.
```
The currently supported addons include:
* [Kubernetes Dashboard](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard)
......
# Alternative runtimes
## Using rkt container engine
To use [rkt](https://github.com/coreos/rkt) as the container runtime run:
......@@ -6,8 +8,7 @@ To use [rkt](https://github.com/coreos/rkt) as the container runtime run:
$ minikube start --container-runtime=rkt
```
## Using CRI-O
To use [CRI-O](https://github.com/kubernetes-sigs/cri-o) as the container runtime, run:
......@@ -27,7 +28,7 @@ $ minikube start --container-runtime=cri-o \
--extra-config=kubelet.image-service-endpoint=unix:///var/run/crio/crio.sock
```
## Using containerd
To use [containerd](https://github.com/containerd/containerd) as the container runtime, run:
......
# Caching Images
Minikube supports caching non-minikube images using the `minikube cache` command. Images can be added to the cache by running `minikube cache add <img>`, and deleted by running `minikube cache delete <img>`.
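For example (the image name is illustrative):
```shell
# Cache an image inside the minikube VM, then remove it again.
minikube cache add ubuntu:16.04
minikube cache delete ubuntu:16.04
```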
......
# Configuring Kubernetes
Minikube has a "configurator" feature that allows users to configure the Kubernetes components with arbitrary values.
To use this feature, you can use the `--extra-config` flag on the `minikube start` command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
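A sketch of setting multiple options in one invocation (the specific component keys shown are illustrative; any `component.key=value` pair accepted by that component's flags works):
```shell
# Each --extra-config takes component.key=value; repeat the flag per option.
minikube start \
  --extra-config=apiserver.v=4 \
  --extra-config=kubelet.max-pods=100
```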
## Kubeadm bootstrapper
The kubeadm bootstrapper can be configured by the `--extra-config` flag on the `minikube start` command. It takes a string of the form `component.key=value` where `component` is one of the strings
......
# Contributing
* **New contributors** ([contributors.md](https://github.com/kubernetes/minikube/blob/master/CONTRIBUTING.md)): Process for new contributors, CLA instructions
* **Roadmap** ([roadmap.md](roadmap.md)): The roadmap for future minikube development
## New Features and Dependencies
* **Adding a dependency** ([adding_a_dependency.md](adding_a_dependency.md)): How to add or update vendored code
* **Adding a new addon** ([adding_an_addon.md](adding_an_addon.md)): How to add a new addon to minikube for `minikube addons`
......@@ -12,6 +13,7 @@
* **Adding a new driver** ([adding_driver.md](adding_driver.md)): How to add a new driver to minikube for `minikube create --vm-driver=<driver>`
## Building and Releasing
* **Build Guide** ([build_guide.md](build_guide.md)): How to build minikube from source
* **ISO Build Guide** ([minikube_iso.md](minikube_iso.md)): How to build and hack on the ISO image that minikube uses
......
# Adding a New Dependency
Minikube uses `dep` to manage vendored dependencies.
See the `dep` [documentation](https://golang.github.io/dep/docs/introduction.html) for installation and usage instructions.
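For instance, adding a new dependency looks roughly like this (the package path is illustrative):
```shell
# dep resolves the package, updates Gopkg.toml/Gopkg.lock,
# and writes the code into vendor/.
dep ensure -add github.com/pkg/errors
```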
......
# Adding a New Addon
To add a new addon to minikube the following steps are required:
* For the new addon's .yaml file(s):
......@@ -15,12 +16,12 @@ To add a new addon to minikube the following steps are required:
var settings = []Setting{
	...,
	// add other addon setting
	{
		name:        "efk",
		set:         SetBool,
		validations: []setFn{IsValidAddon},
		callbacks:   []setFn{EnableOrDisableAddon},
	},
}
```
......@@ -32,22 +33,22 @@ To add a new addon to minikube the following steps are required:
	...,
	// add other addon asset
	"efk": NewAddon([]*BinDataAsset{
		NewBinDataAsset(
			"deploy/addons/efk/efk-configmap.yaml",
			constants.AddonsPath,
			"efk-configmap.yaml",
			"0640"),
		NewBinDataAsset(
			"deploy/addons/efk/efk-rc.yaml",
			constants.AddonsPath,
			"efk-rc.yaml",
			"0640"),
		NewBinDataAsset(
			"deploy/addons/efk/efk-svc.yaml",
			constants.AddonsPath,
			"efk-svc.yaml",
			"0640"),
	}, false, "efk"),
}
```
......
# Adding new driver (Deprecated)
New drivers should be added into <https://github.com/machine-drivers>
Minikube relies on docker machine drivers to manage machines. This document talks about how to
add an existing docker machine driver into minikube registry, so that minikube can use the driver
......@@ -26,7 +26,7 @@ Registry is what minikube uses to register all the supported drivers. The driver
their drivers in registry, and minikube runtime will look at the registry to find a driver and use the
driver metadata to determine what workflow to apply while those drivers are being used.
The godoc of registry is available here: <https://godoc.org/k8s.io/minikube/pkg/minikube/registry>
[DriverDef](https://godoc.org/k8s.io/minikube/pkg/minikube/registry#DriverDef) is the main
struct to define a driver metadata. Essentially, you need to define 4 things at most, which is
......@@ -52,33 +52,33 @@ All drivers are located in `k8s.io/minikube/pkg/minikube/drivers`. Take `vmwaref
package vmwarefusion
import (
"github.com/docker/machine/drivers/vmwarefusion"
"github.com/docker/machine/libmachine/drivers"
cfg "k8s.io/minikube/pkg/minikube/config"
"k8s.io/minikube/pkg/minikube/constants"
"k8s.io/minikube/pkg/minikube/registry"
"github.com/docker/machine/drivers/vmwarefusion"
"github.com/docker/machine/libmachine/drivers"
cfg "k8s.io/minikube/pkg/minikube/config"
"k8s.io/minikube/pkg/minikube/constants"
"k8s.io/minikube/pkg/minikube/registry"
)
func init() {
	registry.Register(registry.DriverDef{
		Name:          "vmwarefusion",
		Builtin:       true,
		ConfigCreator: createVMwareFusionHost,
		DriverCreator: func() drivers.Driver {
			return vmwarefusion.NewDriver("", "")
		},
	})
}
func createVMwareFusionHost(config cfg.MachineConfig) interface{} {
	d := vmwarefusion.NewDriver(cfg.GetMachineName(), constants.GetMinipath()).(*vmwarefusion.Driver)
	d.Boot2DockerURL = config.Downloader.GetISOFileURI(config.MinikubeISO)
	d.Memory = config.Memory
	d.CPU = config.CPUs
	d.DiskSize = config.DiskSize
	d.SSHPort = 22
	d.ISO = d.ResolveStorePath("boot2docker.iso")
	return d
}
```
......@@ -98,4 +98,3 @@ In summary, the process includes the following steps:
2. Add import in `pkg/minikube/cluster/default_drivers.go`
Any Questions: please ping your friend [@anfernee](https://github.com/anfernee)
# Build Guide
## Build Requirements
* A recent Go distribution (>=1.12)
* If you're not on Linux, you'll need a Docker installation
* minikube requires at least 4GB of RAM to compile, which can be problematic when using docker-machine
### Prerequisites for different GNU/Linux distributions
#### Fedora
On Fedora you need to install _glibc-static_
```shell
$ sudo dnf install -y glibc-static
```
### Building from Source
Clone minikube into your go path under `$GOPATH/src/k8s.io`
```shell
$ git clone https://github.com/kubernetes/minikube.git $GOPATH/src/k8s.io/minikube
$ cd $GOPATH/src/k8s.io/minikube
......@@ -25,20 +28,23 @@ Note: Make sure that you uninstall any previous versions of minikube before buil
from the source.
### Building from Source in Docker (using Debian stretch image with golang)
Clone minikube:
```shell
$ git clone https://github.com/kubernetes/minikube.git
```
Build (cross compile for linux / OS X and Windows) using make:
```shell
$ cd minikube
$ docker run --rm -v "$PWD":/go/src/k8s.io/minikube -w /go/src/k8s.io/minikube golang:stretch make cross
```
Check "out" directory:
```shell
$ ls out/
docker-machine-driver-hyperkit.d minikube minikube.d test.d
docker-machine-driver-kvm2.d minikube-linux-amd64 storage-provisioner.d
```
### Run Instructions
......@@ -77,11 +83,14 @@ You can run these against minikube by following these steps:
* Run `make quick-release` in the k8s repo.
* Start up a minikube cluster with: `minikube start`.
* Set the following two environment variables:
```shell
export KUBECONFIG=$HOME/.kube/config
export KUBERNETES_CONFORMANCE_TEST=y
```
* Run the tests (from the k8s repo):
```shell
go run hack/e2e.go -v --test --test_args="--ginkgo.focus=\[Conformance\]" --check-version-skew=false
```
......
# CI Builds
We publish CI builds of minikube, built at every Pull Request. Builds are available at (substitute in the relevant PR number):
- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-darwin-amd64>
- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-linux-amd64>
- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-windows-amd64.exe>
We also publish CI builds of minikube-iso, built at every Pull Request that touches deploy/iso/minikube-iso. Builds are available at:
- <https://storage.googleapis.com/minikube-builds/PR_NUMBER/minikube-testing.iso>
# minikube ISO image
This includes the configuration for an alternative bootable ISO image meant to be used in conjunction with minikube.
It includes:
- systemd as the init system
- rkt
- docker
......@@ -13,9 +14,10 @@ It includes:
### Requirements
* Linux
```shell
sudo apt-get install build-essential gnupg2 p7zip-full git wget cpio python \
  unzip bc gcc-multilib automake libtool locales
```
Either import your private key or generate a sign-only key using `gpg2 --gen-key`.
......@@ -69,7 +71,6 @@ $ git status
### Saving buildroot/kernel configuration changes
To make any kernel configuration changes and save them, execute:
```shell
......
......@@ -22,5 +22,4 @@ Here are some specific minikube features that align with our goal:
## Non-Goals
* Simplifying Kubernetes production deployment experience
* Supporting all possible deployment configurations of Kubernetes like various types of storage, networking, etc.
......@@ -7,16 +7,16 @@
## Build a new ISO
Major releases always get a new ISO. Minor bugfixes may or may not require it: check for changes in the `deploy/iso` folder.
Note: you can build the ISO using the `hack/jenkins/build_iso.sh` script locally.
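A local run is then roughly the following (a sketch; the script is assumed to read the same parameters as the Jenkins job, e.g. `ISO_VERSION` and `ISO_BUCKET`, from the environment):
```shell
# From a minikube source checkout; the values shown are illustrative.
ISO_VERSION=v0.0.0-test ISO_BUCKET=minikube/iso ./hack/jenkins/build_iso.sh
```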
* Navigate to the minikube ISO jenkins job
* Ensure that you are logged in (top right)
* Click "▶️ Build with Parameters" (left)
* For `ISO_VERSION`, type in the intended release version (same as the minikube binary's version)
* For `ISO_BUCKET`, type in `minikube/iso`
* Click *Build*
The build will take roughly 50 minutes.
......@@ -47,7 +47,7 @@ env BUILD_IN_DOCKER=y make cross checksum
Once submitted, HEAD will use the new ISO. Please pay attention to test failures, as this is our integration test across platforms. If there are known acceptable failures, please add a PR comment linking to the appropriate issue.
## Update Release Notes
Run the following script to update the release notes:
......@@ -59,11 +59,11 @@ Merge the output into CHANGELOG.md. See [PR#3175](https://github.com/kubernetes/
## Tag the Release
NOTE: Confirm that all release-related PR's have been submitted before doing this step.
Do this in a direct clone of the upstream kubernetes/minikube repository (not your fork!):
```shell
version=<new version number>
git fetch
git checkout master
......@@ -76,12 +76,12 @@ git push origin v$version
This step uses the git tag to publish new binaries to GCS and create a github release:
* Navigate to the minikube "Release" jenkins job
* Ensure that you are logged in (top right)
* Click "▶️ Build with Parameters" (left)
* `VERSION_MAJOR`, `VERSION_MINOR`, and `VERSION_BUILD` should reflect the values in your Makefile
* For `ISO_SHA256`, run: `gsutil cat gs://minikube/iso/minikube-v<version>.iso.sha256`
* Click *Build*
## Check releases.json
......@@ -95,8 +95,8 @@ These are downstream packages that are being maintained by others and how to upg
| Package Manager | URL | TODO |
| --- | --- | --- |
| Arch Linux AUR | <https://aur.archlinux.org/packages/minikube/> | "Flag as package out-of-date" |
| Brew Cask | <https://github.com/Homebrew/homebrew-cask/blob/master/Casks/minikube.rb> | The release job creates a new PR in [Homebrew/homebrew-cask](https://github.com/Homebrew/homebrew-cask) with an updated version and SHA256, double check that it's created. |
## Verification
......@@ -104,7 +104,7 @@ Verify release checksums by running`make check-release`
## Update docs
If there are major changes, please send a PR to update <https://kubernetes.io/docs/setup/minikube/>
## Announce!
......
# Debugging Issues With Minikube
To debug issues with minikube (not *Kubernetes* but **minikube** itself), you can use the `-v` flag to see debug level info. The specified values for `-v` will do the following (the values are all encompassing in that higher values will give you all lower value outputs as well):
* `--v=0` will output **INFO** level logs
* `--v=1` will output **WARNING** level logs
* `--v=2` will output **ERROR** level logs
......@@ -11,6 +13,5 @@ Example:
If you need to access additional tools for debugging, minikube also includes the [CoreOS toolbox](https://github.com/coreos/toolbox).
You can ssh into the toolbox and access these additional commands using:
`minikube ssh toolbox`
......@@ -14,7 +14,7 @@ the host PATH:
* [HyperV](#hyperv-driver)
* [VMware](#vmware-unified-driver)
## KVM2 driver
To install the KVM2 driver, first install and configure the prereqs:
......@@ -36,14 +36,14 @@ sudo apt install libvirt-bin libvirt-daemon-system qemu-kvm
sudo yum install libvirt-daemon-kvm qemu-kvm
```
Enable, start, and verify the libvirtd service has started:
```shell
sudo systemctl enable libvirtd.service
sudo systemctl start libvirtd.service
sudo systemctl status libvirtd.service
```
Then you will need to add yourself to the libvirt group (older distributions may use `libvirtd` instead):
`sudo usermod -a -G libvirt $(whoami)`
......@@ -59,8 +59,7 @@ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-
&& sudo install docker-machine-driver-kvm2 /usr/local/bin/
```
NOTE: Ubuntu users on a release older than 18.04, or anyone experiencing [#3206: Error creating new host: dial tcp: missing address.](https://github.com/kubernetes/minikube/issues/3206), will need to build their own driver until [#3689](https://github.com/kubernetes/minikube/issues/3689) is resolved. Building this binary will require [Go v1.11](https://golang.org/dl/) or newer to be installed.
```shell
sudo apt install libvirt-dev
......@@ -90,18 +89,17 @@ and run minikube as usual:
minikube start
```
## Hyperkit driver
The Hyperkit driver will eventually replace the existing xhyve driver.
It is built from the minikube source tree, and uses [moby/hyperkit](http://github.com/moby/hyperkit) as a Go library.
To install the hyperkit driver via brew:
```shell
brew install docker-machine-driver-hyperkit
# docker-machine-driver-hyperkit needs root owner and uid
sudo chown root:wheel /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
sudo chmod u+s /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
```
......@@ -139,7 +137,7 @@ and run minikube as usual:
minikube start
```
## HyperV driver
Hyper-v users may need to create a new external network switch as described [here](https://docs.docker.com/machine/drivers/hyper-v/). This step may prevent a problem in which `minikube start` hangs indefinitely, unable to ssh into the minikube virtual machine. In this case, add the `--hyperv-virtual-switch=switch-name` argument to the `minikube start` command.
......@@ -150,6 +148,7 @@ To use the driver:
```shell
minikube start --vm-driver hyperv --hyperv-virtual-switch=switch-name
```
or, to use hyperv as a default driver:
```shell
......@@ -162,12 +161,12 @@ and run minikube as usual:
minikube start
```
## VMware unified driver
The VMware unified driver will eventually replace the existing vmwarefusion driver.
The new unified driver supports both VMware Fusion (on macOS) and VMware Workstation (on Linux and Windows)
To install the vmware unified driver, head over to <https://github.com/machine-drivers/docker-machine-driver-vmware/releases> and download the release for your operating system.
The driver must be:
......@@ -201,4 +200,3 @@ and run minikube as usual:
```shell
minikube start
```
......@@ -3,7 +3,7 @@
## Config option variables
minikube supports passing environment variables instead of flags for every value listed in `minikube config list`. This is done by passing an environment variable with the prefix `MINIKUBE_`.
For example, the `minikube start --iso-url="$ISO_URL"` flag can also be set by setting the `MINIKUBE_ISO_URL="$ISO_URL"` environment variable.
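That is, the following two invocations behave the same (the URL is illustrative):
```shell
# Flag form and environment-variable form of the same setting.
minikube start --iso-url="https://example.com/minikube.iso"
MINIKUBE_ISO_URL="https://example.com/minikube.iso" minikube start
```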
......@@ -46,7 +46,7 @@ MINIKUBE_ENABLE_PROFILING=1 minikube start
Output:
```text
2017/01/09 13:18:00 profile: cpu profiling enabled, /tmp/profile933201292/cpu.pprof
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
......
......@@ -32,35 +32,38 @@ host to the minikube VM. Doing so has a few prerequisites:
- Once you reboot the system after doing the above, you should be ready to use
GPUs with kvm2. Run the following command to start minikube:
```shell
minikube start --vm-driver kvm2 --gpu
```
This command will check if all the above conditions are satisfied and
pass through spare GPUs found on the host to the VM.
If this succeeded, run the following commands:
```shell
minikube addons enable nvidia-gpu-device-plugin
minikube addons enable nvidia-driver-installer
```
This will install the NVIDIA driver (that works for GeForce/Quadro cards)
on the VM.
- If everything succeeded, you should be able to see `nvidia.com/gpu` in the
capacity:
```shell
kubectl get nodes -ojson | jq .items[].status.capacity
```
### Where can I learn more about GPU passthrough?
See the excellent documentation at
<https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF>
### Why are so many manual steps required to use GPUs with kvm2 on minikube?
These steps require elevated privileges which minikube doesn't run with, and they
are disruptive to the host, so we decided not to do them automatically.
## Using NVIDIA GPU on minikube on Linux with `--vm-driver=none`
NOTE: The approach used to expose GPUs here is different from the approach used
......@@ -70,26 +73,28 @@ to expose GPUs with `--vm-driver=kvm2`. Please don't mix these instructions.
- Install the nvidia driver, nvidia-docker and configure docker with nvidia as
the default runtime. See instructions at
<https://github.com/NVIDIA/nvidia-docker>
- Start minikube:
```shell
minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost
```
- Install NVIDIA's device plugin:
```shell
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.10/nvidia-device-plugin.yml
```
## Why does minikube not support NVIDIA GPUs on macOS?
The VM drivers supported by minikube on macOS don't support GPU passthrough:
- [mist64/xhyve#108](https://github.com/mist64/xhyve/issues/108)
- [moby/hyperkit#159](https://github.com/moby/hyperkit/issues/159)
- [VirtualBox docs](http://www.virtualbox.org/manual/ch09.html#pcipassthrough)
Also:
- For quite a while, all Mac hardware (both laptops and desktops) have come with
Intel or AMD GPUs (and not with NVIDIA GPUs). Recently, Apple added [support
for eGPUs](https://support.apple.com/en-us/HT208544), but even then all the
......@@ -98,8 +103,8 @@ Also:
- nvidia-docker [doesn't support
macOS](https://github.com/NVIDIA/nvidia-docker/issues/101) either.
## Why does minikube not support NVIDIA GPUs on Windows?
minikube supports the Windows host through Hyper-V or VirtualBox.
- VirtualBox doesn't support PCI passthrough for [Windows
......
# Mounting Host Folders
`minikube mount /path/to/dir/to/mount:/vm-mount-path` is the recommended way to mount directories into minikube so that they can be used in your local Kubernetes cluster. The command works on all supported platforms. Below is an example workflow for using `minikube mount`:
```shell
......
# Using Minikube with an HTTP Proxy
minikube requires access to the internet via HTTP, HTTPS, and DNS protocols. If an HTTP proxy is required to access the internet, you may need to pass the proxy connection information to both minikube and Docker using environment variables:
......@@ -17,7 +17,7 @@ One important note: If NO_PROXY is required by non-Kubernetes applications, such
### macOS and Linux
```shell
export HTTP_PROXY=http://<proxy hostname:port>
export HTTPS_PROXY=https://<proxy hostname:port>
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.1/24
......@@ -30,7 +30,7 @@ To make the exported variables permanent, consider adding the declarations to ~/
### Windows
```shell
set HTTP_PROXY=http://<proxy hostname:port>
set HTTPS_PROXY=https://<proxy hostname:port>
set NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.1/24
......@@ -45,7 +45,7 @@ To set these environment variables permanently, consider adding these to your [s
### unable to cache ISO... connection refused
```text
Unable to start VM: unable to cache ISO: https://storage.googleapis.com/minikube/iso/minikube.iso:
failed to download: failed to download to temp file: download failed: 5 error(s) occurred:
......@@ -57,8 +57,8 @@ This error indicates that the host:port combination defined by HTTPS_PROXY or HT
## Unable to pull images...Client.Timeout exceeded while awaiting headers
```text
Unable to pull images, which may be OK:
failed to pull image "k8s.gcr.io/kube-apiserver:v1.13.3": output: Error response from daemon:
Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection
......@@ -69,7 +69,7 @@ This error indicates that the container runtime running within the VM does not h
## x509: certificate signed by unknown authority
```text
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.3:
output: Error response from daemon:
Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
......@@ -83,7 +83,6 @@ Ask your IT department for the appropriate PEM file, and add it to:
Then run `minikube delete` and `minikube start`.
## Additional Information
* [Configure Docker to use a proxy server](https://docs.docker.com/network/proxy/)
# Enabling Docker Insecure Registry
Minikube allows users to configure the docker engine's `--insecure-registry` flag. You can use the `--insecure-registry` flag on the
`minikube start` command to enable insecure communication between the docker engine and registries listening to requests from the CIDR range.
......@@ -8,7 +8,9 @@ with TLS certificates. Because the default service cluster IP is known to be ava
deployed inside the cluster by creating the cluster with `minikube start --insecure-registry "10.0.0.0/24"`.
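As a block, that is:
```shell
# Allow insecure (non-TLS) access to registries in the default service CIDR.
minikube start --insecure-registry "10.0.0.0/24"
```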
## Private Container Registries
**GCR/ECR/Docker**: Minikube has an addon, `registry-creds`, which maps credentials into Minikube to support pulling from Google Container Registry (GCR), Amazon's EC2 Container Registry (ECR), and Private Docker registries. You will need to run `minikube addons configure registry-creds` and `minikube addons enable registry-creds` to get up and running. An example of this is below:
```shell
$ minikube addons configure registry-creds
Do you want to enable AWS Elastic Container Registry? [y/n]: n
......
# Networking
The minikube VM is exposed to the host system via a host-only IP address, which can be obtained with the `minikube ip` command.
Any services of type `NodePort` can be accessed over that IP address, on the NodePort.
......@@ -11,52 +11,52 @@ We also have a shortcut for fetching the minikube IP and a service's `NodePort`:
`minikube service --url $SERVICE`
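For example (the service name is hypothetical):
```shell
# Prints a URL combining the minikube IP with the service's NodePort,
# e.g. http://192.168.99.100:30000
minikube service --url hello-minikube
```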
## LoadBalancer emulation (`minikube tunnel`)
Services of type `LoadBalancer` can be exposed via the `minikube tunnel` command.
````shell
minikube tunnel
````
Will output:
```text
out/minikube tunnel
Password: *****
Status:
machine: minikube
pid: 59088
route: 10.96.0.0/12 -> 192.168.99.101
minikube: Running
services: []
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
````
The tunnel might ask you for a password when creating and deleting network routes.
## Cleaning up orphaned routes
If the `minikube tunnel` shuts down in an unclean way, it might leave a network route around.
In this case, the ~/.minikube/tunnels.json file will contain an entry for that tunnel.
To clean up orphaned routes, run:
````shell
minikube tunnel --cleanup
````
## (Advanced) Running tunnel as root to avoid entering password multiple times
`minikube tunnel` runs as a separate daemon and creates a network route on the host to the service CIDR of the cluster, using the cluster's IP address as a gateway.
Adding a route requires root privileges for the user, and thus there are differences in how to run `minikube tunnel` depending on the OS.
The recommended way to use it on Linux with the KVM2 driver and on macOS with the Hyperkit driver:
`sudo -E minikube tunnel`
When using VirtualBox on Windows, Mac and Linux, _both_ `minikube start` and `minikube tunnel` need to be started from the same Administrator user session, otherwise [VBoxManage can't recognize the created VM](https://forums.virtualbox.org/viewtopic.php?f=6&t=81551).
......@@ -2,8 +2,7 @@
Minikube `kube-apiserver` can be configured to support OpenID Connect Authentication.
Read more about OpenID Connect Authentication for Kubernetes here: <https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens>
## Configuring the API Server
......@@ -21,7 +20,7 @@ minikube start \
## Configuring kubectl
You can use the kubectl `oidc` authenticator to create a kubeconfig as shown in the Kubernetes docs: <https://kubernetes.io/docs/reference/access-authn-authz/authentication/#option-1-oidc-authenticator>
`minikube start` already creates a kubeconfig that includes a `cluster`; in order to use it with your `oidc` authenticator kubeconfig, you can run:
......
# Persistent Volumes
Minikube supports [PersistentVolumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) of type `hostPath` out of the box. These PersistentVolumes are mapped to a directory inside the running Minikube instance (usually a VM, unless you use `--vm-driver=none`). For more information on how this works, read the Dynamic Provisioning section below.
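A minimal `hostPath` PersistentVolume can be created like this (a sketch; the name, capacity, and path are illustrative):
```shell
# Apply a hostPath PersistentVolume; the path lives inside the minikube VM.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/pv0001/
EOF
```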
## A note on mounts, persistence, and Minikube hosts
Minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
......
# Reusing the Docker daemon
When using a single VM of Kubernetes, it's really handy to reuse the Docker daemon inside the VM, as this means you don't have to build on your host machine and push the image into a docker registry - you can just build inside the same docker daemon as minikube, which speeds up local experiments.
......@@ -7,7 +7,9 @@ To be able to work with the docker daemon on your mac/linux host use the docker-
```shell
eval $(minikube docker-env)
```
You should now be able to use docker on the command line on your host mac/linux machine talking to the docker daemon inside the minikube VM:
```shell
docker ps
```
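Anything built this way is immediately visible to the cluster, with no push step (the image tag is illustrative):
```shell
# Build against the minikube Docker daemon; reference the tag from a pod spec
# with imagePullPolicy: IfNotPresent (or Never) so the local image is used.
docker build -t my-image:v1 .
```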
......
......@@ -57,7 +57,7 @@ and deleted via:
sudo ip route delete 10.0.0.0/8
```
The routing table can be queried with `netstat -nr -f inet`
### OSX
......@@ -87,41 +87,41 @@ route ADD 10.0.0.0 MASK 255.0.0.0 <minikube ip>
and deleted via:
```shell
route DELETE 10.0.0.0
```
The routing table can be queried with `route print -4`
### Handling unclean shutdowns
Unclean shutdowns of the tunnel process can result in a partially executed cleanup process, leaving network routes in the routing table.
We will keep track of the routes created by each tunnel in a centralized location in the main minikube config directory.
This list serves as a registry for tunnels containing information about
- machine profile
- process ID
- and the route that was created
The cleanup command cleans the routes from both the routing table and the registry for tunnels that are not running:
```shell
minikube tunnel --cleanup
```
Updating the tunnel registry and the routing table is an atomic transaction:
- create route in the routing table + create registry entry if both are successful, otherwise rollback
- delete route in the routing table + remove registry entry if both are successful, otherwise rollback
*Note*: because we don't currently support a real multi-cluster setup (due to overlapping CIDRs), the handling of running/not-running processes is not strictly required; however, it is forward looking.
### Handling routing table conflicts
A routing table conflict happens when a destination CIDR of the route required by the tunnel overlaps with an existing route.
Minikube tunnel will warn the user if this happens and should not create the rule.
There should not be any automated removal of conflicting routes.
*Note*: If the user removes the minikube config directory, this might leave conflicting rules in the network routing table that will have to be cleaned up manually.
## Load Balancer Controller
......@@ -138,7 +138,7 @@ sleep
Note that the Minikube ClusterIP can change over time (during system reboots) and this loop should also handle reconciliation of those changes.
## Handling multiple clusters
Multiple clusters are currently not supported due to our inability to specify ServiceCIDR.
This causes conflicting routes having the same destination CIDR.
......@@ -14,7 +14,7 @@ The `none` driver supports releases of Debian, Ubuntu, and Fedora that are less
Most continuous integration environments are already running inside a VM, and may not support nested virtualization. The `none` driver was designed for this use case. Here is an example that runs minikube from a non-root user, and ensures that the latest stable kubectl is installed:
```shell
curl -Lo minikube \
https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
&& sudo install minikube /usr/local/bin/
......