`minikube start` supports these additional HyperKit-specific flags:
* **`--hyperkit-vpnkit-sock`**: Location of the VPNKit socket used for networking. If empty, disables HyperKit VPNKitSock; if 'auto', uses the Docker for Mac VPNKit connection; otherwise, uses the specified VSock.
* **`--hyperkit-vsock-ports`**: List of guest VSock ports that should be exposed as sockets on the host
* **`--nfs-share`**: Local folders to share with the guest via NFS mounts
* **`--nfs-shares-root`**: Where to root the NFS shares (default "/nfsshares")
* **`--uuid`**: Provide a VM UUID to restore the MAC address
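As an illustration, several of these flags can be combined in a single start command. The share path below is a placeholder, and the `--driver=hyperkit` selection is an assumption for this sketch; adjust both to your setup:

```shell
# Illustrative only: paths and values below are placeholders.
# Start minikube on the HyperKit driver, using Docker for Mac's VPNKit
# socket for networking and sharing a local folder with the guest via NFS.
minikube start --driver=hyperkit \
  --hyperkit-vpnkit-sock=auto \
  --nfs-share=/Users/me/projects \
  --nfs-shares-root=/nfsshares
```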
The `minikube start` command supports 3 additional KVM-specific flags:
* **`--gpu`**: Enable experimental NVIDIA GPU support in minikube
* **`--hidden`**: Hide the hypervisor signature from the guest in minikube
* **`--kvm-network`**: The KVM network name
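For illustration, the three flags above could be combined in one start command. The `kvm2` driver name and the `default` network are assumptions for this sketch, not values from this page:

```shell
# Illustrative only: enable experimental GPU support, hide the
# hypervisor signature, and attach to a named libvirt network.
minikube start --driver=kvm2 \
  --gpu \
  --hidden \
  --kvm-network=default
```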
## Issues
* `minikube` will repeatedly ask for the root password if the user is not in the correct `libvirt` group [#3467](https://github.com/kubernetes/minikube/issues/3467)
* `Machine didn't return an IP after 120 seconds` when a firewall prevents VM network access [#3566](https://github.com/kubernetes/minikube/issues/3566)
* `unable to set user and group to '65534:992'` when `dynamic ownership = 1` in `qemu.conf` [#4467](https://github.com/kubernetes/minikube/issues/4467)
* KVM VMs cannot be used simultaneously with VirtualBox [#4913](https://github.com/kubernetes/minikube/issues/4913)
A LoadBalancer service is the standard way to expose a service to the internet. With this method, each service gets its own IP address.
## Using `minikube tunnel`
Services of type `LoadBalancer` can be exposed via the `minikube tunnel` command. It runs until Ctrl-C is pressed.
```
...
loadbalancer emulator: no errors
```
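A minimal end-to-end sketch, assuming a running cluster; the deployment name and image below are illustrative, not part of this page:

```shell
# Terminal 1: start the tunnel; it keeps running until you press Ctrl-C.
minikube tunnel

# Terminal 2: create a Deployment and expose it as a LoadBalancer service.
kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
kubectl expose deployment hello-minikube --type=LoadBalancer --port=8080

# While the tunnel runs, EXTERNAL-IP should be populated instead of <pending>.
kubectl get service hello-minikube
```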
`minikube tunnel` may ask for your password when creating and deleting network routes.
`minikube tunnel` runs as a separate daemon, creating a network route on the host to the service CIDR of the cluster using the cluster's IP address as a gateway. The tunnel command exposes the external IP directly to any program running on the host operating system.

### DNS resolution (experimental)

If you are on macOS, the tunnel command also allows DNS resolution for Kubernetes services from the host.
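As a hedged example, with the tunnel running you could try resolving a service's cluster DNS name directly from the macOS host; the service name below is a placeholder:

```shell
# Illustrative: resolve a cluster service's DNS name from the host.
# Requires `minikube tunnel` to be running; the service name is a placeholder.
nslookup hello-minikube.default.svc.cluster.local
```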
### Cleaning up orphaned routes
If `minikube tunnel` shuts down uncleanly, it may leave orphaned network routes on your system. If this happens, the `~/.minikube/tunnels.json` file will contain an entry for that tunnel. To remove orphaned routes, run:
```shell
minikube tunnel --cleanup
```
### Avoiding password prompts
Adding a route requires root privileges, so there are differences in how to run `minikube tunnel` depending on the OS. If you want to avoid entering the root password, consider setting NOPASSWD for the `ip` and `route` commands:
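One possible sudoers fragment on Linux, as a sketch: the username and binary paths are assumptions and vary by distribution (verify them with `which ip` and `which route`), and sudoers files should always be edited via `visudo`:

```
# /etc/sudoers.d/minikube-tunnel -- illustrative; adjust user and paths.
youruser ALL=(ALL) NOPASSWD: /usr/sbin/ip, /usr/sbin/route
```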