Kind is a tool for running Kubernetes clusters on the local machine. Unlike Minikube, a Kind cluster is multi-"node": each node corresponds to a container running containerd.
Installation depends on Docker and Go 1.11+:

```shell
GO111MODULE="on" GOPROXY=https://goproxy.io go get sigs.k8s.io/kind@v0.5.1
```
Create a cluster using the latest node image:

```shell
kind create cluster
```

The node image is large; be patient while it downloads.
To build from source, first check the Kubernetes source out into $(go env GOPATH)/src/k8s.io/kubernetes, then run:

```shell
kind build node-image --image docker.gmem.cc/kindest/node:latest
kind create cluster --image docker.gmem.cc/kindest/node:latest
```
To delete a cluster:

```shell
kind delete cluster --name kind
```
Set the KUBECONFIG environment variable with the following command:

```shell
export KUBECONFIG="$(kind get kubeconfig-path)"
```

After that you can run any kubectl command against the cluster.
A Kind cluster involves two images: a base image and a node image built on top of it.

The Dockerfile of the default base image is located at kind/images/base/Dockerfile; it builds on Ubuntu 19.10. In the base image you can customize containerd's default configuration, the systemd service configuration, and the crictl configuration:

```shell
cd $GOPATH/src/sigs.k8s.io/kind/images/base
docker build -t docker.gmem.cc/kindest/base:latest .
```
You can build your own node image, containing all the control-plane components, from the Kubernetes source:

```shell
# Check out the Kubernetes source
cd $GOPATH/src/k8s.io
git clone https://github.com/kubernetes/kubernetes.git
# If you want to verify a particular PR, you can:
git remote add gmem https://github.com/gmemcc/kubernetes.git
git fetch --all
git checkout ...
```
Building the Kubernetes project requires downloading many images from k8s.gcr.io. One option is to serve k8s.gcr.io locally with an Nginx proxy that forwards requests to a mirror:
```nginx
server {
    listen 443 ssl;
    server_name k8s.gcr.io;

    # Self-signed certificate
    ssl_certificate /etc/nginx/ssl/gcr/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/gcr/privkey.pem;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    proxy_set_header X-Real-IP $remote_addr;
    client_max_body_size 0;
    chunked_transfer_encoding on;

    # URL rewrite and forwarding rules
    location /v2/ {
        rewrite /v2/(.*) /v2/google_containers/$1 break;
        proxy_pass https://gcr.azk8s.cn;
        proxy_set_header Host gcr.azk8s.cn;
    }
}
```
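The effect of the rewrite rule can be sketched outside Nginx: a pull of k8s.gcr.io/kube-apiserver becomes a request for google_containers/kube-apiserver on the mirror. The sample path below is illustrative:

```shell
# Sketch of the rewrite rule above: /v2/(.*) -> /v2/google_containers/$1
path="/v2/kube-apiserver/manifests/v1.15.3"
rewritten=$(echo "$path" | sed -E 's|^/v2/(.*)$|/v2/google_containers/\1|')
echo "$rewritten"   # /v2/google_containers/kube-apiserver/manifests/v1.15.3
```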
Run the following command to build the node image:

```shell
kind build node-image --base-image=docker.gmem.cc/kindest/base:latest \
                      --image=docker.gmem.cc/kindest/node:latest
```
When creating a cluster, you can pass a configuration file: `kind create cluster --config kind-example-config.yaml`. This section shows a few use cases.
A cluster with one control-plane node and two workers:

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
```
An HA cluster with three control-plane nodes and three workers:

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
```
Container ports can be mapped to host ports, to support NodePort-style access from the host:

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    # defaults to 0.0.0.0
    listenAddress: "0.0.0.0"
    # defaults to tcp
    protocol: udp
```
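Note that a NodePort Service listens on a port in the 30000-32767 range on the node, so to reach such a Service from the host, the mapping has to target the Service's nodePort rather than the Service port. A sketch, with illustrative port numbers:

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
  extraPortMappings:
  # Matches a Service declared with nodePort: 30080;
  # the Service is then reachable at localhost:8080 on the host
  - containerPort: 30080
    hostPort: 8080
```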
Kubeadm configuration can be embedded directly:

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    # Customize the DNS domain suffix
    dnsDomain: kind.gmem.cc
  apiServer:
    extraArgs:
      # Enable the insecure (unauthenticated) port
      insecure-bind-address: 0.0.0.0
      insecure-port: "6444"
    # Extra SANs for the API server certificate
    certSANs:
    - kind.gmem.cc
    - 172.17.0.2
    - 172.17.0.3
    - 172.17.0.4
    - 127.0.0.1
    - localhost
    - kubernetes
    - kubernetes.default
    - kubernetes.default.svc
    - kubernetes.default.svc.kind.gmem.cc
    - 10.96.0.1
```
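To confirm which SANs ended up in a certificate, you can inspect it with openssl. The self-signed certificate generated below is only a stand-in carrying some of the SANs above; on a real cluster you would inspect /etc/kubernetes/pki/apiserver.crt inside the control-plane container:

```shell
# Generate a throwaway certificate with a subjectAltName extension,
# then print that extension (requires OpenSSL 1.1.1+ for -addext/-ext)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kind.gmem.cc,DNS:localhost,IP:127.0.0.1"
openssl x509 -in /tmp/apiserver.crt -noout -ext subjectAltName
```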
You can also patch just a subset of the kubeadm configuration fields, for example to enable a feature gate on every component:

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
# Patch a subset of the kubeadm configuration fields
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "feature-gates": "FeatureGateName=true"
  scheduler:
    extraArgs:
      "feature-gates": "FeatureGateName=true"
  controllerManager:
    extraArgs:
      "feature-gates": "FeatureGateName=true"
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: InitConfiguration
  metadata:
    name: config
  nodeRegistration:
    kubeletExtraArgs:
      "feature-gates": "FeatureGateName=true"
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
```
| Subcommand | Description |
| ---------- | ----------- |
| build base-image | Builds the node base image. `--image string`: name:tag of the resulting image, default `kindest/base:latest` |
| build node-image | Builds the node image. `--base-image`: which base image to build on |
| completion bash | Bash completion |
| create cluster | Creates a Kubernetes cluster whose "nodes" are Docker containers. `--image string`: node image used to boot the cluster |
| delete cluster | Deletes a cluster. `--name string`: cluster name, default `kind` |
| export logs | Exports logs to the given directory, or a temp directory by default: `kind export logs ~/Downloads`. `--name string`: cluster name, default `kind` |
| get clusters | Lists the kind clusters that have been created |
| get kubeconfig | Prints a cluster's kubeconfig. `--name string`: cluster name, default `kind` |
| get kubeconfig-path | Prints the default path of a cluster's kubeconfig file |
| load docker-image | Loads an image from the host into the nodes: `kind load docker-image name:tag`. `--name string`: cluster context name, default `kind` |
| load image-archive | Loads an image archive into the nodes. `--name string`: cluster context name, default `kind` |
When creating a Kind cluster on Ubuntu with a 3.13.0 kernel, the control plane never becomes ready.

Kind runs containerd inside each node; its log shows:

```
modprobe: FATAL: Module overlay not found in directory /lib/modules/3.13.0-83-generic
```

The kubelet log shows:

```
createPodSandbox for pod "kube-scheduler-kind-control-plane_kube-system(7d5d3c0a6786e517a8973fa06754cb75)" failed: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: failed to mount rootfs component &{overlay overlay [workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/8/work upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/8/fs lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: no such device: unknown
```
The reason is that the 3.13 kernel does not support overlayfs, while Kind configures containerd to use the overlayfs-based snapshotter:

```toml
disabled_plugins = ["aufs", "btrfs", "zfs"]
[plugins]
  [plugins.cri]
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
```
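Before patching anything, you can check whether the running kernel provides overlayfs at all (a sketch; the kernel module is named `overlay`):

```shell
# Print whether overlayfs is usable on this host: either it is already
# registered in /proc/filesystems, or the overlay module can be loaded
if grep -qw overlay /proc/filesystems || modprobe -n overlay 2>/dev/null; then
  echo "overlayfs available"
else
  echo "overlayfs missing"
fi
```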
We need to enable the aufs snapshotter plugin instead:

```toml
disabled_plugins = ["btrfs", "zfs"]
[plugins]
  [plugins.cri]
    [plugins.cri.containerd]
      snapshotter = "aufs"
```
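The edit boils down to two textual substitutions, sketched here on a scratch copy of the config (inside the node image the file's actual path depends on the image, so verify it there before editing):

```shell
# Write the original config to a scratch file, then switch the snapshotter
# from overlayfs to aufs and drop aufs from disabled_plugins
cat > /tmp/config.toml <<'EOF'
disabled_plugins = ["aufs", "btrfs", "zfs"]
[plugins]
  [plugins.cri]
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
EOF
sed -i -e 's/snapshotter = "overlayfs"/snapshotter = "aufs"/' \
       -e 's/"aufs", //' /tmp/config.toml
cat /tmp/config.toml
```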
The most convenient way to apply this configuration:

```shell
docker inspect kindest/node:v1.15.3 | grep Entrypoint -A 5
# Note down the Entrypoint:
#     "Entrypoint": [
#         "/usr/local/bin/entrypoint",
#         "/sbin/init"
#     ],
docker run -it --name kindmod --entrypoint bash kindest/node:v1.15.3
# Edit the configuration file...
# Commit the container as a new image, restoring the original entrypoint
docker commit --change 'ENTRYPOINT [ "/usr/local/bin/entrypoint", "/sbin/init"]' \
    kindmod docker.gmem.cc/kindest/node:v1.15.3
# Create the cluster from the patched image
kind create cluster --image=docker.gmem.cc/kindest/node:v1.15.3
```
Use a customized base image and place the certificates under the /etc/kubernetes/pki directory.
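A minimal sketch of such a customization; the base image tag and certificate filename here are illustrative:

```dockerfile
FROM kindest/base:latest
# Ship a pre-generated certificate in every node built from this base image
COPY ca.crt /etc/kubernetes/pki/ca.crt
COPY ca.key /etc/kubernetes/pki/ca.key
```

A node image built with `kind build node-image --base-image` on top of this base then carries the certificates, and kubeadm will reuse an existing CA under /etc/kubernetes/pki instead of generating one.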