Installing a Kubernetes 1.29 Cluster on Ubuntu 22
This article documents the full process of installing a Kubernetes 1.29 cluster on Ubuntu 22.04, following the official documentation throughout.
Installing the Container Runtime
Kubernetes deprecated its Docker integration in 1.20 and removed it entirely in 1.24 and later; the container runtime used here is containerd, a standard OCI-compliant runtime. Although the Docker integration was removed, containerd itself (originally donated by Docker to the CNCF) is still packaged as containerd.io in Docker's official apt repository, so the latest version can be installed from there. Note: if you need a containerd version matched to a specific Kubernetes release, download the corresponding containerd release from GitHub instead.
# Install base dependencies
apt-get update
apt-get install -y ca-certificates curl
# Add Docker's official GPG key
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
# Add the Docker stable repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install the containerd runtime
apt-get update
apt-get install -y containerd.io cri-tools
Installing the Kubernetes Core Components
A Kubernetes cluster needs the following three core components:
- kubeadm: the cluster bootstrap tool, responsible for deploying and initializing cluster components
- kubelet: the node agent, which communicates with the control plane and manages the Pods on its node
- kubectl: the command-line tool for interacting with the Kubernetes API
# Install base dependencies
apt-get update && apt-get install -y apt-transport-https
# Configure the Aliyun Kubernetes repository
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/deb/Release.key | \
gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] \
https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/deb/ /" | \
tee /etc/apt/sources.list.d/kubernetes.list
# Install the Kubernetes components
apt-get update
apt-get install -y --allow-downgrades kubelet kubeadm kubectl
Verify the installation
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.2", ...}
System Kernel Configuration
Disable the swap partition
# Disable swap for the current session
swapoff -a
# Disable it permanently (comment out the swap entries in fstab)
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
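The sed expression can be tried on a scratch copy first. The fstab entries below are made-up examples, not taken from a real system:

```shell
# Dry-run the swap-commenting sed on a scratch file before touching /etc/fstab.
cat > /tmp/fstab.demo <<'EOF'
UUID=1111-2222 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo
# Only the swap line is commented out:
#   UUID=1111-2222 / ext4 defaults 0 1
#   #/swap.img none swap sw 0 0
```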
Load kernel modules
# Create the module configuration file
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Load the modules immediately
modprobe overlay
modprobe br_netfilter
Configure network parameters
# Create the network tuning configuration file
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the settings
sysctl --system
Installing the Kubernetes Cluster
Generate the default kubeadm configuration
kubeadm config print init-defaults > kubeadm.conf
Modify the following five fields:
- localAPIEndpoint.advertiseAddress: the master's IP address
- nodeRegistration.name: the name of the current node
- imageRepository: a domestic mirror, registry.cn-hangzhou.aliyuncs.com/google_containers
- networking.podSubnet: the Pod network CIDR. It must be 192.168.0.0/16 here, because the Calico manifests applied below hard-code that subnet; adjust it yourself if you install a different CNI plugin
- kubernetesVersion: v1.29.2 (the installed kubeadm reports 1.29.2, while the generated default is 1.29.0)
The modified configuration is as follows:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.28.14.94 # the IP of this host
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: ser
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # changed to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.29.2 # changed to match the installed kubeadm
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 192.168.0.0/16 # fixed subnet required by the Calico CNI plugin
scheduler: {}
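Before moving on, a quick grep can confirm the edited fields are in place. This is only a sketch: a minimal stand-in file is created here so the snippet runs on its own; point the grep at your real kubeadm.conf instead.

```shell
# Hypothetical sanity check: list the five edited fields.
# The stand-in file below mirrors the values used in this article.
cat > /tmp/kubeadm.conf <<'EOF'
advertiseAddress: 172.28.14.94
name: ser
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.29.2
podSubnet: 192.168.0.0/16
EOF
grep -E 'advertiseAddress|name:|imageRepository|kubernetesVersion|podSubnet' /tmp/kubeadm.conf
```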
I pull the images first, taking it one step at a time:
kubeadm config images pull --config=kubeadm.conf
This failed with the following error:
kubeadm config images pull --config=kubeadm.conf
W0323 23:12:47.468488 54593 initconfiguration.go:312] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta3", Kind:"ClusterConfiguration"}: strict decoding error: unknown field "networking.pod-network-cidr" failed to pull image "registry.lank8s.cn/kube-apiserver:v1.29.0": output: time="2024-03-23T23:12:47+08:00" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService" , error: exit status 1 To see the stack trace of this error execute with --v=5 or higher
This happens because crictl is not configured to use containerd as its CRI endpoint; the fix below was found on GitHub.
vim /etc/crictl.yaml
Add the following content:
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 2
debug: true
pull-image-on-create: false
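The same file can be written non-interactively with a heredoc instead of vim. As a sketch, it is staged in the current directory first and then moved into place:

```shell
# Stage the crictl configuration non-interactively.
cat > crictl.yaml <<'EOF'
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 2
debug: true
pull-image-on-create: false
EOF
# Then move it into place:
# install -m 0644 crictl.yaml /etc/crictl.yaml
grep -c 'containerd.sock' crictl.yaml
# prints 2
```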
Configuring the containerd Runtime
# Generate the default configuration file
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml >/dev/null
# Enable the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# Configure a registry mirror for docker.io (USTC mirror)
sed -i '/\[plugins."io.containerd.grpc.v1.cri".registry.mirrors\]/a\ [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n endpoint = ["https://docker.mirrors.ustc.edu.cn"]' /etc/containerd/config.toml
# Change the pause image address
sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"|g' /etc/containerd/config.toml
# Apply the configuration
systemctl restart containerd
systemctl enable containerd
Verify the configuration:
ctr version && systemctl status containerd
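The two value-rewriting sed edits can also be dry-run on a scratch file before they are applied to the real /etc/containerd/config.toml. The stand-in lines below only mimic the relevant entries of a generated config.toml:

```shell
# Dry-run the SystemdCgroup and sandbox_image edits on a stand-in file.
cat > /tmp/config.toml.demo <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
            SystemdCgroup = false
EOF
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /tmp/config.toml.demo
sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"|g' /tmp/config.toml.demo
grep -E 'SystemdCgroup|sandbox_image' /tmp/config.toml.demo
```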
Next, configure a registry mirror for containerd itself; otherwise pulls from the default docker.io registry are very slow. Create the directory /etc/containerd/certs.d/docker.io/ with the structure shown below; the docker.io directory name stands for the default official Docker registry:
/etc/containerd/
├── certs.d
│   └── docker.io
│       └── hosts.toml
└── config.toml
Edit hosts.toml (note: containerd expects the file to be named hosts.toml, not host.toml):
vim /etc/containerd/certs.d/docker.io/hosts.toml
Add the following content:
[host."https://xxxxx.mirror.aliyuncs.com"] # your registry mirror address
capabilities = ["pull", "resolve", "push"]
skip_verify = true
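As a sketch, the file can also declare the upstream server explicitly, so containerd falls back to Docker Hub if the mirror fails; the mirror URL below is a placeholder:

```shell
# Write a hosts.toml that names the upstream registry as a fallback.
# Staged under /tmp here; copy it to /etc/containerd/certs.d/docker.io/hosts.toml.
mkdir -p /tmp/certs.d/docker.io
cat > /tmp/certs.d/docker.io/hosts.toml <<'EOF'
server = "https://registry-1.docker.io"

[host."https://xxxxx.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF
grep -c 'capabilities' /tmp/certs.d/docker.io/hosts.toml
# prints 1
```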
Other options are documented on GitHub. Restart containerd for the configuration to take effect:
systemctl restart containerd
Re-run the pull command; output like the following means the images were pulled successfully:
kubeadm config images pull --config=kubeadm.conf
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.29.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.29.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.29.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.29.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.11.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.10-0
Run the initialization command
kubeadm init --config=kubeadm.conf
Seeing the message below means the cluster bootstrapped successfully:
.... (output omitted)
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
As the installation output suggests, run the following commands:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Run the following command to verify that Kubernetes is working:
kubectl get node
If, like me, your node's status is not Ready, something is wrong:
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 5m14s v1.29.3
First check whether kubelet is running properly:
systemctl status kubelet
The log shows repeated errors:
Mar 24 00:09:59 master kubelet[59351]: E0324 00:09:59.243528 59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
Mar 24 00:10:04 master kubelet[59351]: E0324 00:10:04.244612 59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
Mar 24 00:10:09 master kubelet[59351]: E0324 00:10:09.246425 59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
...
This is caused by the CNI plugin not being installed yet.
Deploying the Calico Network Plugin
# Apply the Calico manifests directly from GitHub
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml
# Watch the rollout in real time (press Ctrl+C to stop watching)
kubectl get pods -n calico-system -w
Once all the Pods reach Running status, the installation succeeded:
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5d6f6d4f7c-2hqzv 1/1 Running 0 2m
calico-node-8jrzf 1/1 Running 0 2m
calico-typha-78f4459ccc-xgh4h 1/1 Running 0 2m
Verify the cluster status:
kubectl get nodes
For a single-node cluster, remove the scheduling restriction on the master node:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Troubleshooting Guide
- Check the kubelet service status:
systemctl status kubelet --no-pager -l
- Follow the logs in real time:
journalctl -u kubelet.service -f
- View container logs (replace <pod_id> with the actual container ID):
crictl logs <pod_id> --tail 50 -f
- Exec into a container for inspection (replace <container_id> with the actual container ID):
crictl exec -it <container_id> /bin/bash