Myluzh Blog

Deploying a Highly Available K8s Cluster with RKE2

Published: 2025-3-4  Author: myluzh  Category: Kubernetes


0x01 Introduction
Deploying a k8s cluster with rke2. The hosts run Ubuntu 24.04 LTS; the Ansible inventory below lists three master nodes and two worker nodes (the walkthrough that follows joins two of the masters plus both workers):
RKE2 must be installed as the root user or via sudo. https://docs.rke2.io/zh/install/quickstart
# inventory
[all:vars]
ansible_user=root
ansible_password='Testtest01.'
ansible_python_interpreter=/usr/bin/python3.12

[k8s_master]
k8s-master01 ansible_host=172.17.100.181
k8s-master02 ansible_host=172.17.100.182
k8s-master03 ansible_host=172.17.100.183

[k8s_worker]
k8s-worker01 ansible_host=172.17.100.186
k8s-worker02 ansible_host=172.17.100.187

[k8s:children]
k8s_master
k8s_worker
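The inventory can also be reused from plain shell scripts. As a sketch over the inventory format above (using a temp copy here for illustration), an awk one-liner pulls out the hostname/IP pairs:

```shell
#!/bin/sh
# Extract "hostname IP" pairs from an Ansible INI inventory by splitting
# each ansible_host line on spaces and '='.
inv=$(mktemp)
cat > "$inv" << 'EOF'
[k8s_master]
k8s-master01 ansible_host=172.17.100.181
k8s-master02 ansible_host=172.17.100.182
[k8s_worker]
k8s-worker01 ansible_host=172.17.100.186
EOF
awk -F'[ =]' '/ansible_host/ { print $1, $3 }' "$inv"
```

The same pairs can feed the /etc/hosts step below or a loop over all nodes; connectivity itself is easiest to verify with `ansible -i inventory k8s -m ping`.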

0x02 Host Initialization

1. Add /etc/hosts entries (note that /etc/hosts expects the IP address first, then the hostname)
sudo tee -a /etc/hosts > /dev/null << EOF
172.17.100.181 k8s-master01
172.17.100.182 k8s-master02
172.17.100.183 k8s-master03
172.17.100.186 k8s-worker01
172.17.100.187 k8s-worker02
EOF
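The block can also be generated from a single name/IP list, which avoids transposing entries by hand (remember /etc/hosts wants the IP first). A sketch writing to a temporary file so it is safe to try — in practice pipe the output into `sudo tee -a /etc/hosts`:

```shell
#!/bin/sh
# Cluster name/IP pairs; one pair per line.
hosts="k8s-master01 172.17.100.181
k8s-master02 172.17.100.182
k8s-master03 172.17.100.183
k8s-worker01 172.17.100.186
k8s-worker02 172.17.100.187"

out=$(mktemp)   # demo target; use /etc/hosts (via sudo tee -a) for real
echo "$hosts" | while read -r name ip; do
  printf '%s\t%s\n' "$ip" "$name" >> "$out"   # /etc/hosts order: IP first
done
cat "$out"
```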
2. Load kernel modules, enable IP forwarding and bridge filtering
# Kernel modules to load automatically at boot
sudo tee /etc/modules-load.d/k8s.conf > /dev/null << EOF
overlay
br_netfilter
EOF
# Load the modules immediately
modprobe overlay
modprobe br_netfilter
# Verify the modules are loaded
lsmod | grep -E 'overlay|br_netfilter'
# Let bridged traffic be filtered by iptables/ip6tables, and enable IPv4 packet forwarding
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the kernel parameters
sysctl --system
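After `sysctl --system`, the effective values can be read straight from /proc for a quick sanity check (the two bridge keys only appear once br_netfilter is loaded):

```shell
#!/bin/sh
# Print the three kernel settings so you can confirm they took effect.
for key in net/ipv4/ip_forward \
           net/bridge/bridge-nf-call-iptables \
           net/bridge/bridge-nf-call-ip6tables; do
  f="/proc/sys/$key"
  if [ -r "$f" ]; then
    printf '%s = %s\n' "$key" "$(cat "$f")"
  else
    echo "$key: not present (is br_netfilter loaded?)"
  fi
done
```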
3. Install ipset and ipvsadm
# Install ipset and ipvsadm
apt install -y ipset ipvsadm
# Configure the ipvs modules to load at boot
cat << EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load the ipvs modules immediately
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
# Verify the ipvs modules are loaded
lsmod | grep ip_vs

0x03 Installing the K8s Cluster with RKE2

Download RKE2 on all hosts.
Official docs: https://docs.rke2.io/zh/install/configuration
curl -sfL https://get.rke2.io | sh -
# Users in China can speed up the install with the mirror:
curl -sfL https://rancher-mirror.rancher.cn/rke2/install.sh | INSTALL_RKE2_MIRROR=cn sh -
Note: On startup, RKE2 reads its configuration values from /etc/rancher/rke2/config.yaml. This file must be created manually and must contain a token field whose value exactly matches the token defined in the master node's RKE2 config file. Every other node must present the same token when joining, so that it is accepted as part of the cluster.
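The walkthrough below uses a fixed, human-readable token; any sufficiently random shared secret works just as well. A sketch that generates one (assuming openssl is installed):

```shell
#!/bin/sh
# Generate a random shared secret to use as the token field in config.yaml.
TOKEN=$(openssl rand -hex 16)
echo "token: $TOKEN"   # paste this line into config.yaml on every node
```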

1. k8s-master01: create a k8s cluster with rke2-server
mkdir -p /etc/rancher/rke2/
# Add the master01 RKE2 config file
# vi /etc/rancher/rke2/config.yaml
token: myluzh_rke2_k8s_cluser
node-name: k8s-master01
tls-san: 172.17.100.181
system-default-registry: "registry.cn-hangzhou.aliyuncs.com"
kube-proxy-arg:
  - --proxy-mode=ipvs
  - --ipvs-strict-arp=true

# Start rke2-server; the k8s cluster install completes after a few minutes
systemctl enable --now rke2-server.service
journalctl -u rke2-server -f

## The cluster's KUBECONFIG file lives here
cat /etc/rancher/rke2/rke2.yaml 
## This directory holds the binaries used to start containers, etc.
ls /var/lib/rancher/rke2/bin/
containerd  containerd-shim-runc-v2  crictl  ctr  kubectl  kubelet  runc

# Permanently add the RKE2 binaries to PATH (single quotes so $PATH expands at login, not at write time)
echo 'export PATH=$PATH:/var/lib/rancher/rke2/bin' >> ~/.bashrc
# Point KUBECONFIG at the RKE2 kubeconfig
echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml" >> ~/.bashrc
# Apply immediately
source ~/.bashrc
# On master01, check the nodes with kubectl
kubectl get node
NAME           STATUS   ROLES                       AGE   VERSION
k8s-master01   Ready    control-plane,etcd,master   13m   v1.31.6+rke2r1
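Each `echo … >> ~/.bashrc` above appends unconditionally, so re-running the steps duplicates lines. A small guard keeps the append idempotent — a sketch using a temp file so it can be tried safely (point `rc` at ~/.bashrc in practice):

```shell
#!/bin/sh
# Append a line to a file only if it is not already present.
add_line() {
  line="$1"; file="$2"
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

rc=$(mktemp)   # stand-in for ~/.bashrc in this demo
add_line 'export PATH=$PATH:/var/lib/rancher/rke2/bin' "$rc"
add_line 'export PATH=$PATH:/var/lib/rancher/rke2/bin' "$rc"   # second call is a no-op
grep -c 'rke2/bin' "$rc"   # prints 1
```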
2. k8s-master02: join the cluster with rke2-server
Add the config file; the server parameter is the first management node's IP, over HTTPS.
mkdir -p /etc/rancher/rke2/
# On master02: vi /etc/rancher/rke2/config.yaml
server: https://172.17.100.181:9345
token: myluzh_rke2_k8s_cluser
node-name: k8s-master02
tls-san: 172.17.100.182
system-default-registry: "registry.cn-hangzhou.aliyuncs.com"
kube-proxy-arg:
  - --proxy-mode=ipvs
  - --ipvs-strict-arp=true

# Start rke2-server; it joins the cluster automatically via the server address in the config file
systemctl enable --now rke2-server.service
journalctl -u rke2-server -f

# As on master01, add the environment variables and kubeconfig
echo 'export PATH=$PATH:/var/lib/rancher/rke2/bin' >> ~/.bashrc
echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml" >> ~/.bashrc
source ~/.bashrc
# Now kubectl get node on master02 shows both masters
kubectl get node
NAME           STATUS   ROLES                       AGE     VERSION
k8s-master01   Ready    control-plane,etcd,master   51m     v1.31.6+rke2r1
k8s-master02   Ready    control-plane,etcd,master   16m     v1.31.6+rke2r1
3. k8s-worker01: join the cluster with rke2-agent
Add the worker node to the cluster. The config file format is the same; the difference is that the service started is rke2-agent. (tls-san is a server-side option and has no effect on agent nodes.)
mkdir -p /etc/rancher/rke2/
#worker01 vi /etc/rancher/rke2/config.yaml
server: https://172.17.100.181:9345
token: myluzh_rke2_k8s_cluser
node-name: k8s-worker01
tls-san: 172.17.100.186
system-default-registry: "registry.cn-hangzhou.aliyuncs.com"
kube-proxy-arg:
  - --proxy-mode=ipvs
  - --ipvs-strict-arp=true
# Join the worker to the cluster. Note: the service started is rke2-agent
systemctl enable --now rke2-agent
journalctl -u rke2-agent -f
4. k8s-worker02: join the cluster with rke2-agent
mkdir -p /etc/rancher/rke2/
#worker02 vi /etc/rancher/rke2/config.yaml
server: https://172.17.100.181:9345
token: myluzh_rke2_k8s_cluser
node-name: k8s-worker02
tls-san: 172.17.100.187
system-default-registry: "registry.cn-hangzhou.aliyuncs.com"
kube-proxy-arg:
  - --proxy-mode=ipvs
  - --ipvs-strict-arp=true
# Join the worker to the cluster
systemctl enable --now rke2-agent
journalctl -u rke2-agent -f
5. Once all nodes have joined, check the cluster status
root@k8s-master01:~# kubectl get node -o wide
NAME           STATUS   ROLES                       AGE   VERSION          INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-master01   Ready    control-plane,etcd,master   63m   v1.31.6+rke2r1   172.17.100.181   <none>        Ubuntu 24.04.2 LTS   6.8.0-54-generic   containerd://2.0.2-k3s2
k8s-master02   Ready    control-plane,etcd,master   28m   v1.31.6+rke2r1   172.17.100.182   <none>        Ubuntu 24.04.2 LTS   6.8.0-54-generic   containerd://2.0.2-k3s2
k8s-worker01   Ready    <none>                      18m   v1.31.6+rke2r1   172.17.100.186   <none>        Ubuntu 24.04.2 LTS   6.8.0-54-generic   containerd://2.0.2-k3s2
k8s-worker02   Ready    <none>                      18m   v1.31.6+rke2r1   172.17.100.187   <none>        Ubuntu 24.04.2 LTS   6.8.0-54-generic   containerd://2.0.2-k3s2
6. containerd client commands
RKE2 ships with ctr and crictl. The containerd socket is at /run/k3s/containerd/containerd.sock. https://docs.rke2.io/zh/reference/cli_tools
## ctr ##
/var/lib/rancher/rke2/bin/ctr --address /run/k3s/containerd/containerd.sock --namespace k8s.io containers list
# Make it permanent with an alias
echo 'alias ctr="/var/lib/rancher/rke2/bin/ctr --address /run/k3s/containerd/containerd.sock --namespace k8s.io"' >> ~/.bashrc
source ~/.bashrc

## crictl ##
export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml
/var/lib/rancher/rke2/bin/crictl ps
/var/lib/rancher/rke2/bin/crictl images
# Or set CRI_CONFIG_FILE permanently and reload
echo 'export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml' >> ~/.bashrc
source ~/.bashrc
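Both tools fail with a confusing connection error when rke2 is not running on the node; a quick existence check on the socket (path as documented above) tells you which case you are in:

```shell
#!/bin/sh
# Verify the containerd socket RKE2 uses before pointing ctr/crictl at it.
sock=/run/k3s/containerd/containerd.sock
if [ -S "$sock" ]; then
  echo "containerd socket present: $sock"
else
  echo "no containerd socket at $sock (is rke2 running on this node?)"
fi
```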
7. kubectl completion (install on the master nodes)
# Install bash-completion so kubectl commands can be tab-completed
apt install -y bash-completion
# Enable kubectl completion by loading its completion script
echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
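Many operators also shorten `kubectl` to `k` and reuse its completion. A sketch that appends the alias plus the completion hookup — written to a temp file here for illustration; point it at ~/.bashrc in practice:

```shell
#!/bin/sh
rc=$(mktemp)   # stand-in for ~/.bashrc in this demo
cat >> "$rc" << 'EOF'
alias k=kubectl
complete -o default -F __start_kubectl k   # reuse kubectl's completion for k
EOF
cat "$rc"
```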


Tags: kubernetes rancher rke2