A Deep Dive into the Provisioning Flow of China Mobile Cloud's KCS Container Service
Posted by myluzh · Kubernetes
0x00 Foreword
Behind an order for the KCS container service on China Mobile Cloud sits a precise pipeline of automated provisioning and configuration injection. To lift the "black box" feeling of this commercial cloud service, this article digs into the underlying ConfigDrive mechanism and the automated deployment scripts, replaying step by step how a pristine cloud VM is turned into a standard K8s node.
0x01 Placing the Order
In the China Mobile Cloud console, order the KCS container service and choose 3 masters + 2 workers plus the desired instance specs. After creation, the cluster looks like this:
| Node name | Role | Taints | IP address | Public IP |
|---|---|---|---|---|
| kcs-k8s-test-m-tp88s | master | Yes | 192.168.11.139 | 36.134.185.103 |
| kcs-k8s-test-m-jj48s | master | Yes | 192.168.11.38 | None |
| kcs-k8s-test-m-6thhn | master | Yes | 192.168.11.246 | None |
| kcs-k8s-test-s-psfjf | worker | No | 192.168.11.71 | None |
| kcs-k8s-test-s-72lh6 | worker | No | 192.168.11.156 | None |
0x02 Demystifying ConfigDrive: Automated Cloud VM Initialization
When you click "create a K8s cluster" or "add a node" on the cloud console, the platform actually provisions a virtual machine through OpenStack. To turn that pristine VM into a fully configured K8s node after boot, the platform relies on the ConfigDrive mechanism together with the cloud-init tool.
How It Works: Configuration Injection via a Virtual CD-ROM (/dev/sr0)
/mnt/cdrom is in fact a virtual CD-ROM drive. The cloud platform packs the node's initialization data into an ISO image and attaches it to the VM. During OS boot, the cloud-init service inside the VM automatically reads the files on this drive and performs the automated provisioning.
[root@kcs-k8s-test-m-tp88s /]# df -h | grep sr0
/dev/sr0 666K 666K 0 100% /mnt/cdrom
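cloud-init finds this drive by filesystem label rather than by device path: OpenStack labels the ConfigDrive ISO `config-2`. A minimal sketch of locating and mounting it by hand (the label is standard OpenStack behavior; the mountpoint and fallback message are illustrative assumptions):

```shell
# Locate a ConfigDrive the way cloud-init does: look for a block
# device whose filesystem label is "config-2", then mount it
# read-only. On a machine without one, nothing is mounted.
dev=$(blkid -L config-2 2>/dev/null || true)
if [ -n "$dev" ]; then
    mkdir -p /mnt/cdrom
    mount -o ro "$dev" /mnt/cdrom
    ls /mnt/cdrom/openstack/latest
else
    echo "no ConfigDrive present"
fi
```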
# Full ConfigDrive layout
[root@kcs-k8s-test-m-tp88s /mnt/cdrom]# cd /mnt/cdrom && tree
/mnt/cdrom/
├── openstack/
│ ├── 2012-08-10/ # legacy format (kept for compatibility)
│ ├── 2013-04-04/
│ ├── 2013-10-17/
│ ├── 2015-10-15/
│ ├── 2016-06-30/
│ ├── 2016-10-06/
│ ├── 2017-02-22/ # ← the version actually used
│ │ ├── meta_data.json # VM metadata
│ │ ├── network_data.json # network configuration
│ │ ├── user_data # core install script
│ │ ├── vendor_data.json
│ │ └── vendor_data2.json
│ └── latest/ # symlink to the newest version
└── (other version directories)
A Closer Look at the Key Files: Metadata, Network Topology, and the User Script
meta_data.json (metadata):
Defines this VM's identity: its hostname (kcs-k8s-test-m-tp88s), its uuid, and a random_seed used to seed the operating system's entropy pool.
[root@kcs-k8s-test-m-tp88s /mnt/cdrom/openstack/latest]# cat meta_data.json
{
"random_seed": "ekNPQTUy4qs7343bn34S/VPCaMG028SAT6fqdO+jwgHDTbySM/DVGgwyjqGzeNTR8J/1VMtVoqhlijwXzNDMEwjLV2S6F5xK0WAUkrWOWkfceSLR9OdpMX7iFcoFkYNA1CLsDMjPoCTggF3WJFcpd5uzgEqKiSJ5s6W1uhrZuDnFwW/VFuogtPdjDRlY4/Aeri/yIHxlJUbbW5ckHuYgnezIE1jexEDZyc7vzvKr8os6Ah1Fn+74WLpxnbKjI6BVDfIFU+qQhrIRzx5cZ7BVKL8DUbWK1M4n89nFb542Yr1cQLpcK8n4XNvkgegbjRSqLP08zyw5lCYJnMv5NYHVm6+52lx0bZgLF1l71FbBzVDbAd+zB8uFPd/zUKlyTx+tsriAFyPsa9vVNzJhvrbCXm/d6UaW0OwoRMg2BLnivphK2wKVGltWNvfYAyWdeT7NobgJGGQW+uK+sbyZ6aC8IVZAYjCzffYINGXqVolRfmLm8HwhxGoH603K98+I6nQjRvhQs9i8qut0gVgu+4hHeX90f61NDLqmJ35i3NULl3pObaC7s0BbkT52N7WxtBZVqnsv3Et7pMHEoie+ZXFvyQRYCGusv6TIuyZBuWOTkZdYYi1tMlAodaEwJAA3uKYEfGqtnjyDcCzVNSYruYUiPU5GXn7hBFsYQ9G6aHjh1/c=",
"uuid": "78b66a14-fb70-4fe3-ab2e-74c3b57f5c03",
"hostname": "kcs-k8s-test-m-tp88s",
"launch_index": 0,
"devices": [],
"name": "kcs-k8s-test-m-tp88s"
}
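cloud-init consumes this JSON directly, but the same fields are easy to pull out by hand when poking around a node. A self-contained sketch using only `sed` (the JSON is inlined and trimmed here rather than read from /mnt/cdrom):

```shell
# Extract the hostname from a trimmed copy of meta_data.json,
# the way a quick shell inspection might do it.
meta='{"uuid": "78b66a14-fb70-4fe3-ab2e-74c3b57f5c03", "hostname": "kcs-k8s-test-m-tp88s"}'
hostname=$(printf '%s' "$meta" | sed -n 's/.*"hostname": *"\([^"]*\)".*/\1/p')
echo "$hostname"   # kcs-k8s-test-m-tp88s
```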
network_data.json (network data)
Defines the underlying network topology. cloud-init reads the IPv4 address (192.168.11.139), the IPv6 address, the gateway, the DNS servers (211.136.17.107) and the OVS NIC's MAC address, writes them into the operating system's network configuration files, and brings the interface up so that the machine has connectivity.
[root@kcs-k8s-test-m-tp88s /mnt/cdrom/openstack/latest]# cat network_data.json
{
"services": [{
"type": "dns",
"address": "211.136.17.107"
}, {
"type": "dns",
"address": "211.136.20.203"
}],
"networks": [{
"network_id": "213e8096-d2ba-4dd6-b1d7-82e549507be9",
"type": "ipv4",
"services": [{
"type": "dns",
"address": "211.136.17.107"
}, {
"type": "dns",
"address": "211.136.20.203"
}],
"netmask": "255.255.255.0",
"link": "tap7399fc12-09",
"routes": [{
"netmask": "0.0.0.0",
"network": "0.0.0.0",
"gateway": "192.168.11.1"
}],
"ip_address": "192.168.11.139",
"id": "network0"
}, {
"network_id": "213e8096-d2ba-4dd6-b1d7-82e549507be9",
"type": "ipv6_dhcpv6-stateful",
"services": [],
"netmask": "ffff:ffff:ffff:ffff::",
"link": "tap7399fc12-09",
"routes": [{
"netmask": "::",
"network": "::",
"gateway": "2409:8c2f:3800:5795::1"
}],
"ip_address": "2409:8c2f:3800:5795::35a",
"id": "network1"
}],
"links": [{
"ethernet_mac_address": "fa:16:3e:ce:43:86",
"mtu": 1600,
"type": "ovs",
"id": "tap7399fc12-09",
"vif_id": "7399fc12-09ec-49ec-84c1-053df9d05833"
}]
}
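On RHEL-family guest images, cloud-init renders an entry like `network0` above into an ifcfg-style file. A rough hand-written approximation of that mapping (cloud-init's real renderer handles many more cases, and the /tmp path is chosen only so the sketch is self-contained):

```shell
# Approximate the ifcfg file cloud-init would render from the
# "network0" entry: static IPv4, default route, and the two DNS
# servers from the "services" list.
cfg=/tmp/ifcfg-eth0
cat > "$cfg" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.11.139
NETMASK=255.255.255.0
GATEWAY=192.168.11.1
DNS1=211.136.17.107
DNS2=211.136.20.203
ONBOOT=yes
EOF
grep IPADDR "$cfg"
```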
user_data (the core install script)
This file is in the standard cloud-config format and breaks down into three main stages:
Stage A: system access settings (disable_root, ssh_pwauth)
Enables password login for the root user, so that later automation scripts or operators can get in.
Stage B: file injection (write_files)
The cloud platform's control plane pre-generates the core files the cluster needs, base64-encodes them, and packs them into this list. cloud-init decodes each entry and writes it to the specified path on the machine.
1. The Kubernetes PKI certificate hierarchy (/etc/kubernetes/pki/...):
To keep multiple master nodes highly available (HA), they must share the same set of CA certificates and ServiceAccount keys. The platform injects the control-plane-generated ca.crt/key, etcd/ca.crt/key, front-proxy-ca.crt/key and sa.pub/key straight into the node, bypassing the step where kubeadm init would otherwise generate its own certificates.
2. The bootstrap script (/usr/local/bin/deploycluster.sh):
A quick look at the base64-decoded content shows that it mainly defines a download function that fetches a core deployment toolkit named ecloud-k8s-script-v1.6.5.tar.gz from an internal file server (FILE_SERVER) and unpacks it.
3. The kubeadm configuration file (/etc/kubeadm/kubeadm.cfg):
This is the configuration file kubeadm uses to initialize or join the cluster. It is packed with deep customizations specific to this commercial K8s distribution, for example:
A dedicated image registry: cis-hub-huadong-7.cmecloud.cn/ecloud
A customized K8s version: v1.29.5-eki.4.1.0
Network and component parameters: IPVS mode enabled, a dedicated audit log path mounted, the APIServer HA VIP configured, and so on.
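The decode-and-write step behind all of these injections is trivial to reproduce. A self-contained sketch of what cloud-init does for one write_files entry (the content and target path here are made up for illustration):

```shell
# Reproduce one write_files entry: base64-decode the content and
# write it to the target path with the requested permissions.
content='SGVsbG8sIEtDUwo='        # base64 for "Hello, KCS\n"
path=/tmp/demo-write-file         # stand-in for e.g. /etc/kubernetes/pki/ca.crt
printf '%s' "$content" | base64 -d > "$path"
chmod 0640 "$path"
cat "$path"                       # Hello, KCS
```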
Stage C: final execution (runcmd)
This is the last step at boot; the following commands run in order:
echo 'root:...' | chpasswd -e: sets the root user's password.
deploycluster.sh --file-server 10.195.207.205:32092: runs the just-injected download script to fetch and unpack the Kubernetes deployment toolkit from the internal network.
kuberun.sh ...: the final install command. It takes a large number of parameters (node role deploy-masters, network mode calico, container runtime containerd, instance spec c5.2xlarge.2, etc.). It invokes the system's kubeadm and, combined with the previously injected certificates and kubeadm.cfg, formally initializes this machine as a K8s master node.
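The `chpasswd -e` form deserves a note: `-e` means the string after `root:` is already an encrypted hash (the `$6$` prefix marks SHA-512 crypt), so the cleartext password never appears in user_data. A sketch of producing such a hash (salt and password are made up; `openssl passwd -6` needs OpenSSL ≥ 1.1.1):

```shell
# Generate a SHA-512 crypt hash like the one fed to `chpasswd -e`.
# Salt and password here are purely illustrative.
hash=$(openssl passwd -6 -salt examplesalt 'demo-password')
case $hash in
    '$6$'*) echo "SHA-512 crypt hash: $hash" ;;
    *)      echo "unexpected hash format" ;;
esac
# Applying it would need root, e.g.:
# echo "root:$hash" | chpasswd -e
```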
[root@kcs-k8s-test-m-tp88s /mnt/cdrom/openstack/latest]# cat user_data
#cloud-config
disable_root: false
ssh_pwauth: True
write_files:
- path: /etc/kubernetes/pki/ca.crt
encoding: base64
owner: root:root
permissions: '0640'
content: |
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM3RENDQWRTZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJMk1ETXlOVEF5TlRVek5Wb1lEekl4TWpZd016QXlNREkxTlRNMVdqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCnljVXhSbk1DUVRGclFwRjBQejJaUUpqU1RlcFMrM2dKQVhPTExXajArRUNxTll3ZTJ1cWdtVjB1RFR5a3AvbFUKN2dWWnhNeTVUTTg2MDE0djFKRXd5dEVJeCtRTkp3NUV6Q1R4cFB1QVhETHVCNCtpUjd2RVEyalVvUEVrajE1RApvMFpPQkhNVlBNWmVtVmhyN0RhRUROS3ZoRTIrb2NFQlpsTEdsYmZ2L2NSdWtsbEUxcTdXcVBDbmtCNUt2V3RyCjBaZU5odDZRQUxZQ3BVODZCeHJwUGFTbUt1RzNuTFVSZzhnYXl2L2U4MUVDRnZOSVJpN2lieXFpRnRsU2tJUkMKckZBZjB2a093K3llZTdRRkg1NVZLQ29xdGs2NitTVkoyQ2QzV1pzZGJpVE5HbUdMYkdMWm4xSldscU10OVJ2RQptaXhCTHFJWXNTWkNTYjVLdHlKVUF3SURBUUFCbzBVd1F6QU9CZ05WSFE4QkFmOEVCQU1DQXFRd0VnWURWUjBUCkFRSC9CQWd3QmdFQi93SUJBREFkQmdOVkhRNEVGZ1FVN3MvejM5MkxBaHh1ZEJZVVYxeFNaL01oN2Fnd0RRWUoKS29aSWh2Y05BUUVMQlFBRGdnRUJBQ2YvZ3BnRjAxTmJOL01tQTF6SkxIWXdzaHBKZlNLQ0hjK2RncEFsV3dLZApUbkpGaDRGZ2s1Y0FmbnhpQzRPSWxqUXBBVlFMWFl0MXYwalM1eEk2TmJxZlBEM0NNUUl0VWJ1UzcxdGVFdTZUCnlzdlFwYzRTSGt5Y3ROOXVuT2VjcSs2MVhkVzBnZmRDUzZ3a0dtai8rVkxkbzR1RmZiQUtsZUt5RE1SVVBTOWsKdHBNTHkxSDBVelptSDBWTC9GYnhpVkdZTUdlM1J2ZkYxYmVPY0ZvT3N2YllMVy8rZldMS2JVUEpDS2dtbGc5TAptb3ZFSlFjZ2RMOThzbmQvdU1EcWNia3RWaFM1VE5oNVNqL1V4ZVk4U1BsVGhSVm42ekM1dUUwdU9uSWVVZ2pmCkhvcEczZmw2L0VQWGxEbnQrbnBnd25hTTFxcHZjVG9KaDY5bnpTdm5yV2s9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
- path: /etc/kubernetes/pki/ca.key
encoding: base64
owner: root:root
permissions: '0600'
content: |
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeWNVeFJuTUNRVEZyUXBGMFB6MlpRSmpTVGVwUyszZ0pBWE9MTFdqMCtFQ3FOWXdlCjJ1cWdtVjB1RFR5a3AvbFU3Z1ZaeE15NVRNODYwMTR2MUpFd3l0RUl4K1FOSnc1RXpDVHhwUHVBWERMdUI0K2kKUjd2RVEyalVvUEVrajE1RG8wWk9CSE1WUE1aZW1WaHI3RGFFRE5LdmhFMitvY0VCWmxMR2xiZnYvY1J1a2xsRQoxcTdXcVBDbmtCNUt2V3RyMFplTmh0NlFBTFlDcFU4NkJ4cnBQYVNtS3VHM25MVVJnOGdheXYvZTgxRUNGdk5JClJpN2lieXFpRnRsU2tJUkNyRkFmMHZrT3creWVlN1FGSDU1VktDb3F0azY2K1NWSjJDZDNXWnNkYmlUTkdtR0wKYkdMWm4xSldscU10OVJ2RW1peEJMcUlZc1NaQ1NiNUt0eUpVQXdJREFRQUJBb0lCQUJiM2ZiZU5yQ0l4N0Y5Mgo3QkJZeENMV0RRdUx2U2RjK2ZWeTFWTDFhL3VvRjFmTXNTUnRGRVp3djFHeXZKd0Jld1BlNkdGemhqSWRNbzU3CkNFbStGSERrUzF5K3lOVWx3Q1NyOHpUbGl5NWVLUThEbXFRU0FQSHdDRktnMnU4QTBBVW1vYkhLOXc0UEZ4Y3cKWStuTzAzTEc0VU5Ya0NLY1hycnF6WCtlOWlVMTdLbXNMbUZSbkZWSXl5NndDV3l5VW5vSFRrSk0xL2ZMYThTaApwT1lhS3QwSmNTbWgzc1UxOEVwdmhmSThCSzl2ZWNEWUJOTmVsRWs0bjM0cDU4RWJaOGQvWFo0RWRnMXdpM2Z6CmhYaGxTK3d2eE93NUdDUWVlcU01Z1o3aGtuMzdHb1oxckZ4V3JWNmFNelYvMFNiRG9haitJWWZvbGZwZFAvZ2oKcmJQTHI2VUNnWUVBem1ENGNrVWZka3VTdWF0S0haRVN1M0JxU0NObkVobnRMcEU4YmU0bVplSEdxemlRcFZFZQpqbmJqWDNDRTVhR2RiekhnZDcwdTNZaVpJSWo1d0c2YkhhYzdDWWZOaS9oUGpickY0N1lGK1ZWOHJyZWxWb1YyClYrWUx5V2VDUnEyeUorWjFiOXFMeUpFaUdZbldQTlRrNXBTeGtFUUxmZnVrSDJ0RHFGcnJzVzhDZ1lFQStraVAKSHNmOHdjempNZnJMc2o4OE9ub1FjTVh2ckM5UlNjNmxyZURteklRVS9zVTJIOXA5R3hMazJEWUZCMW5JaVB1OQo5eVZQNWZueGluajNYSExaRHp0WUhhMEMxNzFHN3NCUUZmRWs4MktWN1hobXJjazFmbVNicEZMWGdPQ0dmSEwyCk5YU3hMMUJEZnpIMEkyaTQ3LzVjSXZmTDM4WWdqak1jaG1NTlZLMENnWUVBaThMRHZhN3Q5WkNNVnN5WExwcTIKVXRWNFJFNGxXTzdSM3IxZ2JSbmdTeEt4RmZjQ2pkSDNuWWNKeC9KTkxhMWJEcGg2YU54blJvTmhIOVZqUFZ3cQpFOVRTZUV2TmVVSzVyVU9WQy9hUzZSMXBpSEM1dVhROGhwNDEwVGtWMG9PQ3FONjdIUHFsdXpmK0hjbG9tbDJhCmZrU29Vd2lodDdtWWxlWndOUzBOZkdVQ2dZQi84R2xneGNBNTNSOWlaQjZPUG03dVFZbDM3R2FvOFFNdnBIZmkKMjIxL3JDRURYeEpjMUJaUnFhWGJ0RGw3MlhSK09abVE1YnpqQlpKb1E0L0c3VnB4dzljMlRFT0F2dHVzbmhnUwpMMVBCS21zVG1oRjYwcmtLcENrL3BhMU56dmhRVTMveU1YV0ZoeFVKeHlKU20yeTJHYU5GcUwvSjR3Q3ZVQWRMCjF3UndmUUtCZ1FDK1VKVnhGcGlHcWI5cG1oWkhDcFliamQ1SldV
ZnZMRDF2Mi85UVlqL01jVDFDMHlGaklLdloKQy93SG1GV05kcGgwRjd1NXAvZGFaMzRWT1lLcFZYOXZxMVJ0eXVsNUdmRFBlcFRHMHNtZnA0QkhYanZEems1dgpTT0pnUTUrTTYxMEs1SFUxV2JMM2pCQm5GREliTmdDcy94ZjZUVDdUTXVRYWZJN1gycFF1Tnc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
- path: /etc/kubernetes/pki/etcd/ca.crt
encoding: base64
owner: root:root
permissions: '0640'
content: |
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM3RENDQWRTZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJMk1ETXlOVEF5TlRVek9Wb1lEekl4TWpZd016QXlNREkxTlRNNVdqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCnJRV3FIZUVWaFRHTzFYWDJrd0JFVzZUbFhScit3WmRwUG9ud1dKL0p2eHBjNzJQRXFSOEdCaDkvOUJWWHJ0aGYKQkxhMFRWWTBMelZjTzIzaFV2Z1FRcDNQbTlLcmhnZjluNGtmY3lsUzB1cWFTUG5aS2dvYzVFUUE3eUxUTVZ1cgpFK0FtazdsNFFQczk5NVdNeG96WVUrdkxYeVYxUEpJdW1kaWlmNENSOTNieWU3Q01uRUQ0djNwYldudlB1UU96CkZrV09qMjBFNXhVN3BrWEtiejd3Y1Z5TWc2cEduN1pVVVVoUXRHMCtBWWtrTW5NRFlmaU1JNUNhZWgxSVVoV0cKSXRsU005a1k1cTRaNnFpQ0lId0JNVGtqWWlya21XV2dWRTRob3hRMjV4b1NJNGl4eDE0RG1lY3U0a2lNbmd2QQp1aW1NR1dlaVRBd2JVRkRqNzNmNyt3SURBUUFCbzBVd1F6QU9CZ05WSFE4QkFmOEVCQU1DQXFRd0VnWURWUjBUCkFRSC9CQWd3QmdFQi93SUJBREFkQmdOVkhRNEVGZ1FVb1dZM2VjODNEaEpid2FQbDg0QXQybTVsWkVnd0RRWUoKS29aSWh2Y05BUUVMQlFBRGdnRUJBSmdBc3dxWkI1cTJHYllJamVTMkpTUDhCSHI3RE5NaGNMd0hyUjNQejhtVwpHUDZjNVpqNjBiekdKaytnNEVIaTJDWTJXaGpZai81VEVNUVFoWkRhamtCN1A4eVFzMndBb1cwek96TFE4c2dVCjIyQTlMRk9NaXVKZWZibDJGWHpKUXNqODRsUW5uTmVxcGhVaFlZeFJmT0NoSDFLUkNaRUZLWURNNDd5Y1pMQ3gKZEJXbFhOdTVSZkRFa2xlb3B1K3hEbCtGNzFiYkZVK0FCRTRvNm1XMnVSVk8rbkd1d0xVTDc0NFZyd21tZ1AwTQpRcHpXbjExODB0ZTZRMzNRMDdrN1ovKzlkZERTOWZrWm16anpZckh6NDB6dW9TbGRGVnE0TlBiWTk4c3dHWkdwCmR3UHFReXdscFRROGdhZ0haOVRmV2x3S3hGUGIrSWhFeEZlYXQySGZxZHc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
- path: /etc/kubernetes/pki/etcd/ca.key
encoding: base64
owner: root:root
permissions: '0600'
content: |
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBclFXcUhlRVZoVEdPMVhYMmt3QkVXNlRsWFJyK3daZHBQb253V0ovSnZ4cGM3MlBFCnFSOEdCaDkvOUJWWHJ0aGZCTGEwVFZZMEx6VmNPMjNoVXZnUVFwM1BtOUtyaGdmOW40a2ZjeWxTMHVxYVNQbloKS2dvYzVFUUE3eUxUTVZ1ckUrQW1rN2w0UVBzOTk1V014b3pZVSt2TFh5VjFQSkl1bWRpaWY0Q1I5M2J5ZTdDTQpuRUQ0djNwYldudlB1UU96RmtXT2oyMEU1eFU3cGtYS2J6N3djVnlNZzZwR243WlVVVWhRdEcwK0FZa2tNbk1ECllmaU1JNUNhZWgxSVVoV0dJdGxTTTlrWTVxNFo2cWlDSUh3Qk1Ua2pZaXJrbVdXZ1ZFNGhveFEyNXhvU0k0aXgKeDE0RG1lY3U0a2lNbmd2QXVpbU1HV2VpVEF3YlVGRGo3M2Y3K3dJREFRQUJBb0lCQUF4M3ZJYnpxYnZUMHVHUgo0d2M3dlRGSFpCbTk4TDZkZlFGN0toMFF3cFpwUFdvb3E4cXVDQjZYMVg0T3JhZFZReCtSVk5PLzB2blY1QVFLClNuTFNta1ZhbnRPeExoZjE2bkk5RE0yZEhERkRvNE4vc1lUa2ZxbDZOd0VFWnVpSEhRQk5KaXA5OG1yb1Q5SlAKN2ZsK3U3WHNaMWEvV2IvWUh0Q0tPa2RxeWRubGszWVpTVTY5ZkdDWnF0ZFVCQW9vMU9LSGwreWJycUFWdXphRgpMcW5QZlF2ZmZvSW9FZUxkcmdNZEhzZ2dyRHBqOTlRMVROa25qNlNKdDNhVzN2aHQzTWNkZUdjbjcxMDNwWjJjCnVmS2tYSDhLbi8waXNiS2FLVzQzYW9JeUNlQ1p6dHg4VmZYU3UrbHFNNkg4KzVpV2kva0NsQk1jMDdWOHd3bmQKMUpjT0pYMENnWUVBeFlYOGxkcHJXUDFKV2JLSFRIMUlqVjNsSzBsZmRsa0NuTEdiZ05hdEtXTXFRUjhPUHpoTwo3Z2NwZjFEcDMzTWNneWxzME9SQ1VWS01pRGdHV0IxdzlLQzc3dHA4WURzUk9XZUR4eXdpYjd2N0p0aUwzNFF4CmJqa2loZGsxb1hOQVhEWXgvbDIyZ1IrbnN6SFh3WkhQZkY5SzI1eG9Ra0hHbG81TTlJV2o5OVVDZ1lFQTREN0QKUVh0MHNWZzJGQUdEOUtlL3BPMmNTbTlvaUlYNThKdDRmS2t0RjhlUGZFZnR6TnkxS3hyZTFKTFc5NENlSEExVApxSUZ1RnRFTTlpSnA1cnJmVWJ1cnZTT0R4dzN5VVpxN2x1bTFUd2ZYdUdrUEd1bXVtTStWdGFySFBqa3Z5OEhhClM0cGIwbDkyZWFWbkk3TzdiK0doTDJoRW91T3ROOXNsN1NTUVhJOENnWUVBbEFqb0ZmTTlzdE1aanlVUzY4dVYKZllXYWhJZVlDUjJLcko4YnVVS3JRckowYjV2ejFJUEIrL2pZSy9nYlg0RnBKQS8rNHN1L3ZDME83K1IxTk1MVAo3ak1zeGtWdkk3d0JHN0d0L0s3aUhEV1pkREtsR2Q1OElXeW1xQVB6Z3MzYXRZRlVsSnZ0ZFBhaGU5Wm1La2U2ClppOFE3bWhaWnhiZTIrVklYWlp2SGdVQ2dZQUZLQXhQWGlwaHhaaUF2MFFzaFFyNEhPcWlINHUwei9mZVc0VGEKd1AwamRkaEwwRStjalZxeElnNExyMUM0SWtJQWZTSDJWdnVVRkx5S2tHSUZCemtKWlJwZTRBa3dzNVpsMy92KwpUV040N01JK0lGUlRseG9IczRaS3hpR013YjNpbnBPSmR5WURZV1NWQ1lPa280WmszVGhhb2JncVVyZng5OTBZClplWFg2d0tCZ0RCWi8vOWFmSlZLUmJ1Zms0RzJKUlhpRDkyYzFV
R25xUU1uT0NQOERjcURDVEVBTmtRS21EK1cKMG5ZZTRZSmtxODlWemc2OWM5WVFVTVVaZ2ZrdFFIdWg0MW5ZQy93TlA4NDhla3FCV05jbDJqWVNaUHp5NUxDYQpraEFXdzBiOEJpdDIrQU14clFTMVl1YTZ2NUZHTDJ1K21NYitHcHFKSFdueXRwNVRaZ0hRCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
- path: /etc/kubernetes/pki/front-proxy-ca.crt
encoding: base64
owner: root:root
permissions: '0640'
content: |
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM3RENDQWRTZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJMk1ETXlOVEF5TlRVek9Wb1lEekl4TWpZd016QXlNREkxTlRNNVdqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCm9tVEZyQ2JQeFNwRktEWHVJUDU0OU9yUUhMUUlIMXFkM1lGWnVDTFg1NE9ESG1kNVdaUGtJTFVmOEF2RkZFQksKbGpPOEV6Y0xlR0w1OWI3bm9FLzdFeHNzdnovZGsxNVVJRWxya2sxOHpZNEpDQUNEeWZCbGZrT0xENzlqa1NBSQorbzYyS2E0S2J0bVJjZFJMSkhSOVc2TGxqMFBkKzhwb3ZuN0VuT3RqSmxRdG1oZ1BxV29YalgxTXF2MkFKTUVrCk9KbDBiTWRxeExaU2RiNVpHOEpvV0lUb1ZzRHFyckgyc1FSdmFDUGVDeWpCbjNwTW5nVmdiODMvNDU0Vk9URjIKS1BLeFlZQmcyWGptUVZXbGh4clNTMXA4UTI3bFpvS0hmT2k3dmJJY2piamFqS2NieDJxQWVrdW1BUWFmcTV0Rwp2V3JuQ3pnelRBNTE1cHErOTRMWFNRSURBUUFCbzBVd1F6QU9CZ05WSFE4QkFmOEVCQU1DQXFRd0VnWURWUjBUCkFRSC9CQWd3QmdFQi93SUJBREFkQmdOVkhRNEVGZ1FVUGZUUzROTDczSWpVWTZUOTdNUkdtNEpDVGVJd0RRWUoKS29aSWh2Y05BUUVMQlFBRGdnRUJBRDFYcFFaNU9QSXRjZEFLTmM1eVFPV1U2emxsK29ja25XREhPQ1FVZUNNQgpjb05YazNZV0QvOUpsb2JuVEFGSUpGQWJaL3BjbnlRZVl0WEJ3b0c4bExhMnNITEE1emxaYWkwdy9sc2w4VDk4ClpjT3ZPN0VobXFBS3NRRm1UTXdUQTlHMG1PeU9jOHI5enVSYUdkdnE5UWgxRWVYMkhIVVJxa3FZY0IvZXFnNlYKRjdJZDE0aDVqMEExRThvY3ZuMTBndEcwOE9YOFpGNGJxMDJwdXJ2dGVnNjB0OThnV2k3aWJLbkFHVWdoNGx5VApZdmRyYTQrK1ZsYUJyWnJocVFERXB0NGdieUQwUWlTc2dmUENPT1UxVi9hOUZDb3dpbzJIMUx5dStNMEtxSEMrCnExb1UyYTRLb0hBVmJ6NERaSzJieXRNN09BVTN3RUM2ZGtaUkNrVDBGRWs9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
- path: /etc/kubernetes/pki/front-proxy-ca.key
encoding: base64
owner: root:root
permissions: '0600'
content: |
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBb21URnJDYlB4U3BGS0RYdUlQNTQ5T3JRSExRSUgxcWQzWUZadUNMWDU0T0RIbWQ1CldaUGtJTFVmOEF2RkZFQktsak84RXpjTGVHTDU5Yjdub0UvN0V4c3N2ei9kazE1VUlFbHJrazE4elk0SkNBQ0QKeWZCbGZrT0xENzlqa1NBSStvNjJLYTRLYnRtUmNkUkxKSFI5VzZMbGowUGQrOHBvdm43RW5PdGpKbFF0bWhnUApxV29YalgxTXF2MkFKTUVrT0psMGJNZHF4TFpTZGI1Wkc4Sm9XSVRvVnNEcXJySDJzUVJ2YUNQZUN5akJuM3BNCm5nVmdiODMvNDU0Vk9URjJLUEt4WVlCZzJYam1RVldsaHhyU1MxcDhRMjdsWm9LSGZPaTd2YkljamJqYWpLY2IKeDJxQWVrdW1BUWFmcTV0R3ZXcm5Demd6VEE1MTVwcSs5NExYU1FJREFRQUJBb0lCQUFueG03MFAzMWNXWUllMgp6YThOaGdDUlJFOE5veFd3YWN3L2VHdnpEajlwNlNSNmQ0N0pwSVZ3TWRWMEV3eExaNFhOQXk0MkI2akdmc0hTCmY4SnNRMWFIS25WSGh0ellRTlI5U20zNStyTm5pQndLVkFlUWhkWjJjbFJ6aHJoRE91bUV4WmpGeEhQSE5NWEEKbHgxVFdMMjh1c3ZML3hMRThTY0JBaThOcHRPWTZQMkkvaTFLekNIaWs2ZHdXVGZDZy9BeUZGenlLT0Q3UmtVVApyZ3E2Qmx3UlJEL0pPc1RIMFJET0RId3RaTGhUbEFRVUppOW5HenE3S25EMFgrc3dBaUdHbSt0a05Ra1QxQ2FLCkswTHdGNFBRNTFXRGRLZTJDM2hYZFEzK3BzVTZWT0VLYUN1NWJIbEY3UC9UWTdiczhENkNkdG1CSHh6Y2w0THgKNS90Rm9ORUNnWUVBekNLS0JmbjBYT3ZNOTRoelp1KzBWWUhWdTNDRDE4RWpQbXdpWkN3L05VZXFXbkRqZTFjagplaGhKMWt6RUN5Y0g0cWlpaHZMT3pGVnR6Y2lwUTRGOERDcXpFbURWMkYwM0cvcm9sWXNpbGpEUHhTUEI2YmFWClU0eVVmbEVHQWlreENHRjNoTUxZVHpJUkhhdlV6b2tFNmZBdzFMeGh6T2oxU3ZGRm1jWTV0STBDZ1lFQXk2ZEYKMThnRmplN2xnRkpSNjBVRDVWQ2hueVhTek0wVjl2MU02UHI5clFnUy9aaDBOSmFjTmtIbVYyTFVjWDloZUpsUAo4Tkk4VitTakdtNDNycGlFeUtDZUVkUlNlTmFJUkRrakF3Y01DNkNMSnYxT0RJU1JFeC9zUytCcE14YVJ4QXY1CkdQU0ZhOXozdVdlN3hCbnZCZ1BhR3JzSUhtNFc2VWdMaUlHZ0pLMENnWUFZTXV2N3cyTEZkU3FLR1lIY3JRUEsKc3laOEh0MXlRVElGWDFwQVY4SnlkWGxyV1VDT1NZa3FHeUQ5cDRJQjlIR0ozQVhRUzQ1YVNMSklsOFlBKzZPUgo2YW5xdnRINjRTbjhSaVUyUFJVdmlyL0dsZk9SMmhRZm9HV21COExYbEx4OFN0bVpRbVBVRjVKUjJ5SFNEZ29vCkZWSWtsZVJlSHl1YzQ3Y2xnSXNzclFLQmdDWWM2dlJFS2MzelBKNDBTY0ozQ3hDYWMzVGVWa0lmeTVHS3ZCOEsKQWZtay9qRFpuRDNQUmZMZGlHY29Sc3ZxNCtuMi96LzVpSE9HaFlQSHhzSDFKenlJMnF4SmlSbTJSSkJJQlNabQo1amt5MVhmNWhlYlAxSHE0eWJjMWkxcVZTYmhmNlVGaldhampGTFZ0RlhYUXlLdmVncTNuL00vOUdHcVdJaHBzCjcvU05Bb0dBSmhjMVRoZDVYYVphakJjdDY4ZzhEU2dsQTJ0ZTlu
RkZCUU5rdnZZZU9yd3R2emNlK01Yb3dJclMKNUt1RmloRk44VXRWRjI4NEFnaU9LTzJNcG56NDMxU3hTZ3ZvRU5ENjRONWFUZksrVEE1c1paMDZsNUJQNHg3ZApzZjczdk1BaTdKMnk4R2Vyc0hMaFZQQXF6cDRmdVljYkJneUtnMk1lL3hieWhDMjlyOHc9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
- path: /etc/kubernetes/pki/sa.pub
encoding: base64
owner: root:root
permissions: '0640'
content: |
LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUExWUxyRWxlb04vSzdITnlwSnlkTgpBQW5lbTdkQk5veFl5bFdoUEh2S3R6UWlSZzVYYVcycXlOUDVqSDVTZG1hVlBxbEJ1NnN4RFV1MDBPZ0dWWnRKCm9UVStPZ09jNlN2Sm8yakFQd2J6QnJHUGUxMDFSRE5MNFB6dHpUOXVaZ1h3ZmdrMytZZlpqc0V2YjZsRTZISlUKSzBVUG9FVjBNZForeXpRQmsvUDFkbDluZ04vZWpFMkxaMWVsRXFLaEszeW9VY2p0bEQ2ZVQya0FQV1pOWFU3Wgo4ZFZFVlF2dG81SE1CUUdmNzZldWtuRXZERk4zV3UxdjMvdVdKSUhzSHRhbXk2UXFOM2RwVkF5T2RVWHgwM1lQCmVSSGFuVzRLUUVIOTFubG9vOUFpY1BKYUlvNFpZeE5wUEhzZlhDbXVYRHhJcjBwU29OMlB6NE83WGhRbmgzb2wKMndJREFRQUIKLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==
- path: /etc/kubernetes/pki/sa.key
encoding: base64
owner: root:root
permissions: '0600'
content: |
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMVlMckVsZW9OL0s3SE55cEp5ZE5BQW5lbTdkQk5veFl5bFdoUEh2S3R6UWlSZzVYCmFXMnF5TlA1akg1U2RtYVZQcWxCdTZzeERVdTAwT2dHVlp0Sm9UVStPZ09jNlN2Sm8yakFQd2J6QnJHUGUxMDEKUkROTDRQenR6VDl1WmdYd2ZnazMrWWZaanNFdmI2bEU2SEpVSzBVUG9FVjBNZForeXpRQmsvUDFkbDluZ04vZQpqRTJMWjFlbEVxS2hLM3lvVWNqdGxENmVUMmtBUFdaTlhVN1o4ZFZFVlF2dG81SE1CUUdmNzZldWtuRXZERk4zCld1MXYzL3VXSklIc0h0YW15NlFxTjNkcFZBeU9kVVh4MDNZUGVSSGFuVzRLUUVIOTFubG9vOUFpY1BKYUlvNFoKWXhOcFBIc2ZYQ211WER4SXIwcFNvTjJQejRPN1hoUW5oM29sMndJREFRQUJBb0lCQUJpaWFjL3NlRGE0VlZsdgpwajZqdkxFZjhtVENBSTYwSTd4NG84bFFPU1BwS25rdHgyMGRINkxiUGtRMUFQdXpPMDRIQmxRS1hQYjlRS2dICjFVOUVRdnNNSXhsYmVGdTQxeU40L3hGbWtseTMyT2V4YWVkc0NibTBSUlcwMTE2REdldlkwWElEZUJrTjloU3EKa1k1R1BxcmRaWCttODlDYVFIZmVrTDRLM0V2allPNFBrS2tYS293TVlkcUR2bVgwcjd2bW9UYU43aUFXaHllNApwbE1FY0IwVVpSUlRiN0NFNVJJL1pLd3FjZVV4S2E1UEU2YTV4MTRoQ0RmV1pEYnFKT3ZLUmpmLzhmU2NhY2tjCjFrTjRpMGpTeTYwUFRiWi9WNXBWWXZuY2JBVHlBamM4ZkFZMUF4bGZNUmQ2TW14a0o3Y1FCNlJpWStFUVNGcU4KVWNEdVVyRUNnWUVBMW9UNDVNOThWOU45cU1JVk9aT2NzMlpuUjBOM29pbXVjRDVTeVI2MXdtWitEbDVQbDFnVQpPSWJ5YVBPL1NCNFBvc1U2cmN0QVdtZXdDWjVmNm9IL0RhL3lsdHlzandiM1YrdVUrbXB1bnZHa2pOS0lxZXgrCnVxS0REUkxkNG1vZTNrVUtGcUhWVnc4VldqUkhjK1JxYWJnZXZ1K0lnQWFZZks3S0dwUHd1RkVDZ1lFQS9zd00KSCt2cWxxN1I2aEhtS1E0VVJqaVR2RDl3Tk9mbkxSUVhIdVJMd3RZeUE5RG13ZjNPUktlUlY0akhDSWpDbmRtTQpSVUtCSG5xSWNZL3lnVVd2ZDdxSnZucEJkNzdHV1lLTEoxK1cvditVQkpXZjdkRWxRU1lVOCtpVjAvc0xHblJBCllBbXM4Q2dCTDZCVkkxZHpla1Z2dEdJZ0wrR3lJaEM0VysyZ1hHc0NnWUFNOXlKMzZkWnhGSFkrMGVRb2c3UnYKMzF1VW9nNUQvZEx1TThZYkk4RUdpOTFJandpdWRBTmMyME1oZHNIejRPVS9DRDZnckcwcVNhUUpJTXBaU1J3YQpQcTBoMHhxVzFtdnlvMmx3clNnY2NTeHAybnVxRVlJalU1a3FIQjdQQld6eU1DZ0k4Q1VOeXZxV1poeC9jNm0rCjFBTC90VWlCdkdSUS9OdDRPY0xOMFFLQmdHWGFmNFpMTW0ybDJMZnZDOGlobmkwcjlMS3QwVmIwMVE3S0Z5djgKS3VUcDV2aHJpN05FbUM0TnBpWU53VEtDS1BvY3V0djg1OHlkUXVuU2x5aGlDUENkbXU2UHhKZnZwUzZtNXFXSQpxcjJvd1N6TCt6Q0FDSnB3ZExQRDZCRGpLOThaVlpxT2c1bEZCS1JiUFcxeFNmSTR5NXlhRlMvTzB2eVhIbnR4CkZFZWRBb0dCQUpNSDFuN29LVWZzNmIwc0ZDbjNZZlU1bHJzZmNz
WDdqaXF5RHRlYUpsaXR5d2R4UWFPVVduNUQKTUVzMzhJTXhpSUI5ZTBFMG1IazRuT3ZCSThmd0txTzlmNGJPdmRxSEZjbXVzdVVoaUUwWDJrSVhZUzdycTRNSgppb24wNkplZnh5L25aakZ5Ly8rKzhuOXRzeUNQR2Y2RGN5MVM2d0N5NWdQbDg4d1c5TmJJCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
- path: /usr/local/bin/deploycluster.sh
encoding: base64
owner: root:root
permissions: '0777'
content: |
CnNldCAtZSAteAoKZXhlYyAxPi92YXIvbG9nL2NsdXN0ZXJfZGVwbG95LmxvZyAyPiYxCgpwdWJsaWNfY29tbW9uX2xvZygpewogICAgICAgIGVjaG8gJChkYXRlICsiWyVZJW0lZCAlSDolTTolU106ICIpICQxID4+IC92YXIvbG9nL2NsdXN0ZXJfZGVwbG95LmxvZwp9CgpCT09UUEtHPS91c3IvbG9jYWwvYmluCmNkICRCT09UUEtHCgpldGgwaXBmbGFnPTAKd2hpbGUgWyAkZXRoMGlwZmxhZyA9PSAwIF07IGRvCiAgICAgICAgZXRoMGlwPSQoaWZjb25maWcgZXRoMCB8IGF3ayAnL2luZXQvIHtwcmludCAkMn0nIHwgY3V0IC1mMiAtZCAiOiIgfGF3ayAnTlI9PTEge3ByaW50ICQxfScpCiAgICAgICAgaWYgWyAhIC16ICRldGgwaXAgXTsgdGhlbgogICAgICAgICAgICAgICAgZXRoMGlwZmxhZz0xCiAgICAgICAgICAgICAgICBpZmNvbmZpZyBldGgwIG10dSAxNTAwIHVwCiAgICAgICAgZmkKZG9uZQoKCmZ1bmN0aW9uIGZpbGVzZXJ2ZXJfY3VybCgpIHsKICB0YXJnZXQ9JDEKICB0b25nPTEKICB0aW1lb3V0PTUKICB3aGlsZSBbICR0b25nID09IDEgXTsgZG8KICAgIHJldF9jb2RlPSQoY3VybCAtSSAtZyAtcyAtLWNvbm5lY3QtdGltZW91dCAkdGltZW91dCAkdGFyZ2V0IC13ICV7aHR0cF9jb2RlfSB8IHRhaWwgLW4xKQoKICAgIGlmIFsgIngkcmV0X2NvZGUiID0gIngyMDAiIF07IHRoZW4KICAgICAgdG9uZz0wCiAgICAgIGVjaG8gImN1cmwgJHRhcmdldCBpcyBvayIKICAgIGVsc2UKICAgICAgZWNobyAid2lsbCBjdXJsICR0YXJnZXQgYWZ0ZXIgNXMgIgogICAgICBzbGVlcCA1CiAgICBmaQogIGRvbmUKfQoKbW9kcHJvYmUgaXBfdnNfcnIKbW9kcHJvYmUgaXBfdnNfd3JyCm1vZHByb2JlIGlwX3ZzX3NoCnNldCArZSAreAptb2Rwcm9iZSBuZl9jb25udHJhY2tfaXB2NAptb2Rwcm9iZSBuZl9jb25udHJhY2sKc2V0IC1lIC14CgoKCnByZXBhcmVfaW5zdGFsbF9wa2coKXsKICAgICAgICBQS0dfVFlQRT0kMQogICAgICAgIFBLR19WRVJTSU9OPSQyCglta2RpciAtcCBwYWNrLwogICAgICAgIHB1YmxpY19jb21tb25fbG9nICJEb3dubG9hZCBwa2cgZnJvbSAke0ZJTEVfU0VSVkVSfSIKICAgICAgICBpZiBbICEgLWYgcGFjay8ke1BLR19UWVBFfS0ke1BLR19WRVJTSU9OfS50YXIuZ3ogXTsgdGhlbgogICAgICAgICAgICAgICAgZm9yICgoaW50ZWdlciA9IDA7IGludGVnZXIgPCAzOyBpbnRlZ2VyKyspKTsgZG8KICAgICAgICAgICAgICAgICAgICAgICAgaWYgWyAteiAkRklMRV9TRVJWRVIgXTsgdGhlbgogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHB1YmxpY19jb21tb25fbG9nICJQS0dfRklMRV9TRVJWRVIgaXMgbm90IGNvbmZpZy4iCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZXhpdCAxCiAgICAgICAgICAgICAgICAgICAgICAgIGZpCiAgICAgICAgICAgICAgICAgICAgICAgIGN1cmwgLWcgLS1yZXRyeSA1IC0tcmV0cnktZGVsYXkgMiAkRklMRV9TRVJWRVIvc2hhcmUvJHtQS0dfVFlQRX0tJHtQS0df
VkVSU0lPTn0udGFyLmd6IFwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA+IHBhY2svJHtQS0dfVFlQRX0tJHtQS0dfVkVSU0lPTn0udGFyLmd6IHx8IChwdWJsaWNfY29tbW9uX2xvZyAiZG93bmxvYWQgZmFpbGVkIHdpdGggNCByZXRyeSxleGl0IDEiICYmIGV4aXQpCiAgICAgICAgICAgICAgICAgICAgICAgIGlmICEgdGFyIC14dmYgcGFjay8ke1BLR19UWVBFfS0ke1BLR19WRVJTSU9OfS50YXIuZ3o7IHRoZW4KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBpZiBbICIkaW50ZWdlciIgPT0gIjIiIF07IHRoZW4KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHB1YmxpY19jb21tb25fbG9nICJGYWlsZWQgdG8gZG93bmxvYWQgcGtnIGZpbGUuIiAmJiBleGl0IDEKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBmaQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHB1YmxpY19jb21tb25fbG9nICJVbnRhciAke1BLR19WRVJTSU9OfS50YXIuZ3ogZmFpbGVkISwgcmV0cnkiCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcm0gLXJmICR7UEtHX1RZUEV9LSR7UEtHX1ZFUlNJT059LnRhci5negogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHNsZWVwIDUKICAgICAgICAgICAgICAgICAgICAgICAgZWxzZQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHJldHVybiAwCiAgICAgICAgICAgICAgICAgICAgICAgIGZpCiAgICAgICAgICAgICAgICBkb25lCiAgICAgICAgZmkKICAgICAgICBwdWJsaWNfY29tbW9uX2xvZyAiRmluaXNoZWQgZG93bmxvYWQgdGhlIHBrZzogJFBLR19UWVBFIgp9CgpwdWJsaWNfY29tbW9uX3BhcnNlX2FyZ3MoKSB7CiAgICAgICAgd2hpbGUKICAgICAgICAgICAgICAgIFtbICQjIC1ndCAwIF1dCiAgICAgICAgZG8KICAgICAgICAgICAgICAgIGtleT0iJDEiCgogICAgICAgICAgICAgICAgY2FzZSAka2V5IGluCiAgICAgICAgICAgICAgICAtLWZpbGUtc2VydmVyKQogICAgICAgICAgICAgICAgICAgICAgICBleHBvcnQgRklMRV9TRVJWRVI9JDIKICAgICAgICAgICAgICAgICAgICAgICAgc2hpZnQKICAgICAgICAgICAgICAgICAgICAgICAgOzsKICAgICAgICAgICAgICAgICopCiAgICAgICAgICAgICAgICAgICAgICAgICMgdW5rbm93biBvcHRpb24KICAgICAgICAgICAgICAgICAgICAgICAgcHVibGljX2NvbW1vbl9sb2cgInVua25vdyBvcHRpb24gWyRrZXldIgogICAgICAgICAgICAgICAgICAgICAgICA7OwogICAgICAgICAgICAgICAgZXNhYwogICAgICAgICAgICAgICAgc2hpZnQKICAgICAgICBkb25lCn0KCm1haW4oKXsKCXB1YmxpY19jb21tb25fbG9nICJTdGFydCBpbnN0YWxsIGs4cyBzY3JpcHQuLi4iCglwdWJsaWNfY29tbW9uX3BhcnNlX2FyZ3MgIiRAIgoJZmlsZXNlcnZlcl9jdXJsICRGSUxFX1NFUlZFUgoJcHJlcGFyZV9pbnN0YWxsX3BrZyAiZWNsb3VkLWs4cy1zY3JpcHQiICJ2MS42LjUiCglw
dWJsaWNfY29tbW9uX2xvZyAiU2NyaXB0IGhhcyBiZWVuIGRvd25sb2FkZWQuIgp9CgpoZWxwPSIKVXNhZ2U6CiIkMCIKLS1maWxlLXNlcnZlciBodHRwOi8vMTAuMTQyLjExMy44OAoiCm1haW4gIiRAIgo=
- path: /etc/kubeadm/kubeadm.cfg
encoding: base64
owner: root:root
permissions: '0640'
content: |
LS0tCmFwaVNlcnZlcjoKICBjZXJ0U0FOczoKICAtIGFwaXNlcnZlci5jbHVzdGVyLmxvY2FsCiAgLSAxMjcuMC4wLjEKICAtIDo6MQogIC0gMTkyLjAuMC4xCiAgLSBrOHMtdGVzdC5pdGhvLmNuCiAgZXh0cmFBcmdzOgogICAgYXVkaXQtbG9nLWZvcm1hdDoganNvbgogICAgYXVkaXQtbG9nLW1heGFnZTogIjciCiAgICBhdWRpdC1sb2ctbWF4YmFja3VwOiAiMTAiCiAgICBhdWRpdC1sb2ctbWF4c2l6ZTogIjEwMCIKICAgIGF1ZGl0LWxvZy1wYXRoOiAvdmFyL2xvZy9rdWJlcm5ldGVzL2F1ZGl0LmxvZwogICAgYXVkaXQtcG9saWN5LWZpbGU6IC9ldGMva3ViZXJuZXRlcy9hdWRpdC1wb2xpY3kueW1sCiAgICBkZWZhdWx0LW5vdC1yZWFkeS10b2xlcmF0aW9uLXNlY29uZHM6ICIyNDAiCiAgICBkZWZhdWx0LXVucmVhY2hhYmxlLXRvbGVyYXRpb24tc2Vjb25kczogIjI0MCIKICAgIGRpc2FibGUtYWRtaXNzaW9uLXBsdWdpbnM6IExpY2Vuc2UKICAgIGVuYWJsZS1hZG1pc3Npb24tcGx1Z2luczogTm9kZVJlc3RyaWN0aW9uLERlZmF1bHRUb2xlcmF0aW9uU2Vjb25kcyxQb2ROb2RlU2VsZWN0b3IKICAgIGV0Y2QtY29tcGFjdGlvbi1pbnRlcnZhbDogMTVtCiAgICBmZWF0dXJlLWdhdGVzOiBMb2FkQmFsYW5jZXJJUE1vZGU9dHJ1ZQogICAgZ29hd2F5LWNoYW5jZTogIjAuMDAxIgogICAgbWF4LW11dGF0aW5nLXJlcXVlc3RzLWluZmxpZ2h0OiAiMTAwMDAiCiAgICBtYXgtcmVxdWVzdHMtaW5mbGlnaHQ6ICIxMDAwMDAiCiAgICBwcm9maWxpbmc6ICJmYWxzZSIKICAgIHNlcnZpY2Utbm9kZS1wb3J0LXJhbmdlOiAzMDAwMC0zMjc2NwogICAgdGxzLWNpcGhlci1zdWl0ZXM6IFRMU19FQ0RIRV9FQ0RTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NixUTFNfRUNESEVfRUNEU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQsVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUsVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NixUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0LFRMU19FQ0RIRV9SU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNQogIGV4dHJhVm9sdW1lczoKICAtIGhvc3RQYXRoOiAvZXRjL2t1YmVybmV0ZXMvYXVkaXQtcG9saWN5LnltbAogICAgbW91bnRQYXRoOiAvZXRjL2t1YmVybmV0ZXMvYXVkaXQtcG9saWN5LnltbAogICAgbmFtZTogYXVkaXQKICAgIHBhdGhUeXBlOiBGaWxlT3JDcmVhdGUKICAgIHJlYWRPbmx5OiB0cnVlCiAgLSBob3N0UGF0aDogL3Zhci9sb2cva3ViZXJuZXRlcwogICAgbW91bnRQYXRoOiAvdmFyL2xvZy9rdWJlcm5ldGVzCiAgICBuYW1lOiBhdWRpdC1sb2cKICAgIHBhdGhUeXBlOiBEaXJlY3RvcnlPckNyZWF0ZQogIHRpbWVvdXRGb3JDb250cm9sUGxhbmU6IDEwbTBzCmFwaVZlcnNpb246IGt1YmVhZG0uazhzLmlvL3YxYmV0YTMKY2x1c3Rlck5hbWU6IGs4cy10ZXN0CmNvbnRyb2xQbGFuZUVuZHBvaW50OiBhcGlzZXJ2ZXIuY2x1c3Rlci5sb2NhbDo2
NDQzCmNvbnRyb2xsZXJNYW5hZ2VyOgogIGV4dHJhQXJnczoKICAgIGJpbmQtYWRkcmVzczogMC4wLjAuMAogICAgY2x1c3Rlci1zaWduaW5nLWR1cmF0aW9uOiA4NzYwMDBoMG0wcwogICAgY29uY3VycmVudC1kZXBsb3ltZW50LXN5bmNzOiAiNTAiCiAgICBjb25jdXJyZW50LWVuZHBvaW50LXN5bmNzOiAiNTAiCiAgICBjb25jdXJyZW50LW5hbWVzcGFjZS1zeW5jczogIjEwMCIKICAgIGNvbmN1cnJlbnQtcmVwbGljYXNldC1zeW5jczogIjUwIgogICAgY29uY3VycmVudC1zZXJ2aWNlLXN5bmNzOiAiMTAwIgogICAgY29uY3VycmVudC1zdGF0ZWZ1bHNldC1zeW5jczogIjIwMCIKICAgIGt1YmUtYXBpLWJ1cnN0OiAiNTAwIgogICAga3ViZS1hcGktcXBzOiAiNTAwIgogICAgbm9kZS1tb25pdG9yLWdyYWNlLXBlcmlvZDogMjBzCiAgICBub2RlLW1vbml0b3ItcGVyaW9kOiAycwogICAgcHJvZmlsaW5nOiAiZmFsc2UiCiAgICB0bHMtY2lwaGVyLXN1aXRlczogVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2LFRMU19FQ0RIRV9FQ0RTQV9XSVRIX0FFU18yNTZfR0NNX1NIQTM4NCxUTFNfRUNESEVfRUNEU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNSxUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2LFRMU19FQ0RIRV9SU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQsVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1CmRuczoge30KZXRjZDoKICBsb2NhbDoKICAgIGV4dHJhQXJnczoKICAgICAgYXV0by1jb21wYWN0aW9uLXJldGVudGlvbjogIjEiCiAgICAgIGNpcGhlci1zdWl0ZXM6IFRMU19FQ0RIRV9FQ0RTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NixUTFNfRUNESEVfRUNEU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQsVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUsVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NixUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0LFRMU19FQ0RIRV9SU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNQogICAgICBlbGVjdGlvbi10aW1lb3V0OiAiNTAwMCIKICAgICAgaGVhcnRiZWF0LWludGVydmFsOiAiNTAwIgogICAgICBsaXN0ZW4tbWV0cmljcy11cmxzOiBodHRwOi8vMC4wLjAuMDoyMzgxCiAgICAgIG1heC1yZXF1ZXN0LWJ5dGVzOiAiMTA0ODU3NjAiCiAgICAgIHF1b3RhLWJhY2tlbmQtYnl0ZXM6ICI4NTg5OTM0NTkyIgogICAgICBzbmFwc2hvdC1jb3VudDogIjEwMDAwMCIKaW1hZ2VSZXBvc2l0b3J5OiBjaXMtaHViLWh1YWRvbmctNy5jbWVjbG91ZC5jbi9lY2xvdWQKa2luZDogQ2x1c3RlckNvbmZpZ3VyYXRpb24Ka3ViZXJuZXRlc1ZlcnNpb246IHYxLjI5LjUtZWtpLjQuMS4wCm5ldHdvcmtpbmc6CiAgZG5zRG9tYWluOiBjbHVzdGVyLmxvY2FsCiAgcG9kU3VibmV0OiAxNzIuMjAuMC4wLzE2CiAgc2VydmljZVN1Ym5ldDogMTAuMjMzLjAuMC8xOApzY2hlZHVsZXI6CiAgZXh0cmFB
cmdzOgogICAgYmluZC1hZGRyZXNzOiAwLjAuMC4wCiAgICBrdWJlLWFwaS1idXJzdDogIjUwMCIKICAgIGt1YmUtYXBpLXFwczogIjUwMCIKICAgIHByb2ZpbGluZzogImZhbHNlIgogICAgdGxzLWNpcGhlci1zdWl0ZXM6IFRMU19FQ0RIRV9FQ0RTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NixUTFNfRUNESEVfRUNEU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQsVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUsVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NixUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0LFRMU19FQ0RIRV9SU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNQoKLS0tCmFwaVZlcnNpb246IGt1YmVhZG0uazhzLmlvL3YxYmV0YTMKY2VydGlmaWNhdGVLZXk6IDU2MTRlN2MxNTQ0OWFlNDI5MDQ0YzE1YTJlYTg4NmY2MGQzOWZkZGM1ZTEzYzg5OTIzODM3N2M0YjRkMmUyN2YKa2luZDogSW5pdENvbmZpZ3VyYXRpb24KbG9jYWxBUElFbmRwb2ludDoge30Kbm9kZVJlZ2lzdHJhdGlvbjoKICBjcmlTb2NrZXQ6IC9ydW4vY29udGFpbmVyZC9jb250YWluZXJkLnNvY2sKICBrdWJlbGV0RXh0cmFBcmdzOgogICAgY2xvdWQtcHJvdmlkZXI6IGV4dGVybmFsCiAgICBjb250YWluZXItcnVudGltZS1lbmRwb2ludDogdW5peDovLy9ydW4vY29udGFpbmVyZC9jb250YWluZXJkLnNvY2sKICAgIGhvc3RuYW1lLW92ZXJyaWRlOiBrY3MtazhzLXRlc3QtbS10cDg4cwogICAgbWF4LXBvZHM6ICIxMjgiCiAgICBub2RlLWlwOiAnOjonCiAgICBub2RlLWxhYmVsczogbWFjaGluZS5lY2xvdWQuY21zcy5jb20vbm9kZS10eXBlPWNlbnRlcixtYWNoaW5lLmVjbG91ZC5jbXNzLmNvbS9tYWNoaW5lLXJlZ2lvbj1OMDU3NC1aSi1OQlpEMDEsbm9kZS5rdWJlcm5ldGVzLmlvL2Nsb3VkPSxtYWNoaW5lLmVjbG91ZC5jbXNzLmNvbS9tYWNoaW5lLW5hbWU9azhzLXRlc3QtY29udHJvbHBsYW5lLWxtcGwyLG1hY2hpbmUuZWNsb3VkLmNtc3MuY29tL21hY2hpbmUtdHlwZT1WTSxtYWNoaW5lLmVjbG91ZC5jbXNzLmNvbS9zcGVjc25hbWU9YzUuMnhsYXJnZS4yCgotLS0KYXBpVmVyc2lvbjoga3ViZXByb3h5LmNvbmZpZy5rOHMuaW8vdjFhbHBoYTEKY2xpZW50Q29ubmVjdGlvbjoKICBidXJzdDogMTAwCiAgcXBzOiAxMDAKZmVhdHVyZUdhdGVzOgogIExvYWRCYWxhbmNlcklQTW9kZTogdHJ1ZQpob3N0bmFtZU92ZXJyaWRlOiBrY3MtazhzLXRlc3QtbS10cDg4cwppcHRhYmxlczoge30KaXB2czoKICBleGNsdWRlQ0lEUnM6CiAgLSAxOTIuMC4wLjEvMzIKa2luZDogS3ViZVByb3h5Q29uZmlndXJhdGlvbgptZXRyaWNzQmluZEFkZHJlc3M6IDAuMC4wLjA6MTAyNDkKbW9kZTogaXB2cwpwb3J0UmFuZ2U6ICIiCndpbmtlcm5lbDoge30KCi0tLQphcGlWZXJzaW9uOiBrdWJlbGV0LmNvbmZpZy5rOHMuaW8vdjFiZXRhMQphdXRoZW50aWNhdGlvbjoKICBhbm9ueW1vdXM6IHt9CiAgd2ViaG9vazoKICAg
IGNhY2hlVFRMOiAwcwogIHg1MDk6IHt9CmF1dGhvcml6YXRpb246CiAgd2ViaG9vazoKICAgIGNhY2hlQXV0aG9yaXplZFRUTDogMHMKICAgIGNhY2hlVW5hdXRob3JpemVkVFRMOiAwcwpjZ3JvdXBEcml2ZXI6IGNncm91cGZzCmNvbnRhaW5lckxvZ01heEZpbGVzOiA1CmNvbnRhaW5lckxvZ01heFNpemU6IDEwTWkKY3B1TWFuYWdlclJlY29uY2lsZVBlcmlvZDogMHMKZXZpY3Rpb25IYXJkOgogIGltYWdlZnMuYXZhaWxhYmxlOiAxNSUKICBtZW1vcnkuYXZhaWxhYmxlOiAxMDBNaQogIG5vZGVmcy5hdmFpbGFibGU6IDEwJQogIG5vZGVmcy5pbm9kZXNGcmVlOiA1JQpldmljdGlvblByZXNzdXJlVHJhbnNpdGlvblBlcmlvZDogNW0wcwpmaWxlQ2hlY2tGcmVxdWVuY3k6IDBzCmh0dHBDaGVja0ZyZXF1ZW5jeTogMHMKaW1hZ2VHQ0hpZ2hUaHJlc2hvbGRQZXJjZW50OiA4NQppbWFnZUdDTG93VGhyZXNob2xkUGVyY2VudDogODAKaW1hZ2VNaW5pbXVtR0NBZ2U6IDBzCmtpbmQ6IEt1YmVsZXRDb25maWd1cmF0aW9uCmt1YmVBUElCdXJzdDogMTAwCmt1YmVBUElRUFM6IDEwMAprdWJlUmVzZXJ2ZWQ6CiAgY3B1OiAxODBtCiAgbWVtb3J5OiAyLjYwRwpub2RlU3RhdHVzUmVwb3J0RnJlcXVlbmN5OiAwcwpub2RlU3RhdHVzVXBkYXRlRnJlcXVlbmN5OiAwcwpyb3RhdGVDZXJ0aWZpY2F0ZXM6IHRydWUKcnVudGltZVJlcXVlc3RUaW1lb3V0OiAwcwpzZXJpYWxpemVJbWFnZVB1bGxzOiBmYWxzZQpzdHJlYW1pbmdDb25uZWN0aW9uSWRsZVRpbWVvdXQ6IDBzCnN5bmNGcmVxdWVuY3k6IDBzCnRsc0NpcGhlclN1aXRlczoKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9BRVNfMTI4X0dDTV9TSEEyNTYKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNQotIFRMU19FQ0RIRV9SU0FfV0lUSF9BRVNfMTI4X0dDTV9TSEEyNTYKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1CnZvbHVtZVN0YXRzQWdnUGVyaW9kOiAwcwo=
runcmd:
- echo 'root:$6$76/O5ToF$GkXliQEhjTWy4sO0sD7RMPKQzGGf8/Sjoycw9BbO/aGKQwLJjXU59WmeLxZ8qTj7./D09DeVhcWY82DBDZKlw1' | chpasswd -e
- 'deploycluster.sh --file-server 10.195.207.205:32092'
- 'kuberun.sh --m1Host 127.0.0.1 --kubeletCgroupDriver cgroupfs --npuNode false --dualStack false --role deploy-masters --starwayIpStack NULL --user NULL --imageServerPort 443 --fileServer 10.195.207.205:32092 --apiServerVIP 192.0.0.1 --gpuSchedule NULL --podCIDR 172.20.0.0/16 --mtuValue 1500 --spec c5.2xlarge.2 --serviceNodePortRange 30000-32767 --v4Enable true --ipv6SingleStack false --nodeRuntime containerd --v6Enable true --dockerIOAccPort 7999 --v6PrefixLen 64 --calicoDualStack false --imageServerIP 10.195.207.201 --cisNameServer NULL --timeServer 114.118.7.163,10.215.242.54,10.215.242.55 --kubeVersion v1.29.5-eki.4.1.0 --clusterNetworkMode calico --clusterRuntime containerd --imageRepo cis-hub-huadong-7.cmecloud.cn --dualStackIpv4First false'
0x03 KCS Deployment Scripts in Detail
Reading deploycluster.sh
Let's look at what deploycluster.sh actually does:
[root@kcs-k8s-test-m-tp88s ~]# cat /usr/local/bin/deploycluster.sh
set -e -x
exec 1>/var/log/cluster_deploy.log 2>&1
public_common_log(){
echo $(date +"[%Y%m%d %H:%M:%S]: ") $1 >> /var/log/cluster_deploy.log
}
BOOTPKG=/usr/local/bin
cd $BOOTPKG
eth0ipflag=0
while [ $eth0ipflag == 0 ]; do
eth0ip=$(ifconfig eth0 | awk '/inet/ {print $2}' | cut -f2 -d ":" |awk 'NR==1 {print $1}')
if [ ! -z $eth0ip ]; then
eth0ipflag=1
ifconfig eth0 mtu 1500 up
fi
done
function fileserver_curl() {
target=$1
tong=1
timeout=5
while [ $tong == 1 ]; do
ret_code=$(curl -I -g -s --connect-timeout $timeout $target -w %{http_code} | tail -n1)
if [ "x$ret_code" = "x200" ]; then
tong=0
echo "curl $target is ok"
else
echo "will curl $target after 5s "
sleep 5
fi
done
}
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
set +e +x
modprobe nf_conntrack_ipv4
modprobe nf_conntrack
set -e -x
prepare_install_pkg(){
PKG_TYPE=$1
PKG_VERSION=$2
mkdir -p pack/
public_common_log "Download pkg from ${FILE_SERVER}"
if [ ! -f pack/${PKG_TYPE}-${PKG_VERSION}.tar.gz ]; then
for ((integer = 0; integer < 3; integer++)); do
if [ -z $FILE_SERVER ]; then
public_common_log "PKG_FILE_SERVER is not config."
exit 1
fi
curl -g --retry 5 --retry-delay 2 $FILE_SERVER/share/${PKG_TYPE}-${PKG_VERSION}.tar.gz \
> pack/${PKG_TYPE}-${PKG_VERSION}.tar.gz || (public_common_log "download failed with 4 retry,exit 1" && exit)
if ! tar -xvf pack/${PKG_TYPE}-${PKG_VERSION}.tar.gz; then
if [ "$integer" == "2" ]; then
public_common_log "Failed to download pkg file." && exit 1
fi
public_common_log "Untar ${PKG_VERSION}.tar.gz failed!, retry"
rm -rf ${PKG_TYPE}-${PKG_VERSION}.tar.gz
sleep 5
else
return 0
fi
done
fi
public_common_log "Finished download the pkg: $PKG_TYPE"
}
public_common_parse_args() {
while
[[ $# -gt 0 ]]
do
key="$1"
case $key in
--file-server)
export FILE_SERVER=$2
shift
;;
*)
# unknown option
public_common_log "unknow option [$key]"
;;
esac
shift
done
}
main(){
public_common_log "Start install k8s script..."
public_common_parse_args "$@"
fileserver_curl $FILE_SERVER
prepare_install_pkg "ecloud-k8s-script" "v1.6.5"
public_common_log "Script has been downloaded."
}
help="
Usage:
"$0"
--file-server http://10.142.113.88
"
main "$@"
1. Global log hijacking
Right at the top, `exec 1>... 2>&1` redirects all subsequent stdout and stderr to /var/log/cluster_deploy.log. During the cloud-init phase there is no terminal to echo to, so this is done for troubleshooting: if a node ever fails to come up, you log in, read this file, and see exactly which step it got stuck on.
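The pattern is easy to reproduce in isolation; in this sketch a temp file stands in for /var/log/cluster_deploy.log:

```shell
# Sketch of the same log-hijack pattern; the temp file stands in
# for /var/log/cluster_deploy.log.
log=$(mktemp)
(
  exec 1>>"$log" 2>&1           # from here on, stdout AND stderr go to the log
  echo "step 1: configure network"
  ls /nonexistent-path || true  # the ls error (stderr) is captured as well
)
```

A common variant when you also want console output is `exec > >(tee -a "$log") 2>&1`.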
2. Busy-waiting for the NIC to be ready
When the VM boots, DHCP may take a few seconds to hand out an IP. The script spins in a while loop until eth0 has an address, then forces `ifconfig eth0 mtu 1500 up`.
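Note the loop has no sleep and no timeout, so it spins at full speed and hangs forever if eth0 never comes up. A more defensive variant (hypothetical `wait_for` helper, not from the script):

```shell
# Poll a command once per second until it succeeds or the timeout expires.
wait_for() {
  local timeout=$1; shift
  local waited=0
  until "$@"; do
    waited=$((waited + 1))
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 1
  done
}
# e.g. wait_for 120 sh -c "ip -4 addr show eth0 | grep -q inet"
```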
3. Preloading the IPVS kernel modules
The kubeadm.cfg we saw earlier configures kube-proxy in ipvs mode, so the script preloads via modprobe: ip_vs_rr (round-robin), ip_vs_wrr (weighted round-robin), ip_vs_sh (source hashing), and nf_conntrack (connection tracking, a hard dependency of K8s Service NAT). This ensures kube-proxy won't crash later for lack of kernel modules.
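The script probes both nf_conntrack_ipv4 and nf_conntrack under `set +e` because the per-family module was merged into nf_conntrack in kernel 4.19, so only one of the two names exists on any given kernel. The choice can be expressed as pure version logic (hypothetical helper; assumes a simple major.minor comparison is enough):

```shell
# Pick the conntrack module name for a given kernel version string.
conntrack_module() {
  local ver=$1 major minor
  major=${ver%%.*}
  minor=${ver#*.}; minor=${minor%%.*}
  if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; }; then
    echo nf_conntrack        # 4.19+: merged module
  else
    echo nf_conntrack_ipv4   # older kernels: per-family module
  fi
}
# e.g. modprobe "$(conntrack_module "$(uname -r)")"
```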
4. Downloading the core payload
The user_data runcmd shown earlier invokes: deploycluster.sh --file-server 10.195.207.205:32092
Once main() has the file server address, it does three things:
Probe: fileserver_curl polls the HTTP server's headers until it gets a 200 OK.
Download: fetches http://10.195.207.205:32092/share/ecloud-k8s-script-v1.6.5.tar.gz; to survive network flakiness it uses curl --retry, wrapped in an outer for loop of three attempts.
Extract: unpacks the tarball into the current directory, /usr/local/bin.
Once this command finishes and exits 0, the dedicated K8s deployment toolkit is sitting in /usr/local/bin, and the runcmd in user_data seamlessly moves on to the next command, kuberun.sh.
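The download-validate-retry pattern in prepare_install_pkg can be distilled into a standalone sketch (hypothetical fetch_pkg helper; the real script additionally logs and bails out when FILE_SERVER is unset):

```shell
# Download a tarball and verify it actually extracts, retrying up to 3 times.
fetch_pkg() {
  local url=$1 dest=$2 attempt
  for attempt in 1 2 3; do
    if curl -g -fsS --retry 5 --retry-delay 2 "$url" -o "$dest" \
        && tar -tzf "$dest" >/dev/null 2>&1; then
      return 0                 # downloaded and integrity-checked
    fi
    rm -f "$dest"              # throw away the corrupt download
    sleep 5
  done
  return 1
}
```

Validating with `tar -tzf` before declaring success is what the original's "untar failed, retry" branch achieves.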
Reading the ecloud-k8s-script project
After extracting ecloud-k8s-script-v1.6.5.tar.gz, the project layout looks like this:
myluzh@myluzhMacBookPro ecloud-k8s-script-v1.6.5 % tree
.
├── kcs_install # Core KCS deployment code
│ ├── common_0_log_gather.sh # Collects deployment logs asynchronously in the background for troubleshooting
│ ├── common_0_utils.sh # Shared utilities (download, extraction, retry logic, FileServer checks)
│ ├── common_1_system.sh # System-level setup (OS tuning, sysctl kernel parameters, NIC/disk init, disabling swap)
│ ├── common_2_runtime.sh # Container runtime deployment (Docker/containerd/Kata and their configs)
│ ├── common_3_rpm_deb.sh # Package-manager adapters per OS (auto-switches between RPM and DEB installs)
│ ├── CVE_2016_2183_kcs.sh # Security hardening (TLS fixes, stronger cipher suites, CVE-2016-2183 mitigation)
│ ├── eki_config.sh # Environment configuration specific to EKI (Mobile Cloud's K8s engine)
│ ├── ethX # Static network interface templates
│ │ ├── ifcfg-eth0 # NIC 0 config template
│ │ ├── ifcfg-eth1 # NIC 1 config template
│ │ └── ifcfg-eth2 # NIC 2 config template
│ ├── kcs_conf # Core configs for K8s and its components
│ │ ├── 10-kubeadm.conf # Kubeadm drop-in unit file (kubelet argument overrides)
│ │ ├── 98-k8s-nofile.conf # File-descriptor (nofile) limit tuning
│ │ ├── audit-policy.yml # K8s API audit policy (which events to audit, retention period)
│ │ ├── backup # Backup scripts and archived default configs
│ │ │ ├── 20240813_config80.toml # Config archived on a specific date
│ │ │ ├── arm_defautl_config.toml # ARM default config
│ │ │ ├── backup_etcd.sh # etcd cold backup / snapshot script
│ │ │ ├── backup_k8s_conf.sh # Backup script for key K8s configs (/etc/kubernetes/)
│ │ │ └── x86_default_config.toml # x86 default config
│ │ ├── daemon.json # Docker engine config (registry mirrors, storage driver, acceleration)
│ │ ├── gpu_daemon.json # Docker/containerd config for GPU scenarios
│ │ ├── k8s.conf # Static K8s config (likely sysctl settings or fixed variables)
│ │ ├── kcs_gpu_config.sh # GPU node init script (driver, toolkit setup)
│ │ ├── kcs_ipv6_get.sh # Logic for obtaining the IPv6 address on dual-stack networks
│ │ ├── kcs_ubuntu_ipv6_get.sh # IPv6 acquisition script adapted for Ubuntu
│ │ ├── kubelet.service # Kubelet systemd unit definition
│ │ ├── net_calico_cilium.conf # CNI network presets for Calico or Cilium
│ │ ├── net_starway.conf # CNI config for Starway (in-house SDN)
│ │ ├── nvidia-persistenced.service # NVIDIA driver persistence service (keeps GPUs awake)
│ │ ├── toml # containerd TOML configs
│ │ │ ├── arm_config.toml # containerd config for ARM
│ │ │ ├── config.toml # Standard containerd default config
│ │ │ ├── gpu_config.toml # containerd config wired up for the GPU runtime
│ │ │ ├── kata_config.toml # containerd config wired up for Kata secure containers
│ │ │ ├── npu_kcs_config.toml # Huawei Ascend NPU config for KCS
│ │ │ └── npu_ubuntu_config.toml # Huawei Ascend NPU config for Ubuntu
│ │ └── yum # Local / private repo configurations
│ │ ├── bclinux82 # Repo templates for BC-Linux 8.2
│ │ │ ├── BCLinux-AppStream.repo
│ │ │ ├── BCLinux-BaseOS.repo
│ │ │ ├── BCLinux-Kernel.repo
│ │ │ └── BCLinux-PowerTools.repo
│ │ ├── euler22 # Repo config for BC-Linux Euler 22.10
│ │ │ └── BCLinux.repo
│ │ └── ouler21 # Repo config for BC-Linux Euler 21.10
│ │ └── BCLinux.repo
│ ├── kubernetes.sh # Main K8s install script (the hub everything else is called from)
│ └── patch # Patches / tuning for core K8s components
│ └── coredns-podAntiAffinity.yaml # CoreDNS anti-affinity patch (spreads replicas for HA)
└── kuberun.sh # Deployment entry script (what actually gets run)
Reading kuberun.sh
[root@kcs-k8s-test-m-tp88s /usr/local/bin]# cat kuberun.sh
#!/usr/bin/env bash
############################################################################
# Script entry
# -- kuberun.sh
# ---- kubernetes.sh (sources the relevant common_N_XX.sh helpers)
# ------ common_N_XX.sh
### fileserver directory layout
# /apps/data/ekcs/kcs-fileserver
#.
#├── ecloud-k8s-script-v1.6.2.tar.gz
#├── ecloud-k8s-script-v1.6.3.tar.gz
#├── ecloud-k8s-script-v1.6.5.tar.gz
#├── index.html
#├── kcs_tools
#│ ├── bclinux7.6_fix_NM.sh
#│ ├── bclinux7.X_fix_NM.sh
#│ ├── calicoctl
#│ ├── coreutils-8.22-24.el7.x86_64.rpm
#│ ├── helm
#│ ├── kubectl
#│ ├── sshpass-1.06-2.el7.x86_64.rpm
#│ └── tcpping
#├── rpm_install
#│ ├── kubernetes-v1.18.3.tar.gz
#│ ├── kubernetes-v1.21.5.tar.gz
#│ ├── kubernetes-v1.25.3.tar.gz
#│ └── kubernetes-v1.25.7-eki.3.0.0.tar.gz
#├── rpm_upgrade
#│ ├── fix-upgrade-pod.yaml
#│ ├── kubernetes-v1.15.5.tar.gz
#│ ├── kubernetes-v1.16.14.tar.gz
#│ ├── kubernetes-v1.17.11.tar.gz
#│ ├── kubernetes-v1.18.3.tar.gz
#│ ├── kubernetes-v1.19.16.tar.gz
#│ ├── kubernetes-v1.20.15.tar.gz
#│ ├── kubernetes-v1.20.1.tar.gz
#│ └── kubernetes-v1.21.5.tar.gz
#├── runtime
#│ └── containerd
#│ └── containerd-1.6.15.tar.gz
## Other notes
# 1. The script detects whether the OS is rpm- or deb-based; output: FILE_PATH=deb_install / ubuntu_install
# FILE_PATH is then used as $FILE_SERVER/share/$FILE_PATH/${PKG_TYPE}-${PKG_VERSION}.tar.gz
# 2. The script detects whether the machine is kvm or bm; output: INTERFACE="eth0" / bond1
############################################################################
#
set -e -x
### Opened in append mode, because deploycluster.sh wrote to this log first
exec 1>>/var/log/cluster_deploy.log 2>&1
echo -e "\n\n ==== deploycluster.sh done ====\n\n"
echo -e "\n\n ==== kuberun.sh begin ====\n\n"
BOOTPKG=/usr/local/bin/kcs_install
cd $BOOTPKG
# source $(
# cd $(dirname ${BASH_SOURCE})
# pwd
# )/kubernetes.sh
source $BOOTPKG/kubernetes.sh
function kcs_1_init_env() {
log_info "kcs_1_init_env"
log_info "kcs_kuberun_args_to_env"
kcs_kuberun_args_to_env "$@"
### env default
log_info "kcs_env_default"
kcs_env_default
}
function kcs_2_init_check_before() {
log_info "kcs_2_init_check_before"
bash_PS1
profile_TMOUT
log_info "check os type"
### output: FILE_PATH
system_os_check
log_info "check kvm"
### output: INTERFACE, NODE_TYPE
system_kvm_check
}
### log_gather needs the environment variables
function kcs_3_log_gather_bg() {
log_info "kcs_3_log_gather_bg"
log_gather_bg &
}
###
function kcs_4_config_before() {
log_info "kcs_4_config_before"
log_info "common system config"
## modify_chrony_conf
## set_sysctl
system_common_config_settings
echo ""
## todo: optimize
case $OS_VERSION in
BC-LINUX-7.*)
log_info "os [$OS_VERSION], no more change prefixLen"
## http://jira.cmss.com/browse/EKCS-2203
## Found on 2023-11-28: after NetworkManager replaced network, this change is no longer needed (and no longer takes effect anyway)
## Keeping the logic around for now; known pitfall
# modify_new_ip6_prefixlen_bclinux7
;;
BC-LINUX-8.*)
log_info "os [$OS_VERSION], new machine_id"
new_machine_id_bc_linux_8_eth0
journalctl_fix
;;
*)
log_info "os [$OS_VERSION] do nothing in [system_common_config_settings]"
;;
esac
log_info "check NetworkManager"
log_info "cis && fileserver config route"
## Persist and add routes
system_route_config
## NetworkManager and eth0 configuration
## todo: must not nmcli reload / up or restart NM after this, or the v6 address is lost
NetworkManager_config_and_disable_network
public_common_log "check_active_ipv4"
if [[ "$V4_ENABLE" == "true" ]]; then
system_check_ipv4_address_dhclient
else
log_info "V4_ENABLE == false,"
fi
log_info "enable ipv6"
if [[ "$IPV6_FIRST" == "true" ]]; then
system_check_ipv6_address_dhclient6
else
log_info "IPV6_FIRST == false"
fi
log_info "fileserver_curl"
fileserver_curl $FILE_SERVER
log_info "config hostname"
system_config_hostname
log_info "config image_host"
system_config_image_host
log_info "config cisNameServer"
system_config_cis_nameserver
### configure the disk or not
### vm: install on the cloud disk
### bm: install on the system disk
log_info "config cloud disk"
if [[ "vm" == "$NODE_TYPE" ]]; then
log_info "kvm init cloud disk"
system_init_cloud_disk_vm
### EKCS-4273: the master's etcd data stays on the system disk (SSD)
### a worker's container data goes on the data disk (HDD)
### EKCS-4495: check whether the bm system disk is /dev/sda
elif [[ "bm" == "$NODE_TYPE" ]] || [[ "ebm" == "$NODE_TYPE" ]]; then
if [[ "deploy-work-nodes" == "$ROLE" ]]; then
root_devcie=$(df -h / | awk NR==2 | awk '{print substr($1,1,8)}')
if [[ "/dev/sda" == "$root_devcie" ]] || [[ "/dev/vda" == "$root_devcie" ]]; then
log_info "bm init cloud disk"
system_init_cloud_disk_bm
else
log_info "root device is $root_devcie ,no support just jump"
fi
fi
fi
## Checking the ipv6 address restarts NetworkManager, which invalidates previously configured routes, so route persistence is done last
## Apple cloud runs rhel8.8, which has no ifup command; add a check for Mobile Cloud compatibility
## Persist routes under /etc/sysconfig/network-scripts/route-$INTERFACE
## nmcli c reload && nmcli c up $INTERFACE
## The MTU is handed out via DHCP, so writing it to a config file is useless; it can only be changed at runtime
# ip link show $INTERFACE
# log_info "ifconfig $INTERFACE mtu $MTU_VALUE up "
# ifconfig $INTERFACE mtu $MTU_VALUE up
# ip link show $INTERFACE
}
### Before any installation, move the repo files away to suppress yum check-update
### yum check-update is extremely slow, and per the OS team it cannot be turned off
function kcs_5_mv_repo() {
log_info "kcs_5_mv_repo"
### set -e is in effect by default: abort on failure
if [[ "rpm_install" == $FILE_PATH ]]; then
for i in $(find /etc/yum.repos.d/ -type f -name "*.repo"); do
mv $i $i.cmss.bak
done
#set +e
#yum clean all
#yum makecache
#set -e
fi
log_info "mv repo done"
}
### pre download from fileserver [background]
function kcs_6_0_pkg_k8s_download_acc() {
log_info "kcs_6_0_pkg_k8s_download_acc"
k8s_pkg_down_and_tar "kubernetes" $KUBE_VERSION &
}
function kcs_6_1_runtime_deploy() {
public_common_log "=== kcs_6_1_runtime_deploy [$CLUSTER_RUNTIME]"
###
case $CLUSTER_RUNTIME in
"docker")
runtime_docker_deploy
;;
"containerd")
runtime_containerd_deploy
;;
"kata")
runtime_kata_deploy
;;
*)
public_common_log "runtime[$CLUSTER_RUNTIME] not support,just exit 1."
exit 1
;;
esac
}
function kcs_7_1_k8s_install() {
### K8S deploy
public_common_log "kcs_7_1_k8s_install"
### download from fileserver
### Usually pre-downloaded already; check both that the gz package exists AND that extraction has finished. Slow extraction has previously left later install steps unable to find the rpm packages
k8s_pkg_check_or_down "kubernetes" $KUBE_VERSION
### INSTALL
case $FILE_PATH in
"rpm_install")
rpm_install_kubeadm_by_os
;;
"deb_install")
deb_install_kubeadm
;;
*)
public_common_log "kubeadm_install[$FILE_PATH] not support,just exit 1."
exit 1
;;
esac
echo "kcs_7_1_k8s_install done "
}
function kcs_8_restore_repo_file() {
public_common_log "kcs_8_restore_repo_file"
if [[ "rpm_install" == $FILE_PATH ]]; then
for i in $(find /etc/yum.repos.d/ -type f -name "*.cmss.bak"); do
mv $i ${i%%.cmss*}
done
fi
#### Strict match on 7.5: configure the yum repo
if [[ "BC-LINUX-7.5" == "$OS_VERSION" ]]; then
system_config_repo7_5
fi
if [[ "BC-LINUX-7.6" == "$OS_VERSION" ]]; then
system_config_repo7_6
fi
if [[ "BC-LINUX-Euler-21.10" == "$OS_VERSION" ]]; then
system_config_repo_euler_21_10
fi
if [[ "BC-LINUX-Euler-22.10" == "$OS_VERSION" ]]; then
system_config_repo_euler_22_10
fi
if [[ "BC-LINUX-8.2" == "$OS_VERSION" ]]; then
system_config_repo_bc_82
fi
log_info "kcs_8_restore_repo_file done"
}
#iptables mode
#iptables-save |grep <service-name>
#
#ipvs mode
#ipvsadm -L -n
function kcs_9_k8s_init_or_join() {
public_common_log "kcs_9_k8s_init_or_join"
### DEPLOY
case $ROLE in
"deploy-masters")
### the first master
master_1_before_set
public_main_init_master
;;
"deploy-minor-masters")
### the other masters
master_2_3_before_set
public_main_init_minor_master
### at this point the apiserver on localhost is already up
master_2_3_after_set
;;
"deploy-work-nodes")
### worker nodes
node_before_set
public_main_init_node
## wait until the lvscare pod is running; the ipvs rule is added, then switched to the vip
## ipvsadm -Ln|grep 6443 |wc -l
node_after_set
;;
*)
public_common_log "k8s_init_or_join[$ROLE] not support,just exit 1"
exit 1
;;
esac
log_info "kcs_9_k8s_init_or_join done"
}
### MAIN
function main() {
### For everything below:
### default set -e : exit on failure
### default set +x : no command tracing
################# toggle manually
#### set +e
#### continue on failure
#### set -e
########
#### set -x
#### run with command tracing
#### set +x
###################
echo "== net.ipv4.tcp_mem == "
sysctl -a |grep net.ipv4.tcp_mem
set +x
kcs_1_init_env "$@"
kcs_2_init_check_before
### log gather
kcs_3_log_gather_bg
### config and init
kcs_4_config_before
### Move the repo files away before any installation to suppress yum check-update
kcs_5_mv_repo
### Files are no longer fetched in parallel
### Fully serial since 1.6.5, to keep the logs from interleaving
public_common_log "kubernetes [$KUBE_VERSION] download begin"
k8s_pkg_check_or_down "kubernetes" "$KUBE_VERSION"
### runtime deploy
kcs_6_0_k8s_version_set
if [[ "kata" != "$NODE_RUNTIME" ]] ; then
kcs_6_1_runtime_deploy
else
log_info "kata runtime node ,start runtime_kata_deploy"
runtime_kata_deploy
fi
### k8s install
kcs_7_1_k8s_install
### Later deployment steps may still fail, so restore the repos here first lest it be forgotten
kcs_8_restore_repo_file
## deploy k8s
kcs_9_k8s_init_or_join
echo ""
kubelet_121_fix
echo ""
## Apple cloud registration
aipo_register
log_info "KCS All Done!"
}
help="
Usage:
"$0"
--image-server-ip 10.123.123.123
--file-server http://10.142.113.88
--role deploy [deploy-masters | deploy-minor-masters | deploy-work-nodes]
"
main "$@"
1. Core KCS deployment flow (Main Entry)
- Phase 1: kcs_1_init_env - environment init: argument parsing, global variables, default values.
- Phase 2: kcs_2_init_check_before - pre-deployment checks: OS type, kernel version, VM vs. bare metal.
- Phase 3: kcs_3_log_gather_bg - log collection: a background process records the entire deployment.
- Phase 4: kcs_4_config_before - system pre-configuration: network stack, disk mounts, hostname and /etc/hosts.
- Phase 5: kcs_5_mv_repo - repo lockout: temporarily remove the yum repos so a yum lock cannot interrupt the deployment.
- Phase 6: K8s package download - fetch the matching offline install package from the remote FileServer.
- Phase 7: kcs_6_1_runtime_deploy - container runtime deployment: Docker, containerd, or Kata.
- Phase 8: kcs_7_1_k8s_install - K8s component install: distribute binaries, generate/distribute certificates and config files.
- Phase 9: kcs_8_restore_repo_file - environment restore: put the yum/apt repo files back.
- Phase 10: kcs_9_k8s_init_or_join - cluster membership: init on the first master, join on every other node.
- Phase 11: post-processing - CVE fixes, backup policies, the CoreDNS patch, and monitoring add-on installation.
2. Support matrix
Node roles: deploy-masters (first master), deploy-minor-masters (HA masters), deploy-work-nodes (workers).
OS compatibility: BC-Linux (7.5/7.6/8.2), BC-Linux Euler (21.10/22.10), Ubuntu (20.04/22.04), Kylin V10, Anolis 8.10.
Stack support: containerd (recommended), Docker, Kata Containers; Calico, Cilium, and Mobile Cloud's in-house Starway networking; NVIDIA GPU and Huawei Ascend NPU adaptations.
3. Core functional modules
Infrastructure: common_0_utils.sh (download, extraction, retries); common_1_system.sh (time sync, kernel tuning, disk init).
Runtime components: common_2_runtime.sh (runtime deployment); common_3_rpm_deb.sh (cross-platform package management).
Security and maintenance: CVE_2016_2183_kcs.sh (TLS hardening, vulnerability fixes); automatic backups (periodic etcd and K8s config snapshots).
4. Typical invocation
./kuberun.sh \
--fileServer http://10.142.113.88 \
--imageServerIP 10.123.123.123 \
--imageRepo registry.paas,registry.paas \
--role deploy-masters \
--kubeVersion v1.21.5 \
--clusterRuntime containerd \
--clusterNetworkMode calico
0x04 Manually Joining a Host to the KCS Cluster
Note: this section documents forcing an ordinary ECS cloud host into a Mobile Cloud KCS container cluster. The node runs fine inside Kubernetes (at the kubectl level) and does not affect the existing cluster, but the Mobile Cloud console (the KCS management page) has the following known limitations:
Incomplete information: the console may fail to read the node's specs, billing mode, live resource monitoring, and other metadata.
Detached lifecycle: native KCS operations such as reboot, reinstall, data-disk expansion, or one-click deletion are unavailable from the console.
No component maintenance: the node's kubelet and containerd fall outside KCS's automatic patch and upgrade policies and must be maintained by hand.
Cloud plugin compatibility: attaching cloud disks (EBS) or auto-binding load balancers (ELB) may require manually configuring the providerID or the corresponding label mappings.
For learning and research only; proceed at your own risk!!!
How the official script joins worker nodes
Studying the official script confirms that worker nodes really are added with kubeadm join. The key code:
# From kubernetes.sh, lines 177-198:
### deploy worker node
function public_main_init_node() {
log_info "Start deploy the work node."
### join
log_info "cmd_kubeadm_join"
utils_retry_5_run cmd_kubeadm_join
log_info "CVE_2016_2183_kcs[Work node]"
CVE_2016_2183_kcs
gpu_config_node
npu_config
log_info "Work node has finished!"
deploy_success_flag
}
# The join function, lines 25-32:
function cmd_kubeadm_join() {
kubeadm join --config /etc/kubeadm/kubeadm.cfg -v=10 \
--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests
}
# So the official flow is:
# 1. ConfigDrive injection → /etc/kubeadm/kubeadm.cfg (contains the join configuration)
# 2. cloud-init runs → kuberun.sh --role deploy-work-nodes
# 3. kubeadm join → joins the cluster using the pre-injected config file
# A worker node's kubeadm.cfg looks roughly like:
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: <token>
    unsafeSkipCAVerification: true
  timeout: 5m0s
nodeRegistration:
  name: <node-name>
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    cloud-provider: "external"
Ordering a new cloud host
For testing, I manually ordered an ECS cloud host in the same availability zone.
Its disks match the cluster's other worker nodes: a 100G system disk, plus a 100G data disk mounted at /var/lib/paascontainer/.
The new worker node must satisfy:
- Network reachability - can reach port 6443 on the master nodes
- Base environment - containerd, kubelet, and kubeadm installed
- Correct configuration - kubelet version compatible with the cluster
Initializing the partition
# Format sdb as xfs as well
root@k8s-test-gpu:~# sudo mkfs.xfs /dev/sdb -f
meta-data=/dev/sdb isize=512 agcount=4, agsize=6553600 blks
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.
# Check the UUID
root@k8s-test-gpu:~# lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
sda
└─sda1 ext4 1.0 9cb9e9a5-e5a1-46e3-9ead-2330073ed95a 77.5G 17% /
sdb xfs f4a0c79c-f46e-4912-9bf9-e19e90f3d418
sr0 iso9660 Joliet Extension config-2 2026-03-26-14-28-31-00
# Create the mount point
root@k8s-test-gpu:~# mkdir -p /var/lib/paascontainer
# Configure automatic mounting
root@k8s-test-gpu:~# echo "UUID=f4a0c79c-f46e-4912-9bf9-e19e90f3d418 /var/lib/paascontainer/ xfs defaults 0 0" >> /etc/fstab
# Verify the fstab entry
root@k8s-test-gpu:~# cat /etc/fstab
/dev/disk/by-uuid/9cb9e9a5-e5a1-46e3-9ead-2330073ed95a / ext4 defaults 0 1
/swap.img none swap sw 0 0
UUID=f4a0c79c-f46e-4912-9bf9-e19e90f3d418 /var/lib/paascontainer/ xfs defaults 0 0
# Try the mount
root@k8s-test-gpu:~# mount -a
# Check the mounts
root@k8s-test-gpu:~# df -h | grep sd
/dev/sda1 99G 19G 76G 20% /
/dev/sdb 100G 2.0G 98G 2% /var/lib/paascontainer
Preparing the base environment
# Disable swap
sudo swapoff -a
sudo sed -i '/swap/s/^/#/' /etc/fstab
# Load the kernel modules
modprobe br_netfilter
modprobe overlay
# Configure sysctl
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
sysctl --system
# Configure hosts
echo "127.0.0.1 $(hostname)" >> /etc/hosts
# Create the same directory structure as KCS
mkdir -p /var/lib/paascontainer/{kubelet,containerd,docker,etcd} && \
ln -s /var/lib/paascontainer/containerd /var/lib/containerd && \
ln -s /var/lib/paascontainer/docker /var/lib/docker && \
echo "目录结构配置完成!"
Installing containerd
The cluster currently runs containerd 1.6.28, so for compatibility I chose to install from the binary release.
# Download the full bundle (additionally includes crictl - the CRI CLI tool, the CRI plugin config, and systemd service files)
root@k8s-test-gpu:~# wget https://github.com/containerd/containerd/releases/download/v1.6.28/cri-containerd-1.6.28-linux-amd64.tar.gz
# Install (extract straight into /, files land in their proper locations)
root@k8s-test-gpu:~# tar -xzf cri-containerd-1.6.28-linux-amd64.tar.gz -C /
# Copy the cluster's config files
mkdir -p /etc/containerd/certs.d/cis-hub-huadong-7.cmecloud.cn
scp root@<WorkIP>:/etc/containerd/config.toml /etc/containerd/config.toml
scp root@<WorkIP>:/etc/containerd/certs.d/cis-hub-huadong-7.cmecloud.cn/hosts.toml /etc/containerd/certs.d/cis-hub-huadong-7.cmecloud.cn/hosts.toml
# Start containerd
systemctl daemon-reload
systemctl enable containerd --now
systemctl status containerd
Installing the Kubernetes components
The cluster's kubeadm, kubectl, and kubelet are all v1.29.5-eki.4.1.0. Again for compatibility, the new machine simply reuses these components from the cluster, copied straight over from an existing node.
[root@kcs-k8s-test-m-tp88s ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"29+", GitVersion:"", GitCommit:"676da85a36624f8563ee60894752f937c2f725ce", GitTreeState:"clean", BuildDate:"2024-10-15T09:39:26Z", GoVersion:"go1.21.9", Compiler:"gc", Platform:"linux/amd64"}
[root@kcs-k8s-test-m-tp88s ~]# kubelet --version
Kubernetes v1.29.5-eki.4.1.0
[root@kcs-k8s-test-m-tp88s ~]# kubectl version
Client Version: v1.29.5-eki.4.1.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.5-eki.4.1.0
Copy all components from an existing node
# Copy the binaries
scp root@<WorkIP>:/usr/bin/kubeadm /usr/bin/
scp root@<WorkIP>:/usr/bin/kubelet /usr/bin/
scp root@<WorkIP>:/usr/bin/kubectl /usr/bin/
scp root@<WorkIP>:/usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/
chmod +x /usr/bin/kubeadm /usr/bin/kubelet /usr/bin/kubectl
# Verify the version
kubeadm version
Configuring kubelet
# Create the required directories
mkdir -p /etc/kubernetes/pki
mkdir -p /var/lib/kubelet/pki
mkdir -p /etc/kubernetes/nginx-proxy
mkdir -p /etc/kubernetes/manifests
# Copy the CA certificate (replace <MasterIP>)
scp root@<MasterIP>:/etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.crt
# Copy the kubelet config
scp root@<WorkIP>:/var/lib/kubelet/config.yaml /var/lib/kubelet/config.yaml
# Write kubeadm-flags.env
HOSTNAME=$(hostname)
IP=$(hostname -I | awk '{print $1}')
cat > /var/lib/kubelet/kubeadm-flags.env <<EOF
KUBELET_KUBEADM_ARGS="--cloud-provider=external --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=${HOSTNAME} --max-pods=128 --node-ip=${IP} --node-labels=machine.ecloud.cmss.com/node-type=center,machine.ecloud.cmss.com/machine-region=N0574-ZJ-NBZD01,node.kubernetes.io/cloud=,machine.ecloud.cmss.com/machine-name=${HOSTNAME},machine.ecloud.cmss.com/machine-type=VM,machine.ecloud.cmss.com/specsname=gpu.2xlarge.2 --pod-infra-container-image=cis-hub-huadong-7.cmecloud.cn/ecloud/pause:3.9"
EOF
Configuring nginx-proxy
The nginx-proxy on a worker node listens locally on port 6443 and proxies to the three masters, providing high availability (automatic failover if one master dies) and load balancing (requests spread across all three).
The automatically created worker nodes are all configured this way, so I copied the setup verbatim.
# Create the nginx config
cat > /etc/kubernetes/nginx-proxy/nginx.conf <<'EOF'
error_log stderr notice;
worker_processes 1;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}
stream {
  upstream kube_apiserver {
    least_conn;
    server 192.168.11.139:6443;
    server 192.168.11.38:6443;
    server 192.168.11.246:6443;
  }
  server {
    listen 6443;
    proxy_pass kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 10s;
  }
}
EOF
# Create the static Pod manifest
cat > /etc/kubernetes/manifests/kube-nginx.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: kube-nginx
  name: nginx-proxy
  namespace: kube-system
spec:
  containers:
  - image: cis-hub-huadong-7.cmecloud.cn/ecloud/nginx:kcs-proxy
    imagePullPolicy: Always
    name: nginx-proxy
    resources:
      limits:
        cpu: 300m
        memory: 512Mi
      requests:
        cpu: 100m
        memory: 128Mi
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/nginx-proxy
    name: etc-nginx
status: {}
EOF
Alternatively, you can point kubelet straight at a master (simpler): have kubelet.conf use server: https://192.168.11.139:6443. Pointing kubelet.conf at a single master is not recommended, though, because it gives up high availability.
# Get the current hostname
HOSTNAME=$(hostname)
# Write kubelet.conf
cat > /etc/kubernetes/kubelet.conf <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <your base64-encoded CA certificate>
    server: https://192.168.11.139:6443
  name: k8s-test
contexts:
- context:
    cluster: k8s-test
    user: system:node:${HOSTNAME}
  name: system:node:${HOSTNAME}@k8s-test
current-context: system:node:${HOSTNAME}@k8s-test
users:
- name: system:node:${HOSTNAME}
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet.crt
    client-key: /var/lib/kubelet/pki/kubelet.key
EOF
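The certificate-authority-data placeholder is just the master's /etc/kubernetes/pki/ca.crt run through base64 with line wrapping disabled (helper name below is mine; assumes GNU coreutils base64 with the -w flag):

```shell
# Produce a kubeconfig certificate-authority-data value from a CA cert file.
ca_data() { base64 -w0 < "$1"; }
# e.g. ca_data /etc/kubernetes/pki/ca.crt
```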
Configuring the kubelet systemd unit
# Create the drop-in config directory
mkdir -p /etc/systemd/system/kubelet.service.d
# Write 10-kubeadm.conf
cat > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF
# Reload the systemd configuration
systemctl daemon-reload
Getting a Join Token and Joining the Cluster
Run this on a master node:
# Generate a fresh join command
kubeadm token create --print-join-command
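If you prefer to assemble the join command by hand, the --discovery-token-ca-cert-hash can also be recomputed from the cluster CA: it is the sha256 of the CA's public key in DER form (sketch using openssl; the helper name is mine):

```shell
# Compute kubeadm's discovery hash: sha256 over the CA public key (DER/SPKI).
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print "sha256:" $NF}'
}
# e.g. ca_cert_hash /etc/kubernetes/pki/ca.crt
```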
Then run the join command on the new node:
# Stop kubelet and clean up the environment
systemctl stop kubelet
rm -f /etc/kubernetes/kubelet.conf
rm -rf /var/lib/kubelet/pki/*
# Run the join command (substitute the actual generated token and hash)
# Adding --v=5 is recommended for easier troubleshooting
kubeadm join apiserver.cluster.local:6443 \
--token <your-token> \
--discovery-token-ca-cert-hash sha256:<your-hash> \
--ignore-preflight-errors=all