Published: 2024-01-23  Author: myluzh  Category: Kubernetes
Rook is a cloud-native storage orchestration tool, and Ceph is a widely used open-source distributed storage solution; with Rook, deploying and maintaining Ceph in a Kubernetes cluster becomes dramatically simpler. Rook was incubated by the Cloud Native Computing Foundation (CNCF) and formally graduated in October 2020. Rook does not provide data storage itself; instead it integrates existing storage solutions and exposes them as a self-managing, self-scaling, self-healing cloud-native storage service. According to official community material, in the latest stable Rook release only the Rook + Ceph integration is marked stable, and its version upgrades are smooth. A Rook + Ceph deployment can provide file, block, and object storage services for cloud-native environments.

Building a Ceph cluster not only requires a lot of server resources, its inherent complexity also carries a real staffing cost. Yet once a company runs Kubernetes internally, the convenience and power of the platform are obvious, so middleware services (MySQL, RabbitMQ, ZooKeeper, and so on) tend to be deployed into the Kubernetes cluster as well, and their data needs to be persisted much as in production. Non-production environments, however, often have no existing storage platform available to Kubernetes, and standing up a new one just for them wastes a lot of effort. To solve data persistence in non-production environments, many companies use NFS as the backend, but with many containers or a large volume of data, NFS easily becomes a performance bottleneck, and if the NFS server fails, all of the data may be lost. So even non-production environments need a highly available, distributed, low-maintenance storage platform, and this is the problem Rook was born to solve.

Rook is a self-managing distributed storage orchestration system. It is not a storage system itself; rather, it builds a bridge between storage and Kubernetes that makes deploying and maintaining a storage system remarkably simple. Rook turns a distributed storage system into a self-managing, self-scaling, self-healing storage service, automating operations such as deployment, configuration, scaling, upgrades, migration, disaster recovery, monitoring, and resource management, with no manual intervention required. Rook also supports CSI, so CSI can be used for PVC snapshots, expansion, clones, and similar operations. With Rook you can quickly stand up a Ceph storage system to persist essential data. This not only lowers operational complexity, it also makes it easier to enjoy the benefits Kubernetes brings, and Rook is a convenient way to demonstrate more of Kubernetes' advanced storage features.

0x01 Installing Rook
1. Check the data disks

[root@k8s-master01 ~]# lsblk -f
NAME            FSTYPE      LABEL UUID                                   MOUNTPOINT
sda
├─sda1          xfs               044b1541-8e8e-4125-841d-6f6924560626   /boot
└─sda2          LVM2_member       6lY06E-Vy3I-Zh0n-E2b0-tpCo-xhZa-Tfo7oc
  ├─centos-root xfs               cd294525-be55-48fd-8e57-3aa02042c071   /
  └─centos-swap swap              6a621974-19d7-4d23-a7f1-91cc8eaaa169
sdb
# As shown above, sdb carries no filesystem and can be used as the Ceph data disk.
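If a candidate disk still carries an old filesystem or partition table, Ceph will refuse to use it. A quick way to wipe it clean is sketched below (assumption: /dev/sdb is the disk being dedicated to Ceph; this destroys all data on it):

# DANGER: wipes /dev/sdb completely -- run only on the disk reserved for Ceph
wipefs --all /dev/sdb
sgdisk --zap-all /dev/sdb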
2. Make sure lvm2 is installed on every Ceph node

yum -y install lvm2
[root@k8s-master ~]# yum list installed | grep lvm2
lvm2.x86_64        7:2.02.187-6.el7_9.5   @updates
lvm2-libs.x86_64   7:2.02.187-6.el7_9.5   @updates

3. Load the rbd kernel module
[root@k8s-master ~]# modprobe rbd
[root@k8s-master ~]# lsmod | grep rbd
rbd                   118784  0
libceph               483328  1 rbd
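modprobe only loads the module for the current boot. On a systemd-based distribution (an assumption, though the CentOS 7 hosts used here qualify), you can make the module load automatically on every boot:

# Load the rbd module automatically at boot
cat <<EOF > /etc/modules-load.d/rbd.conf
rbd
EOF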
4. Remove the master taint

(1) View the master node's taint
[root@k8s-master ~]# kubectl get no -o yaml | grep taint -A 5
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: 192.168.10.101

(2) Remove the taint from the master node
# This command removes the taint from all nodes
[root@k8s-master ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
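Note that clusters set up with Kubernetes v1.24 or later renamed the master taint key, so the equivalent command there (an addition for newer versions, not part of the original environment) is:

# For Kubernetes >= 1.24 the taint key is node-role.kubernetes.io/control-plane
kubectl taint nodes --all node-role.kubernetes.io/control-plane-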
5. Download the Rook v1.11.5 source:

# Mirror link: https://dl.itho.cn/k8s/rook/v1.11.5.zip
[root@k8s-master01 rook]# wget https://github.com/rook/rook/archive/refs/tags/v1.11.5.zip
[root@k8s-master01 rook]# unzip v1.11.5.zip

Note: Rook v1.11.5 supports Kubernetes v1.21.0 and above. For this walkthrough the archive had already been downloaded, so the offline file is used directly.
6. Pull the required images

(1) List the images needed for deployment
[root@k8s-master01 rook]# cd rook-1.11.5/deploy/examples/
# Check which images are required so they can be pulled in advance
[root@k8s-master01 examples]# cat images.txt
quay.io/ceph/ceph:v17.2.6
quay.io/cephcsi/cephcsi:v3.8.0
quay.io/csiaddons/k8s-sidecar:v0.5.0
registry.k8s.io/sig-storage/csi-attacher:v4.1.0
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0
registry.k8s.io/sig-storage/csi-provisioner:v3.4.0
registry.k8s.io/sig-storage/csi-resizer:v1.7.0
registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1
rook/ceph:v1.11.5

(2) Pull all of the images above with docker pull
[root@k8s-master examples]# docker pull xxxx

(3) If the images cannot be pulled directly, use the script below instead (run it on every node)
[root@k8s-master examples]# vim ceph-images.sh
#!/bin/bash
image_list=(
  csi-node-driver-registrar:v2.7.0
  csi-attacher:v4.1.0
  csi-snapshotter:v6.2.1
  csi-resizer:v1.7.0
  csi-provisioner:v3.4.0
)
registry_mirror="registry.aliyuncs.com/it00021hot"
google_gcr="registry.k8s.io/sig-storage"
# Pull each sig-storage image from the mirror, retag it to its
# registry.k8s.io name, then remove the mirror tag
for image in ${image_list[*]}
do
  docker image pull ${registry_mirror}/${image}
  docker image tag ${registry_mirror}/${image} ${google_gcr}/${image}
  docker image rm ${registry_mirror}/${image}
  echo "${registry_mirror}/${image} ${google_gcr}/${image} downloaded."
done
docker pull quay.io/ceph/ceph:v17.2.6
docker pull quay.io/cephcsi/cephcsi:v3.8.0
docker pull quay.io/csiaddons/k8s-sidecar:v0.5.0
docker pull rook/ceph:v1.11.5
echo all ok

# Run the script to pull the images
[root@k8s-master examples]# bash ceph-images.sh

7. Deploy the Rook operator
[root@k8s-master examples]# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
[root@k8s-master examples]# kubectl -n rook-ceph get pod
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-6c54c49f5f-swn5h   1/1     Running   0          5m20s
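If the operator pod does not reach Running, its logs are the first place to look; a quick check (optional, not part of the original steps):

# Tail the operator logs to confirm it started cleanly
kubectl -n rook-ceph logs deploy/rook-ceph-operator --tail=20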
0x02 Deploying the Ceph cluster

1. Create the Ceph cluster
[root@k8s-master examples]# kubectl create -f cluster.yaml

cluster.yaml lets you choose exactly which nodes and disk devices the cluster uses:
[root@k8s-master examples]# vi cluster.yaml
# Change the following settings
  useAllNodes: false      # do not use every node
  useAllDevices: false    # do not use every device
  # Explicitly list the nodes and devices to use
  nodes:
    - name: "172.16.10.20"
      devices:
        - name: "sdb"
    - name: "172.16.10.21"
      devices:
        - name: "sdb"
    - name: "172.16.10.22"
      devices:
        - name: "sdb"
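The name values under nodes must match the Kubernetes node names exactly (the kubernetes.io/hostname label), or Rook will not schedule OSDs onto them. A quick way to check:

# The names in cluster.yaml must match these node names exactly
kubectl get nodes -o custom-columns=NAME:.metadata.name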
2. Deploy the Ceph toolbox

(1) Create the toolbox
[root@k8s-master ~]# cd rook/deploy/examples/
[root@k8s-master examples]# kubectl apply -f toolbox.yaml

(2) View the pod
# An extra pod named rook-ceph-tools appears
[root@k8s-master examples]# kubectl get pod -n rook-ceph | grep tools
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-598b59df89-gjfvz   1/1     Running   0          29s

(3) Log in to rook-ceph-tools
# kubectl exec -it rook-ceph-tools-598b59df89-gjfvz -n rook-ceph -- bash
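The pod name hash changes on every deployment, so it can be more convenient to exec by deployment name instead (equivalent, just less copy-and-paste):

# Exec into the toolbox without looking up the pod name
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash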
(4) Check the Ceph cluster status

bash-4.4$ ceph -s
  cluster:
    id:     7cd53c0e-4691-4550-8c7a-b2ce7ded9bfe
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 13m)
    mgr: a(active, since 12m), standbys: b
    osd: 3 osds: 3 up (since 13m), 3 in (since 13m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   63 MiB used, 180 GiB / 180 GiB avail
    pgs:     1 active+clean

(5) View the OSD tree
bash-4.4$ ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME            STATUS  REWEIGHT  PRI-AFF
-1         0.17578  root default
-7         0.05859      host k8s-master
 2    hdd  0.05859          osd.2            up   1.00000  1.00000
-3         0.05859      host k8s-node01
 0    hdd  0.05859          osd.0            up   1.00000  1.00000
-5         0.05859      host k8s-node02
 1    hdd  0.05859          osd.1            up   1.00000  1.00000

(6) View OSD storage status
bash-4.4$ ceph osd status
ID  HOST         USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  k8s-node01  21.0M  59.9G      0        0       0        0   exists,up
 1  k8s-node02  21.0M  59.9G      0        0       0        0   exists,up
 2  k8s-master  20.6M  59.9G      0        0       0        0   exists,up

(7) List the OSD storage pools
bash-4.4$ ceph osd pool ls
.mgr

0x03 Installing the snapshot controller
1. Download the snapshot controller manifests
[root@k8s-master ~]# git clone https://github.com/dotbalo/k8s-ha-install.git
[root@k8s-master ~]# cd k8s-ha-install/
[root@k8s-master k8s-ha-install]# git checkout manual-installation-v1.20.x

2. Deploy
[root@k8s-master k8s-ha-install]# kubectl create -f snapshotter/ -n kube-system
[root@k8s-master k8s-ha-install]# kubectl get pod -n kube-system -l app=snapshot-controller
NAME                    READY   STATUS    RESTARTS   AGE
snapshot-controller-0   1/1     Running   0          4m11s
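The manifests also install the snapshot CRDs the controller works with; a quick sanity check (optional):

# The VolumeSnapshot CRDs should now be registered
kubectl get crd | grep snapshot.storage.k8s.io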
0x04 Deploying the Ceph dashboard

1. Create the external dashboard service
[root@k8s-master ~]# cd rook/deploy/examples/
[root@k8s-master examples]# kubectl create -f dashboard-external-https.yaml

2. View the services
[root@k8s-master examples]# kubectl get svc -n rook-ceph
NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
rook-ceph-mgr                            ClusterIP   10.107.221.122   <none>        9283/TCP            43m
rook-ceph-mgr-dashboard                  ClusterIP   10.101.196.95    <none>        8443/TCP            43m
rook-ceph-mgr-dashboard-external-https   NodePort    10.111.32.176    <none>        8443:31540/TCP      14s
rook-ceph-mon-a                          ClusterIP   10.102.236.97    <none>        6789/TCP,3300/TCP   52m
rook-ceph-mon-b                          ClusterIP   10.107.235.178   <none>        6789/TCP,3300/TCP   51m
rook-ceph-mon-c                          ClusterIP   10.98.200.63     <none>        6789/TCP,3300/TCP   49m

3. Delete the original dashboard service
[root@k8s-master examples]# kubectl delete svc/rook-ceph-mgr-dashboard -n rook-ceph

4. Get the ceph-dashboard login password
[root@k8s-master examples]# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
lVSYe1`Ne*6%Mzf@;dzf
# Username: admin   Password: lVSYe1`Ne*6%Mzf@;dzf

5. Log in to the ceph-dashboard
[root@k8s-master examples]# kubectl get svc -n rook-ceph
NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
rook-ceph-mgr                            ClusterIP   10.107.221.122   <none>        9283/TCP            43m
rook-ceph-mgr-dashboard-external-https   NodePort    10.111.32.176    <none>        8443:31540/TCP      14s
rook-ceph-mon-a                          ClusterIP   10.102.236.97    <none>        6789/TCP,3300/TCP   52m
rook-ceph-mon-b                          ClusterIP   10.107.235.178   <none>        6789/TCP,3300/TCP   51m
rook-ceph-mon-c                          ClusterIP   10.98.200.63     <none>        6789/TCP,3300/TCP   49m

# Access URL
https://192.168.10.101:31540/

0x05 Using Ceph block storage
1. Edit the StorageClass definition
[root@k8s-master01 ~]# cd /root/rook/deploy/examples/csi/rbd
[root@k8s-master01 rbd]# vim storageclass.yaml
  replicated:
    size: 2

2. Create the StorageClass and the storage pool
[root@k8s-master01 rbd]# kubectl create -f storageclass.yaml -n rook-ceph
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

3. Check the result
[root@k8s-master01 rbd]# kubectl get cephblockpool -n rook-ceph
NAME          PHASE
replicapool   Ready
[root@k8s-master01 rbd]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   68s

Note: the StorageClass created here is named rook-ceph-block; specify this name when creating a PVC and the PVC will bind to this storage.
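For example, a minimal PVC bound to this StorageClass could look like the sketch below (the name test-pvc and the 1Gi request are illustrative, not from the original article):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc                      # hypothetical example name
spec:
  storageClassName: rook-ceph-block   # must match the StorageClass created above
  accessModes:
    - ReadWriteOnce                   # RBD block volumes are single-node read-write
  resources:
    requests:
      storage: 1Gi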
4. Test the StorageClass with the bundled MySQL example

[root@k8s-master ~]# cd /root/rook/deploy/examples
# Rook ships a MySQL test example, shown below
[root@k8s-master examples]# vim mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim               ## name of the PVC
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block  ## must reference the StorageClass created earlier
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                  ## can be set smaller in a lab environment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
    tier: mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim   ## matches the PVC name above

Note: this file contains a PVC definition that binds to the StorageClass created earlier and dynamically provisions a PV for the Pod. For any other storage need, simply create a PVC with storageClassName set to that StorageClass name to connect to Rook's Ceph. For a StatefulSet, set the StorageClass name inside volumeClaimTemplates and a separate PV is dynamically created for each Pod.
# Create the test resources
[root@k8s-master examples]# kubectl create -f mysql.yaml

(2) View the created PVC and PV
[root@k8s-master examples]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-d8ae6350-186a-4deb-b301-ed7a7a71e9b8   10Gi       RWO            rook-ceph-block   12m
[root@k8s-master examples]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-d8ae6350-186a-4deb-b301-ed7a7a71e9b8   10Gi       RWO            Delete           Bound    default/mysql-pv-claim   rook-ceph-block            12m

(3) View the volume inside the pod
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-mysql-79966d6c5b-htnbl   1/1     Running   0          12m
[root@k8s-master ~]# kubectl exec -it wordpress-mysql-79966d6c5b-htnbl -- df -Th
Filesystem              Type     Size  Used Avail Use% Mounted on
overlay                 overlay  194G   18G  177G  10% /
tmpfs                   tmpfs     64M     0   64M   0% /dev
tmpfs                   tmpfs    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root xfs      194G   18G  177G  10% /etc/hosts
shm                     tmpfs     64M     0   64M   0% /dev/shm
/dev/rbd0               ext4     9.8G  116M  9.7G   2% /var/lib/mysql
tmpfs                   tmpfs    3.7G   12K  3.7G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                   tmpfs    1.9G     0  1.9G   0% /proc/acpi
tmpfs                   tmpfs    1.9G     0  1.9G   0% /proc/scsi
tmpfs                   tmpfs    1.9G     0  1.9G   0% /sys/firmware

5. StatefulSet volumeClaimTemplates
# No sample file is provided for volumeClaimTemplates, so write one yourself
[root@k8s-master examples]# vim volumeClaimTemplates.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx   # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3      # by default is 1
  template:
    metadata:
      labels:
        app: nginx   # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "rook-ceph-block"
        resources:
          requests:
            storage: 1Gi

Note: each replica gets its own PVC stamped out from this template, named <template>-<statefulset>-<ordinal> (for example www-web-0).
[root@k8s-master examples]# kubectl create -f volumeClaimTemplates.yaml
[root@k8s-master examples]# kubectl get pod -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          38s
web-1   1/1     Running   0          29s
web-2   1/1     Running   0          9s
[root@k8s-master examples]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-d8ae6350-186a-4deb-b301-ed7a7a71e9b8   10Gi       RWO            rook-ceph-block   53m
www-web-0        Bound    pvc-57c8626d-a370-4345-abf3-d212465b5635   1Gi        RWO            rook-ceph-block   2m14s
www-web-1        Bound    pvc-4ea4b53f-acab-4731-aa7d-bb4470c35c42   1Gi        RWO            rook-ceph-block   53s
www-web-2        Bound    pvc-a1037092-dfaa-47bd-b7f9-8f7be2248611   1Gi        RWO            rook-ceph-block   33s

At this point the three Pods each have their own storage, none of it shared. When building Redis, MySQL, or RabbitMQ clusters with a StatefulSet, use the volumeClaimTemplates parameter to provide each Pod with its own storage whenever the data must be persisted.

0x06 Using a shared file system (CephFS)
1. Create the file system
[root@k8s-master ~]# cd /root/rook/deploy/examples
[root@k8s-master examples]# kubectl create -f filesystem.yaml
...

(2) Check the pod startup status
[root@k8s-master examples]# kubectl get pod -n rook-ceph -l app=rook-ceph-mds
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-d4bfd947f-t8cjc    2/2     Running   0          6m26s
rook-ceph-mds-myfs-b-85798855d6-sjx5d   2/2     Running   0          6m25s

(3) View the file system in the Ceph dashboard
2. Create the CephFS StorageClass
[root@k8s-master ~]# cd /root/rook/deploy/examples/csi/cephfs
[root@k8s-master cephfs]# kubectl create -f storageclass.yaml

From now on, setting a PVC's storageClassName to rook-cephfs creates shared-file storage that multiple Pods can use at the same time (pointing a PVC at the block StorageClass creates block storage instead).
3. Test the shared storage with the kube-registry example

[root@k8s-master ~]# cd /root/rook/deploy/examples/csi/cephfs
# Rook provides this test example
[root@k8s-master cephfs]# cat kube-registry.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kube-registry
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: registry
          image: registry:2
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
          env:
            # Configuration reference: https://docs.docker.com/registry/configuration/
            - name: REGISTRY_HTTP_ADDR
              value: :5000
            - name: REGISTRY_HTTP_SECRET
              value: "Ple4seCh4ngeThisN0tAVerySecretV4lue"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: /var/lib/registry
          volumeMounts:
            - name: image-store
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: registry
          readinessProbe:
            httpGet:
              path: /
              port: registry
      volumes:
        - name: image-store
          persistentVolumeClaim:
            claimName: cephfs-pvc
            readOnly: false

[root@k8s-master cephfs]# kubectl create -f kube-registry.yaml
[root@k8s-master cephfs]# kubectl get pod -n kube-system -l k8s-app=kube-registry
NAME                             READY   STATUS    RESTARTS   AGE
kube-registry-5b677b6c87-fxltv   1/1     Running   0          2m30s
kube-registry-5b677b6c87-lhkfr   1/1     Running   0          2m30s
kube-registry-5b677b6c87-sbzs4   1/1     Running   0          2m30s
[root@k8s-master cephfs]# kubectl get pvc -n kube-system
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-d025b469-3d30-4a45-b7bb-e62ab8bf21e8   1Gi        RWX            rook-cephfs    2m56s

Three Pods have been created, all sharing one volume mounted at /var/lib/registry; the data in that directory is shared by all three containers.

0x07 Expanding a PVC

1. Expand a block-storage PVC
(1) Check the current PVC size
[root@k8s-master ~]# kubectl get pvc | grep mysql-pv
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-d8ae6350-186a-4deb-b301-ed7a7a71e9b8   10Gi       RWO            rook-ceph-block   78m

(2) Expand it
[root@k8s-master ~]# kubectl edit pvc mysql-pv-claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
    volume.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
  creationTimestamp: "2023-07-23T00:52:41Z"
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app: wordpress
  name: mysql-pv-claim
  namespace: default
  resourceVersion: "34451"
  uid: d8ae6350-186a-4deb-b301-ed7a7a71e9b8
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi   ## change this from 10Gi to 15Gi -- the total requested size
  storageClassName: rook-ceph-block
  volumeMode: Filesystem
  volumeName: pvc-d8ae6350-186a-4deb-b301-ed7a7a71e9b8
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 15Gi     ## updated automatically by the controller once the resize completes
  phase: Bound
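If you prefer not to open an editor, the same change can be made with a one-line patch (equivalent to the edit above):

# Bump the requested size to 15Gi in one command
kubectl patch pvc mysql-pv-claim -p '{"spec":{"resources":{"requests":{"storage":"15Gi"}}}}'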
(3) View the updated PVC

# Expansion takes a moment; wait briefly and the new size appears.
[root@k8s-master ~]# kubectl get pvc | grep mysql
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-d8ae6350-186a-4deb-b301-ed7a7a71e9b8   15Gi       RWO            rook-ceph-block   89m

(4) View the expanded PV
[root@k8s-master ~]# kubectl get pv | grep mysql
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-d8ae6350-186a-4deb-b301-ed7a7a71e9b8   15Gi       RWO            Delete           Bound    default/mysql-pv-claim   rook-ceph-block            90m

(5) Verify the expansion inside the container
[root@k8s-master ~]# kubectl exec -it wordpress-mysql-79966d6c5b-wltpq -- df -Th
Filesystem   Type   Size   Used   Avail  Use%  Mounted on
/dev/rbd0    ext4   15G    116M   15G    1%    /var/lib/mysql

2. Expand a shared-file PVC
(1) Check the current PVC size
[root@k8s-master ~]# kubectl get pvc -n kube-system
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-d025b469-3d30-4a45-b7bb-e62ab8bf21e8   1Gi        RWX            rook-cephfs    25m

(2) Change the size
# Previously 1Gi; change it to 2Gi here
[root@k8s-master ~]# kubectl edit pvc cephfs-pvc -n kube-system
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
    volume.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
  creationTimestamp: "2023-07-23T02:04:49Z"
  finalizers:
    - kubernetes.io/pvc-protection
  name: cephfs-pvc
  namespace: kube-system
  resourceVersion: "42320"
  uid: d025b469-3d30-4a45-b7bb-e62ab8bf21e8
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi   ## the requested size
  storageClassName: rook-cephfs
  volumeMode: Filesystem
  volumeName: pvc-d025b469-3d30-4a45-b7bb-e62ab8bf21e8
status:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 2Gi     ## updated automatically once the resize completes
  phase: Bound

(3) View the updated PVC size
[root@k8s-master ~]# kubectl get pvc -n kube-system
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-d025b469-3d30-4a45-b7bb-e62ab8bf21e8   2Gi        RWX            rook-cephfs    29m

(4) View the updated PV size
[root@k8s-master ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pvc-d025b469-3d30-4a45-b7bb-e62ab8bf21e8   2Gi        RWX            Delete           Bound    kube-system/cephfs-pvc   rook-cephfs             31m

(5) Verify the expansion inside the container
[root@k8s-master ~]# kubectl exec -it kube-registry-5b677b6c87-sbzs4 -n kube-system -- df -Th
Filesystem                                                                                                                                               Type  Size  Used  Available  Use%  Mounted on
10.98.200.63:6789,10.102.236.97:6789,10.107.235.178:6789:/volumes/csi/csi-vol-04cde2de-7b3a-44fb-a41a-785d9c7c13e1/53f34b26-8568-4308-be57-6053276a5f99  ceph  2.0G  0     2.0G       0%    /var/lib/registry

0x08 PVC snapshots
1. Create the VolumeSnapshotClass
[root@k8s-master ~]# cd /root/rook/deploy/examples/csi/rbd
[root@k8s-master rbd]# kubectl create -f snapshotclass.yaml
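For reference, the class this file defines is named csi-rbdplugin-snapclass (the name used in snapshot.yaml below). In broad strokes it looks roughly like this; this is a sketch from memory of the Rook example, so check the file itself for the exact parameters (in particular the snapshotter secret settings, omitted here):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rook-ceph.rbd.csi.ceph.com   # the RBD CSI driver, matching the StorageClass provisioner
parameters:
  clusterID: rook-ceph               # namespace of the Rook/Ceph cluster
deletionPolicy: Delete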
2. Create a snapshot

(1) View the pods
[root@k8s-master rbd]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
web-0                              1/1     Running   0          95m
web-1                              1/1     Running   0          94m
web-2                              1/1     Running   0          94m
wordpress-mysql-79966d6c5b-wltpq   1/1     Running   0          148m

(2) Log in to the MySQL container and create a test file
[root@k8s-master rbd]# kubectl exec -it wordpress-mysql-79966d6c5b-wltpq -- bash
root@wordpress-mysql-79966d6c5b-wltpq:/# ls
bin  boot  dev  docker-entrypoint-initdb.d  entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@wordpress-mysql-79966d6c5b-wltpq:/# cd /var/lib/mysql
root@wordpress-mysql-79966d6c5b-wltpq:/var/lib/mysql# mkdir test_snapshot
root@wordpress-mysql-79966d6c5b-wltpq:/var/lib/mysql# cd test_snapshot/
root@wordpress-mysql-79966d6c5b-wltpq:/var/lib/mysql/test_snapshot# echo "test for snapshot" > test_snapshot01.txt

(3) Edit the snapshot.yaml file
# Decide which PVC to snapshot
[root@k8s-master rbd]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-d8ae6350-186a-4deb-b301-ed7a7a71e9b8   15Gi       RWO            rook-ceph-block   152m
www-web-0        Bound    pvc-57c8626d-a370-4345-abf3-d212465b5635   1Gi        RWO            rook-ceph-block   100m
www-web-1        Bound    pvc-4ea4b53f-acab-4731-aa7d-bb4470c35c42   1Gi        RWO            rook-ceph-block   99m
www-web-2        Bound    pvc-a1037092-dfaa-47bd-b7f9-8f7be2248611   1Gi        RWO            rook-ceph-block   98m
[root@k8s-master rbd]# vim snapshot.yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: mysql-pv-claim   ## change to the PVC that needs a snapshot

(4) Create the snapshot
[root@k8s-master rbd]# kubectl create -f snapshot.yaml

(5) Check the snapshot
[root@k8s-master rbd]# kubectl get volumesnapshot
NAME               READYTOUSE   SOURCEPVC        SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT                                    CREATIONTIME   AGE
rbd-pvc-snapshot   true         mysql-pv-claim                           15Gi          csi-rbdplugin-snapclass   snapcontent-1f6a8a49-1f78-4def-a998-f4353a964fb3   18s            18s
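If READYTOUSE stays false, describing the snapshot usually shows why (an optional troubleshooting step, not part of the original walkthrough):

# Inspect snapshot events and status conditions
kubectl describe volumesnapshot rbd-pvc-snapshot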
3. Restore the snapshot

[root@k8s-master rbd]# vim pvc-restore.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: rbd-pvc-snapshot            ## the snapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

Note: the requested size of the restored PVC must be no smaller than the source snapshot (15Gi here); a larger size, like the 20Gi above, is allowed.
(2) Create the restore PVC
[root@k8s-master rbd]# kubectl create -f pvc-restore.yaml

(3) Check the result
[root@k8s-master rbd]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim    Bound    pvc-6dd87050-6019-4775-bfe3-7ad4ce54f2e0   15Gi       RWO            rook-ceph-block   40m
rbd-pvc-restore   Bound    pvc-e3fc4194-ee54-4682-954e-496d5d4c92f1   20Gi       RWO            rook-ceph-block   4s
www-web-0         Bound    pvc-2a48771c-32f0-4702-b44d-f3e89832aaab   1Gi        RWO            rook-ceph-block   37m
www-web-1         Bound    pvc-260ba446-9525-4245-aa54-aca7ac56bb8d   1Gi        RWO            rook-ceph-block   36m
www-web-2         Bound    pvc-d7d1ba74-0236-4cf3-b7eb-3b4d23e931dc   1Gi        RWO            rook-ceph-block   36m

4. Verify the snapshot data
# Create a container that mounts the PVC restored from the snapshot, then look for the file created earlier
[root@k8s-master rbd]# vim restore-check-snapshot-rbd.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: check-snapshot-restore
spec:
  selector:
    matchLabels:
      app: check
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: check
    spec:
      containers:
        - image: alpine:3.8
          name: check
          command:
            - sh
            - -c
            - sleep 36000
          volumeMounts:
            - name: check-mysql-persistent-storage
              mountPath: /mnt
      volumes:
        - name: check-mysql-persistent-storage
          persistentVolumeClaim:
            claimName: rbd-pvc-restore

[root@k8s-master rbd]# kubectl create -f restore-check-snapshot-rbd.yaml

(2) Check the result
[root@k8s-master rbd]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
check-snapshot-restore-6df758bdd6-srjg2   1/1     Running   0          38s
web-0                                     1/1     Running   0          40m
web-1                                     1/1     Running   0          40m
web-2                                     1/1     Running   0          40m
wordpress-mysql-79966d6c5b-28kjn          1/1     Running   0          43m
[root@k8s-master rbd]# kubectl exec -it check-snapshot-restore-6df758bdd6-srjg2 -- sh
/ # cat /mnt/test_snapshot/test_snapshot01.txt
test for snapshot

0x09 Cloning a PVC
1. Clone a PVC
[root@k8s-master rbd]# pwd
/root/rook/deploy/examples/csi/rbd
[root@k8s-master rbd]# vim pvc-clone.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: mysql-pv-claim   ## the name of the PVC to clone
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi        ## the size of the clone

Note: unlike a snapshot restore, dataSource.kind here is PersistentVolumeClaim, and the requested size must be at least as large as the source PVC (15Gi here).
(2) Create the clone and check the result
[root@k8s-master rbd]# kubectl create -f pvc-clone.yaml
[root@k8s-master rbd]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim    Bound    pvc-6dd87050-6019-4775-bfe3-7ad4ce54f2e0   15Gi       RWO            rook-ceph-block   49m
rbd-pvc-clone     Bound    pvc-ba6a9657-394a-42c3-8382-6d2178493649   15Gi       RWO            rook-ceph-block   37s
rbd-pvc-restore   Bound    pvc-e3fc4194-ee54-4682-954e-496d5d4c92f1   20Gi       RWO            rook-ceph-block   9m29s
www-web-0         Bound    pvc-2a48771c-32f0-4702-b44d-f3e89832aaab   1Gi        RWO            rook-ceph-block   46m
www-web-1         Bound    pvc-260ba446-9525-4245-aa54-aca7ac56bb8d   1Gi        RWO            rook-ceph-block   46m
www-web-2         Bound    pvc-d7d1ba74-0236-4cf3-b7eb-3b4d23e931dc   1Gi        RWO            rook-ceph-block   46m