
Deploying a MinIO Cluster on K8s

Author: myluzh · Category: Kubernetes


0x01 Deploy the MinIO Operator

For the latest version of the Operator, refer to: https://min.io/docs/minio/kubernetes/upstream/operations/installation.html

kubectl apply -k "github.com/minio/operator?ref=v5.0.18"
kubectl get pods -n minio-operator
kubectl get all -n minio-operator
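Before moving on, you can wait for the Operator rollout to finish. A minimal check, assuming the kustomization creates a Deployment named minio-operator in the minio-operator namespace:

# Wait until the Operator Deployment reports all replicas ready
kubectl -n minio-operator rollout status deployment/minio-operator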

0x02 Deploy a MinIO Tenant

Download and render the Kustomize-based example Kubernetes manifests, then edit them by hand to match the cluster configuration you need before applying.

kubectl kustomize https://github.com/minio/operator/examples/kustomization/base/ > tenant-base.yaml

You can also use my tenant-base.yaml directly, shown below.
servers: 4 means this pool runs 4 MinIO server Pods (i.e. 4 nodes). volumesPerServer: 4 means each Pod mounts 4 PVCs (persistent volumes) of 100Gi each, for 4 × 4 = 16 drives in total. MinIO recommends at least 16 drives (i.e. 16 PVCs) to satisfy its distributed requirements, in particular the Erasure Code mechanism (the EC:2 setting in the Secret below keeps 2 parity shards per object).

apiVersion: v1
kind: Namespace
metadata:
  name: minio-tenant
---
apiVersion: v1
kind: Secret
metadata:
  name: storage-configuration
  namespace: minio-tenant
stringData:
  config.env: |-
    export MINIO_ROOT_USER="minio"
    export MINIO_ROOT_PASSWORD="minio123"
    export MINIO_STORAGE_CLASS_STANDARD="EC:2"
    export MINIO_BROWSER="on"
type: Opaque
---
apiVersion: v1
data:
  CONSOLE_ACCESS_KEY: Y29uc29sZQ==
  CONSOLE_SECRET_KEY: Y29uc29sZTEyMw==
kind: Secret
metadata:
  name: storage-user
  namespace: minio-tenant
type: Opaque
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  annotations:
    prometheus.io/path: /minio/v2/metrics/cluster
    prometheus.io/port: "9000"
    prometheus.io/scrape: "true"
  labels:
    app: minio
  name: myminio
  namespace: minio-tenant
spec:
  certConfig: {}
  configuration:
    name: storage-configuration
  env: []
  externalCaCertSecret: []
  externalCertSecret: []
  externalClientCertSecrets: []
  features:
    bucketDNS: false
    domains: {}
  image: quay.io/minio/minio:RELEASE.2025-04-08T15-41-24Z
  imagePullSecret: {}
  mountPath: /export
  podManagementPolicy: Parallel
  pools:
  - affinity:
      nodeAffinity: {}
      podAffinity: {}
      podAntiAffinity: {}
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
      seccompProfile:
        type: RuntimeDefault
    name: pool-0
    nodeSelector: {}
    resources: {}
    securityContext:
      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    servers: 4
    tolerations: []
    topologySpreadConstraints: []
    volumeClaimTemplate:
      apiVersion: v1
      kind: persistentvolumeclaims
      metadata: {}
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        #storageClassName: standard
      status: {}
    volumesPerServer: 4
  priorityClassName: ""
  requestAutoCert: true
  serviceAccountName: ""
  serviceMetadata:
    consoleServiceAnnotations: {}
    consoleServiceLabels: {}
    minioServiceAnnotations: {}
    minioServiceLabels: {}
  subPath: ""
  users:
  - name: storage-user
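Save the manifest, apply it, and watch the Tenant come up. A quick sketch (the tenant name myminio and the namespace minio-tenant come from the manifest above; the PVCs need a default StorageClass, otherwise uncomment storageClassName):

kubectl apply -f tenant-base.yaml
kubectl get tenant -n minio-tenant
kubectl get pods -n minio-tenant
kubectl get pvc -n minio-tenant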

0x03 Monitor the MinIO Cluster with Prometheus

1. Download mc (the MinIO client)

root@k8s-master01:~# curl https://dl.min.io/client/mc/release/linux-amd64/mc -o /usr/bin/mc
root@k8s-master01:~# chmod +x /usr/bin/mc
root@k8s-master01:~# mc --version

2. Configure the mc endpoint

# Get the minio Service; mc needs its address to connect
root@k8s-master01:~# kubectl get svc -n minio-tenant
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
minio             ClusterIP   10.43.86.31     <none>        443/TCP          4d17h
# Create a local alias (myminio) and configure the endpoint and credentials.
root@k8s-master01:~# mc alias set myminio https://10.43.86.31 minio minio123  --insecure
Added `myminio` successfully.
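To confirm the alias works, query the cluster status (--insecure is needed again because the TLS certificate issued via requestAutoCert is not trusted by default):

root@k8s-master01:~# mc admin info myminio --insecure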

3. Generate the Prometheus configs with mc

Generate a Prometheus config for scraping MinIO node-level metrics:

root@k8s-master01:~# mc admin prometheus generate myminio node

Generate a Prometheus config for scraping per-bucket usage metrics:

root@k8s-master01:~# mc admin prometheus generate myminio bucket

Generate a Prometheus config for scraping cluster-wide resource metrics:

root@k8s-master01:~# mc admin prometheus generate myminio resource
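The consolidated config in the next step also includes a cluster-level job (minio-job, scraping /minio/v2/metrics/cluster). That one corresponds to the default output of the generate command, i.e. run without a metric type argument:

root@k8s-master01:~# mc admin prometheus generate myminio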

4. Scrape MinIO with Prometheus

It is best to tweak the configs generated by mc before using them: change targets to the in-cluster DNS name of the minio Service, and add insecure_skip_verify: true under tls_config so Prometheus does not fail on the untrusted certificate. The consolidated result is below.
prometheus-additional.yaml

- job_name: minio-job
  bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoibWluaW8iLCJleHAiOjQ5MDYzMTkzMTV9.gcHFAG-N0YZCvUyJPBTuh9-NUR38-IWT5N9kOwRL58z1lmkVAmxWe6Wx_nn_vGLMyYhF92LXNeqir72fK90Irw
  metrics_path: /minio/v2/metrics/cluster
  scheme: https
  tls_config:
    insecure_skip_verify: true
  static_configs:
  - targets: ["minio.minio-tenant.svc.cluster.local"]
- job_name: minio-job-node
  bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoibWluaW8iLCJleHAiOjQ5MDYzMzM2MTd9.SA18cLeOCrX-jkYuSRNjYL35e3zv9jA0veUnPxBIXd63pbdx-R2r6hvRjkLF6-WdHZCWhpEscVAqeO5NfnTZNg
  metrics_path: /minio/v2/metrics/node
  scheme: https
  tls_config:
    insecure_skip_verify: true
  static_configs:
  - targets: ["minio.minio-tenant.svc.cluster.local"]
- job_name: minio-job-bucket
  bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoibWluaW8iLCJleHAiOjQ5MDYzMzM3MDZ9.s2DeYTJtjzWUuUYnav_kSkvLzxzs9-mHiu0NeiY9RmKoLOQ6KsK5iBI6Wj3srSNj3MyP4NEqXn4tllyHvgP6IQ
  metrics_path: /minio/v2/metrics/bucket
  scheme: https
  tls_config:
    insecure_skip_verify: true
  static_configs:
  - targets: ["minio.minio-tenant.svc.cluster.local"]
- job_name: minio-job-resource
  bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoibWluaW8iLCJleHAiOjQ5MDYzMzM3NDN9.uVwcCvs-qGTyysJyIG9ekMsPQ2s2MmFQLYPnnh43JUGiu8sAnjFU-tBE9XgRYsSDpxUcfYmXq_PuKfwTmNknJg
  metrics_path: /minio/v2/metrics/resource
  scheme: https
  tls_config:
    insecure_skip_verify: true
  static_configs:
  - targets: ["minio.minio-tenant.svc.cluster.local"]
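How you load prometheus-additional.yaml depends on how Prometheus is deployed. A minimal sketch, assuming Prometheus is managed by the Prometheus Operator (e.g. kube-prometheus-stack) in a monitoring namespace: store the file in a Secret and reference it from the Prometheus custom resource via additionalScrapeConfigs.

# Create or refresh the Secret holding the extra scrape configs
kubectl -n monitoring create secret generic additional-scrape-configs \
  --from-file=prometheus-additional.yaml --dry-run=client -o yaml | kubectl apply -f -

# Then reference it in the Prometheus custom resource:
#   spec:
#     additionalScrapeConfigs:
#       name: additional-scrape-configs
#       key: prometheus-additional.yaml

After Prometheus reloads, the minio-job* targets should show up under Status → Targets.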

Tags: k8s, minio, Operator

