Myluzh Blog

Deploying Fluentd on K8S for Log Collection and Pushing to Elasticsearch

Published: 2023-12-19  Author: myluzh  Category: Kubernetes


0x01 Introduction
With Fluentd you can collect the stdout/stderr logs of every container, i.e. the log files under /var/log/containers/ on each host, and forward them to a destination service of your choice.
This post mainly documents the Fluentd deployment process and how to push K8S cluster Pod logs to Elasticsearch via Fluentd; deploying Elasticsearch and Kibana themselves is not covered here.
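For orientation (a hedged aside, not part of the original setup): on a Docker-based node the files under /var/log/containers/ are usually symlinks into the container runtime's log directory, which is why the DaemonSet below mounts both /var/log and /var/lib/docker/containers. Roughly:

# Run on a worker node; exact names depend on the container runtime
ls -l /var/log/containers/
# <pod>_<namespace>_<container>-<id>.log -> /var/log/pods/<namespace>_<pod>_<uid>/<container>/0.log
# which, with Docker, points on to /var/lib/docker/containers/<id>/<id>-json.log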

0x02 Deploying Fluentd
1. fluentd-config-map.yaml
The configuration file for the Fluentd log collector; apply it with kubectl as sketched after the manifest.
apiVersion: v1
data:
  fluent.conf: |-
        <source>
          @type tail
          #path /var/log/containers/*.log
          path /var/log/containers/*xfshcloud-dxp*.log
          pos_file fluentd-docker.pos
          tag kubernetes.*
          #<parse>
            #@type multi_format
          #</parse>
          <parse>
            @type multiline
            format_firstline /^\{.*\}$/
            format1 /^(?<log>.*)$/
          </parse>
        </source>

        <match **>
          @type elasticsearch
          @id out_es
          @log_level info
          include_tag_key true
          host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
          port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
          path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
          scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
          ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
          ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
          user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
          password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
          reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
          reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
          reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
          log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
          logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'dapr'}"
          logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
          logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
          index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'dapr'}"
          type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
          include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
          template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
          template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
          template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
          sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
          request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
          <buffer>
            flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
            flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
            chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
            queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
            retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
            retry_forever true
          </buffer>
        </match>
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: cattle-logging
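A minimal sketch of loading this ConfigMap (file name as in the heading above; the cattle-logging namespace is assumed, so create it first if it does not exist):

kubectl create namespace cattle-logging    # only if it does not exist yet
kubectl apply -f fluentd-config-map.yaml
kubectl -n cattle-logging get configmap fluentd-config -o yaml | head -n 20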

2. fluentd-daemonset-with-rbac.yaml
Fluentd runs as a DaemonSet, so one collector pod is deployed on every k8s node; apply and verify it as sketched after the manifest.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  # replace with namespace where fluentd is deployed
  namespace: cattle-logging

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  # replace with namespace where fluentd is deployed
  namespace: cattle-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  # replace with namespace where fluentd is deployed
  namespace: cattle-logging
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.9.2-debian-elasticsearch7-1.0
        env:
          # Change to your Elastic configuration
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "172.16.12.66"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENT_ELASTICSEARCH_USER
            value: "elastic"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            value: "SaRorwWAC9aSOR6asyBD"
          - name: FLUENT_UID
            value: "0"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluentd-config
          mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluentd-config
        configMap:
          name: fluentd-config
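Once both manifests are applied, a rough verification flow could look like this (the dapr-* index name comes from the logstash_prefix default in the ConfigMap; host and credentials are the ones set in the env section above):

kubectl apply -f fluentd-daemonset-with-rbac.yaml
kubectl -n cattle-logging get pods -l k8s-app=fluentd-logging -o wide    # expect one pod per node
kubectl -n cattle-logging logs ds/fluentd --tail=20                      # watch for Elasticsearch connection errors
# Confirm the daily index is being created in Elasticsearch:
curl -u elastic:SaRorwWAC9aSOR6asyBD "http://172.16.12.66:9200/_cat/indices/dapr-*?v"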

0x03 Final Notes
1. Oversized logs from the fluentd container itself
Over time, the log file produced by the fluentd container can grow to tens of GB; a tail -f of the file shows the entries are at the info level.
Per the official documentation, Fluentd's default log level is info (https://docs.fluentd.org/deployment/logging). At the info level, the more containers the cluster runs, the larger the generated log, so the log level needs to be raised.
Add the following snippet to the configuration file so that Fluentd only emits error-level logs; it is recommended to switch log_level to error once the debugging phase is over (see the sketch after the snippet).
<system>
 log_level error
</system>
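In this setup that means adding the <system> block to fluent.conf in the fluentd-config ConfigMap and restarting the DaemonSet so the pods pick up the change, for example:

kubectl -n cattle-logging edit configmap fluentd-config        # add the <system> block at the top of fluent.conf
kubectl -n cattle-logging rollout restart daemonset/fluentd
kubectl -n cattle-logging rollout status daemonset/fluentd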
0x04 References
Fixing oversized fluentd logs in k8s: https://blog.csdn.net/u013352037/article/details/112174947

Tags: fluentd elastic kibana
