Installing the ELK stack in Docker and collecting Spring Boot logs with Filebeat


Download the ELK image

Search for the image. The latest version at the time of writing is 8.1.0, and all of the steps below are based on that version.
docker search sebp/elk

NAME           DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
sebp/elk       Collect, search and visualise log data with …   1172                                    [OK]
sebp/elkx      Collect, search and visualise log data with …   43                                      [OK]

Pull the image
docker pull sebp/elk

Adjust the system configuration

Raise the Elasticsearch user's file-descriptor limits

vim /etc/security/limits.conf
# Append the following lines at the end
elk hard nofile 65536
elk soft nofile 65536

This resolves the ELK startup error: ERROR: Elasticsearch did not exit normally - check the logs at /var/log/elasticsearch/elasticsearch.log

Increase the system vm.max_map_count setting

The max_map_count file limits the number of VMAs (virtual memory areas) a process may own, and this value needs to be increased.
Temporary change: sysctl -w vm.max_map_count=262144
Permanent change: vim /etc/sysctl.conf

# Add this line at the end
vm.max_map_count=262144

Apply the change
sysctl -p

Check the new value
sysctl -a | grep vm.max_map_count

vm.max_map_count = 262144

Increasing this value eliminates the ELK startup error: bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Extract the configuration files to the host

Create the configuration directories

To make editing easier, map the configuration inside the Docker container to the host.

# Create the ELK configuration directories
mkdir -p /data/elk/elasticsearch/data
mkdir -p /data/elk/elasticsearch/log
mkdir -p /data/elk/elasticsearch/config
mkdir -p /data/elk/logstash/log
mkdir -p /data/elk/logstash/config
mkdir -p /data/elk/logstash/conf.d
mkdir -p /data/elk/kibana/log
mkdir -p /data/elk/kibana/data
mkdir -p /data/elk/kibana/config

# Obtain the initial configuration files by starting a temporary container
docker run -p 5601:5601 -p 9200:9200  -p 5044:5044 \
    -it \
    -e TZ="Asia/Shanghai" \
    -e ES_HEAP_SIZE="4g"  \
    -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
    -e "discovery.type=single-node" \
    -e LS_HEAP_SIZE="4g" --name elk sebp/elk

# Copy the ELK configuration out of the container
# (the trailing /. copies the directory contents into the pre-created host directories)
docker cp elk:/etc/elasticsearch/. /data/elk/elasticsearch/config
docker cp elk:/opt/logstash/config/. /data/elk/logstash/config
docker cp elk:/etc/logstash/conf.d/. /data/elk/logstash/conf.d
docker cp elk:/opt/kibana/config/. /data/elk/kibana/config
docker cp elk:/opt/kibana/data/. /data/elk/kibana/data

# After copying, fix the directory ownership
# Note: check the user and group ids below by running id inside the container, so the container keeps read/write access to the host files
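# For example, while the temporary container is still running (user names as created in the sebp/elk image):
#   docker exec -it elk id elasticsearch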
cd /data/elk
chown -R 991:991 elasticsearch*
chown -R 992:992 logstash*
chown -R 993:993 kibana*

# Stop and remove the temporary container
docker stop elk
docker rm elk

Create the container

Modify the configuration (optional)

This machine has 16 GB of RAM, so give Elasticsearch half of it
vim /data/elk/elasticsearch/config/jvm.options

# Find these lines
#-Xms4g
#-Xmx4g
# and change them to
-Xms8g
-Xmx8g

Start a new ELK container

Mount the ELK configuration, data and logs onto the host

docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 -p 9600:9600 \
    --restart always \
    -d \
    -v /data/elk/elasticsearch/data:/var/lib/elasticsearch \
    -v /data/elk/elasticsearch/config:/etc/elasticsearch \
    -v /data/elk/elasticsearch/log:/var/log/elasticsearch \
    -v /data/elk/logstash/config:/opt/logstash/config \
    -v /data/elk/logstash/conf.d:/etc/logstash/conf.d \
    -v /data/elk/logstash/log:/var/log/logstash \
    -v /data/elk/kibana/config:/opt/kibana/config \
    -v /data/elk/kibana/data:/opt/kibana/data \
    -v /data/elk/kibana/log:/var/log/kibana \
    -it \
    -e TZ="Asia/Shanghai" \
    -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
    -e "discovery.type=single-node" \
    -e LS_HEAP_SIZE="4g" \
    --name elk sebp/elk

View the container logs
docker logs -f -t --tail=100 elk
Enter the Docker container
docker exec -it elk /bin/bash
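
As a quick sanity check, the Elasticsearch HTTP endpoint should already answer (security is not enabled yet at this point, so no credentials are needed):

curl http://localhost:9200
# expect a JSON document containing the cluster name and version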

Enable Elasticsearch X-Pack

Enabling the Elasticsearch X-Pack security options is recommended; proceed as follows.

  1. Enable X-Pack security checks, generate the transport certificates, and configure them

    Perform these steps inside the running container

    Enter the container

    docker exec -it elk /bin/bash

    cd /opt/elasticsearch/bin/
    # Generate a CA certificate and private key in PKCS#12 format; the password may be left empty and the default path accepted
    ./elasticsearch-certutil ca
    # Use the new CA to sign an X.509 certificate and private key; the password may again be left empty
    ./elasticsearch-certutil cert --ca elastic-stack-ca.p12
    # Create a directory for the certificates under the Docker-mounted path, so they are easy to sync to other nodes
    mkdir -p /etc/elasticsearch/cert
    cp ../elastic-* /etc/elasticsearch/cert/
     
    # Set the passwords manually
    cd /opt/elasticsearch/bin
    ./elasticsearch-setup-passwords interactive
    # Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
    # You will be prompted to enter passwords as the process progresses.
    # Please confirm that you would like to continue [y/N] -- press y to continue
    # You will then be prompted for each of the reserved users' passwords; there are quite a few, set them all to the same password: elastic
  2. Configure security authentication for Elasticsearch

    Leave the container and configure the security options on the host

    • Fix the ownership of the newly created certificate directory

      cd /data/elk/elasticsearch/config
      chown -R 991:991 cert*
    • Enable the security settings

      vim /data/elk/elasticsearch/config/elasticsearch.yml

      # Enable security
      xpack.security.enabled: true
      # Encrypt intra-cluster transport traffic
      xpack.security.transport.ssl.enabled: true
      xpack.security.transport.ssl.verification_mode: certificate
      xpack.security.transport.ssl.keystore.path: cert/elastic-certificates.p12
      xpack.security.transport.ssl.truststore.path: cert/elastic-certificates.p12
      # Enable api_key authentication, needed to install Fleet
      xpack.security.authc.api_key.enabled: true

      If you are deploying a cluster, distribute the certificate files generated on the master node to the other nodes and adjust the corresponding transport settings.
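
      A minimal sketch of that distribution step, assuming a hypothetical second node named node2 with the same directory layout:

      scp /data/elk/elasticsearch/config/cert/elastic-certificates.p12 root@node2:/data/elk/elasticsearch/config/cert/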

  3. Configure security authentication for Kibana

    Configure the security options on the host

    • Uncomment the elasticsearch username and password settings and set the password to the one configured in Elasticsearch above

    • Uncomment server.publicBaseUrl and set it to the actual IP and port; this removes the warning Kibana shows when publicBaseUrl is not configured

    • Enter the container, run /opt/kibana/bin/kibana-encryption-keys generate, and add the generated keys to the Kibana configuration file

      vim /data/elk/kibana/config/kibana.yml

      # encryption keys
      xpack.encryptedSavedObjects.encryptionKey: 92ea860d8d60de0263e098ce2c353a47
      xpack.reporting.encryptionKey: 5b3bacf5cbaa9e922e685e2de13f4dbc
      xpack.security.encryptionKey: a224e3d0fab1c9d3262f3d6411e73f67
      # The Elastic Stack ships with a number of built-in users; kibana_system is one of them. You can use the built-in users or define your own
      elasticsearch.username: "kibana_system" 
      elasticsearch.password: "elastic"
      server.publicBaseUrl: "http://172.16.32.115:5601"
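
    For example, all three keys above can be produced in one run from the host (kibana-encryption-keys is bundled with Kibana, as noted above):

      docker exec -it elk /opt/kibana/bin/kibana-encryption-keys generate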
  4. Configure security authentication for Logstash

    Configure the security options on the host

    • Enable automatic pipeline reloading. The default main pipeline path is /data/elk/logstash/conf.d/*.conf; with auto-reload enabled, changes to configs under the main pipeline take effect automatically

    • Enable Elasticsearch monitoring

      vim /data/elk/logstash/config/logstash.yml

      # Allow automatic config reloading
      config.reload.automatic: true
      # Reload check interval
      config.reload.interval: 30s
      # X-Pack Monitoring
      # Enable monitoring, so Logstash can be seen in Kibana stack monitoring
      # https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
      xpack.monitoring.enabled: true
      xpack.monitoring.elasticsearch.username: logstash_system
      xpack.monitoring.elasticsearch.password: elastic
      #xpack.monitoring.elasticsearch.proxy: ["http://proxy:port"]
      xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
    • Change the default logstash elasticsearch output username and password to the password set in Elasticsearch above

      vim /data/elk/logstash/conf.d/30-output.conf

      output {
        elasticsearch {
          # An id makes pipeline monitoring easier
          id => "baseElasticsearch"
          hosts => ["localhost"]
          user => "elastic"
          password => "elastic"
          manage_template => false
          index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        }
      }
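
    With the ids in place, a quick way to confirm that the pipeline reloaded and to inspect per-plugin statistics is the Logstash node stats API on port 9600:

      curl -XGET 'http://localhost:9600/_node/stats/pipelines?pretty'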
  5. The final, cleaned-up configuration files, with unused comments removed, are as follows

    • elasticsearch.yml

      node.name: elk
      path.repo: /var/backups
      network.host: 0.0.0.0
      cluster.initial_master_nodes:
        - elk
      xpack.security.enabled: true
      xpack.security.transport.ssl.enabled: true
      xpack.security.transport.ssl.verification_mode: certificate
      xpack.security.transport.ssl.keystore.path: cert/elastic-certificates.p12
      xpack.security.transport.ssl.truststore.path: cert/elastic-certificates.p12
      xpack.security.authc.api_key.enabled: true
    • kibana.yml

      server.host: 0.0.0.0
      server.publicBaseUrl: 'http://172.16.32.115:5601'
      elasticsearch.username: kibana_system
      elasticsearch.password: elastic
      xpack.encryptedSavedObjects.encryptionKey: 92ea860d8d60de0263e098ce2c353a47
      xpack.reporting.encryptionKey: 5b3bacf5cbaa9e922e685e2de13f4dbc
      xpack.security.encryptionKey: a224e3d0fab1c9d3262f3d6411e73f67
    • logstash.yml

      config.reload.automatic: true
      config.reload.interval: 30s
      xpack.monitoring.enabled: true
      xpack.monitoring.elasticsearch.username: logstash_system
      xpack.monitoring.elasticsearch.password: elastic
      xpack.monitoring.elasticsearch.hosts:
        - 'http://localhost:9200'

Installing ELK with Docker Compose

The sebp/elk image does not support the arm platform, so you can instead use the official Elastic images orchestrated with Docker Compose. This approach also lets you install the latest ELK version; the walkthrough below uses 8.3.2.

创建配置目录

为了方便修改, 将Docker容器中的配置映射到宿主机

# Create the ELK configuration directories
mkdir -p /data/elk/elasticsearch/logs
mkdir -p /data/elk/logstash/logs
mkdir -p /data/elk/kibana/logs

Create temporary containers

First start a temporary ELK stack to copy the configuration files from and map them to the host. The corresponding docker-compose.yml is as follows

version: "3.8"
services:
  elasticsearch:
    image: elasticsearch:${ELK_VERSION}
    container_name: elasticsearch-${ELK_VERSION}
    environment:
      - TZ=Asia/Shanghai
      - discovery.type=single-node
      - ES_HEAP_SIZE=4g
      - "ES_JAVA_OPTS=-Xms8g -Xmx8g"
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    networks:
      - elk-net
    ulimits:
      memlock:
        soft: -1
        hard: -1
    restart: always
  logstash:
    image: logstash:${ELK_VERSION}
    container_name: logstash-${ELK_VERSION}
    environment:
      - TZ=Asia/Shanghai
      - LS_HEAP_SIZE=4g
    depends_on:
      - elasticsearch
    ports:
      - "5044:5044"
      - "9600:9600"
    volumes:
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    networks:
      - elk-net
    ulimits:
      memlock:
        soft: -1
        hard: -1
    restart: always
  kibana:
    image: kibana:${ELK_VERSION}
    container_name: kibana-${ELK_VERSION}
    environment:
      - TZ=Asia/Shanghai
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
    volumes:
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    networks:
      - elk-net
    ulimits:
      memlock:
        soft: -1
        hard: -1
    restart: always
networks:
  elk-net:
    driver: bridge

A .env file supplying the docker-compose environment variables is also required:

ELK_VERSION=8.3.2

Upload docker-compose.yml and .env to /data/elk on the host, then start docker-compose

cd /data/elk
docker-compose up -d

Map the configuration files

Check the logs

docker logs -f elasticsearch-8.3.2

If no errors appear and startup succeeds, copy the containers' configuration files to the host, then stop docker-compose and remove the temporary containers

# Copy the initial configuration files
docker cp elasticsearch-8.3.2:/usr/share/elasticsearch/config /data/elk/elasticsearch
docker cp elasticsearch-8.3.2:/usr/share/elasticsearch/data /data/elk/elasticsearch
docker cp logstash-8.3.2:/usr/share/logstash/config /data/elk/logstash
docker cp logstash-8.3.2:/usr/share/logstash/pipeline /data/elk/logstash
docker cp kibana-8.3.2:/usr/share/kibana/config /data/elk/kibana
docker cp kibana-8.3.2:/usr/share/kibana/data /data/elk/kibana

# Fix the directory ownership
# Note: check the user and group ids below by running id inside the corresponding container, so the containers keep read/write access to the host files
chown -R 1000:1000 /data/elk  

# Stop the temporary containers and remove them
docker-compose stop
docker rm logstash-8.3.2 elasticsearch-8.3.2 kibana-8.3.2

Start the final Compose stack

Modify docker-compose.yml as follows; the changes mainly map files to the host so they are easy to edit and inspect.

version: "3.8"
services:
  elasticsearch:
    image: elasticsearch:${ELK_VERSION}
    container_name: elasticsearch-${ELK_VERSION}
    environment:
      - TZ=Asia/Shanghai
      - discovery.type=single-node
      - ES_HEAP_SIZE=4g
      - "ES_JAVA_OPTS=-Xms8g -Xmx8g"
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
      - /data/elk/elasticsearch/data:/usr/share/elasticsearch/data
      - /data/elk/elasticsearch/config:/usr/share/elasticsearch/config
      - /data/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
    networks:
      - elk-net
    ulimits:
      memlock:
        soft: -1
        hard: -1
    restart: always
  logstash:
    image: logstash:${ELK_VERSION}
    container_name: logstash-${ELK_VERSION}
    environment:
      - TZ=Asia/Shanghai
      - LS_HEAP_SIZE=4g
    depends_on:
      - elasticsearch
    ports:
      - "5044:5044"
      - "9600:9600"
    volumes:
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
      - /data/elk/logstash/config:/usr/share/logstash/config
      - /data/elk/logstash/pipeline:/usr/share/logstash/pipeline
      - /data/elk/logstash/logs:/usr/share/logstash/logs
    networks:
      - elk-net
    ulimits:
      memlock:
        soft: -1
        hard: -1
    restart: always
  kibana:
    image: kibana:${ELK_VERSION}
    container_name: kibana-${ELK_VERSION}
    environment:
      - TZ=Asia/Shanghai
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
    volumes:
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
      - /data/elk/kibana/config:/usr/share/kibana/config
      - /data/elk/kibana/data:/usr/share/kibana/data
      - /data/elk/kibana/logs:/usr/share/kibana/logs
    networks:
      - elk-net
    ulimits:
      memlock:
        soft: -1
        hard: -1
    restart: always
networks:
  elk-net:
    driver: bridge

cd /data/elk
docker-compose up -d

Configuration

  1. Log in to the Kibana web console at http://host:5601. Newer ELK versions enable X-Pack by default, and the page asks for an Enrollment token. Enter the elasticsearch container, run the command below to obtain the token, then paste it into the page and confirm

       docker exec -it elasticsearch-8.3.2 /bin/bash

       bin/elasticsearch-create-enrollment-token --scope kibana
       exit
  2. Following the on-page prompt, enter the kibana container to obtain the verification code, then enter it on the page and confirm

       docker exec -it kibana-8.3.2 /bin/bash

       bin/kibana-verification-code
       exit
  3. Once page verification succeeds, the username and password login screen appears. Enter the elasticsearch container and set the passwords

    docker exec -it elasticsearch-8.3.2 /bin/bash
    
    bin/elasticsearch-setup-passwords interactive
    # Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
    # You will be prompted to enter passwords as the process progresses.
    # Please confirm that you would like to continue [y/N] -- press y to continue
    # You will then be prompted for each of the reserved users' passwords; there are quite a few, set them all to the same password as before: elastic
    exit
  4. Log in with the username and password you just set

  5. Additional security configuration

    See "Configure security authentication for Kibana" and "Configure security authentication for Elasticsearch" above; the steps are largely the same, adjusted to your environment. Kibana's security authentication is configured automatically here; you only need to add server.publicBaseUrl, xpack.encryptedSavedObjects.encryptionKey, xpack.reporting.encryptionKey and xpack.security.encryptionKey. Because this Elasticsearch version enables HTTPS by default, configuring Logstash monitoring requires the Elasticsearch service's CA certificate, located at /data/elk/elasticsearch/config/certs/http_ca.crt. Copy that certificate into Logstash's config directory, fix its ownership with chown -R 1000:1000 http_ca.crt, and reference it in the configuration file, as sketched below.
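
    A minimal sketch of that certificate step, using the host paths mapped above:

      # Copy the Elasticsearch CA certificate into Logstash's config directory on the host
      cp /data/elk/elasticsearch/config/certs/http_ca.crt /data/elk/logstash/config/
      chown 1000:1000 /data/elk/logstash/config/http_ca.crt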

  6. Restart the Compose stack, log in to the page and carry out follow-up work, such as adding Fleet agents

  7. The final, cleaned-up configuration files, with unused comments removed, are as follows

    • elasticsearch.yml

      cluster.name: docker-cluster
      network.host: 0.0.0.0
      xpack.security.enabled: true
      xpack.security.enrollment.enabled: true
      xpack.security.http.ssl:
        enabled: true
        keystore.path: certs/http.p12
      xpack.security.transport.ssl:
        enabled: true
        verification_mode: certificate
        keystore.path: certs/transport.p12
        truststore.path: certs/transport.p12
      cluster.initial_master_nodes:
        - f9916c4ecab7
    • kibana.yml

      server.host: 0.0.0.0
      server.shutdownTimeout: 5s
      elasticsearch.hosts:
        - 'https://172.20.0.2:9200'
      monitoring.ui.container.elasticsearch.enabled: true
      elasticsearch.serviceAccountToken: >-
        AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2NTg1OTgzMzk4MzI6UmdEcXpZd3JRczZNUkp5dllwMzRSZw
      elasticsearch.ssl.certificateAuthorities:
        - /usr/share/kibana/data/ca_1658598340945.crt
      xpack.fleet.outputs:
        - id: fleet-default-output
          name: default
          is_default: true
          is_default_monitoring: true
          type: elasticsearch
          hosts:
            - 'https://172.20.0.2:9200'
          ca_trusted_fingerprint: c11f2c96cc388bbf8432246cc37015b27b35c8bb0fa1fcac5c1e9d78cf968846
      server.publicBaseUrl: 'http://jhzhang.top:5601'
      xpack.encryptedSavedObjects.encryptionKey: cb990a800a6cf0def51a4ec1f9d7ac23
      xpack.reporting.encryptionKey: 1924a86eb17aef3053666a8e29fd42cf
      xpack.security.encryptionKey: 768aee2b2087e3b21d8c5cc92ee57208
    • logstash.yml

      http.host: 0.0.0.0
      config.reload.automatic: true
      config.reload.interval: 30s
      xpack.monitoring.enabled: true
      xpack.monitoring.elasticsearch.username: logstash_system
      xpack.monitoring.elasticsearch.password: elastic
      xpack.monitoring.elasticsearch.hosts:
        - 'https://172.20.0.2:9200'
      xpack.monitoring.elasticsearch.ssl.certificate_authority: /usr/share/logstash/config/http_ca.crt

Install and configure Filebeat

Note

Installing the various Beats standalone is not recommended. Prefer the Elastic-recommended approach of collecting logs by adding integrations through a Fleet agent; follow the step-by-step guidance in the Kibana UI.

Installation

The following uses CentOS 6.9 as an example; for other client operating systems, install and configure according to the relevant documentation.

On the releases page https://www.elastic.co/cn/downloads/past-releases#filebeat, choose the version that matches your ELK stack (8.1.0 here) and download the file for your operating system. For CentOS, download filebeat-8.1.0-x86_64.rpm, upload it to the server whose logs you want to collect, and install it with yum

yum install ./filebeat-8.1.0-x86_64.rpm

Configuration

  1. Create an inputs configuration directory, so input configs can be modified dynamically

    mkdir -p /etc/filebeat/input.d

  2. Edit the configuration file: enable dynamic config loading and change the output to Logstash

    vim /etc/filebeat/filebeat.yml

    # Load external input configs dynamically, for a more modular setup. Note: the main config itself cannot be reloaded dynamically
    filebeat.config.inputs:
      enabled: true
      path: ${path.config}/input.d/*.yml
      reload.enabled: true
      reload.period: 10s
      
    # Load the bundled dashboards into Kibana
    setup.dashboards.enabled: true
    # Kibana endpoint for loading the dashboards. Note: username must be the elastic superuser, otherwise you get: error loading index pattern: returned 403 to import file
    setup.kibana:
      host: "172.16.32.115:5601"
      protocol: "http"
      username: elastic
      password: elastic
      
    # Disable the default elasticsearch output
    #output.elasticsearch:
      # Array of hosts to connect to.
      #hosts: ["localhost:9200"]
    
    # Enable X-Pack monitoring, so Filebeat shows up in Kibana stack monitoring. Note: cluster_uuid must be looked up via the Elasticsearch 9200 API
    monitoring.enabled: true
    monitoring.cluster_uuid: wwoRAYWFQw6COjmKOU4taA
    monitoring.elasticsearch:
      hosts: ["http://172.16.32.115:9200"]
      username: beats_system
      password: elastic
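    # The cluster_uuid above can be read from the Elasticsearch root endpoint, e.g.:
    #   curl -u elastic:elastic http://172.16.32.115:9200/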
        
    # Enable the logstash output
    output.logstash:
      # The Logstash hosts
      hosts: ["172.16.32.115:5044"]

Practice

  1. Configure Filebeat to read the Spring Boot project logs: create a yml file in the external config directory set up above and configure the inputs

    vim /etc/filebeat/input.d/springbootservice.yml

    # Use the new filestream input type
    - type: filestream
      enabled: true
      paths:
        # Log path of one Spring Boot project
        - /home/ykt/tendyron/sc-api-service/log/spring.log
      parsers:
        # Merge lines that do not start with yyyy-MM-dd into the previous line
        - multiline:
            type: pattern
            pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
            negate: true
            match: after
      fields:
        # Add a service-name field, used later by Logstash to route events to per-service indices
        service_name: sc-api-service
    
    - type: filestream
      enabled: true
      paths:
        - /home/ykt/tendyron/tdr-gateway-service/log/spring.log
      parsers:
        - multiline:
            type: pattern
            pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
            negate: true
            match: after
      fields:
        service_name: tdr-gateway-service
    
    - type: filestream
      enabled: true
      paths:
        - /home/ykt/tendyron/tdr-tcp-gateway-service/log/spring.log
      parsers:
        - multiline:
            type: pattern
            pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
            negate: true
            match: after
      fields:
        service_name: tdr-tcp-gateway-service
  2. On the machine where Logstash is deployed, modify the Logstash beats input to process the events, for example

    vim /data/elk/logstash/conf.d/02-beats-input.conf

    # Input: the various Beats send events in through port 5044
    input {
      beats {
        id => "baseBeat"
        port => 5044
        ssl => false
        #ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
        #ssl_key => "/etc/pki/tls/private/logstash-beats.key"
      }
    }
    
    filter {
        # Parse the default Spring Boot log format with Sleuth
        grok {
            id => "springBootSleuthLogGrok"
            match => {
                 # Use a grok filter to split the Spring Boot Sleuth message shipped by Filebeat into specific fields for later analysis. Note that DATA and GREEDYDATA behave very differently in grok: GREEDYDATA also matches the delimiter characters, causing mis-parses; for fields like trace_id and span_id it would swallow the trailing ","
                "message" => "%{TIMESTAMP_ISO8601:log_timestamp}\s+%{LOGLEVEL:log_level}\s+\[%{DATA:service_name},%{DATA:trace_id},%{DATA:span_id}\]\s+%{NUMBER:pid}\s+---\s+\[%{DATA:thread_name}\]\s+%{JAVACLASS:class_name}\s+:\s+(?<log_message>(.|\r|\n)*)"
            }
        }  
    
        # The date filter replaces the Logstash timestamp with the timestamp parsed from the log line, fixing time mismatches; the filter's target defaults to @timestamp
        date {
            id => "springBootSleuthLogDate"
            match => ["log_timestamp", "yyyy-MM-dd HH:mm:ss.SSS", "ISO8601"]  
        }
    
    
        # Drop the intermediate log_timestamp field
        mutate {
            id => "springBootSleuthLogMutate"
            remove_field => ["log_timestamp"]
        }
    
    }
    
    
    # Output: independent of the default 30-output.conf. The service_name configured in Filebeat is used here to build a separate index per service
    output {
      elasticsearch {
        id => "springBootSleuthLogElasticsearch"
        hosts => ["http://127.0.0.1:9200"]
        user => "elastic"
        password => "elastic"
        index => "%{[fields][service_name]}-%{+YYYY.MM.dd}"
      }
    }
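
    For reference, a hypothetical log line in the default Spring Boot + Sleuth format that the multiline and grok patterns above are written for:

    2022-07-25 10:15:30.123  INFO [sc-api-service,5f2b9c1d8e4a7b3c,8e4a7b3c] 12345 --- [http-nio-8080-exec-1] c.t.api.DemoController   : request handled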

Verification

Log in to Kibana as the elastic user.

Visit http://172.16.32.115:5601/app/monitoring to view cluster monitoring information.

Visit http://172.16.32.115:5601/app/management/kibana/dataViews to add Data Views for use in Discover.

Visit http://172.16.32.115:5601/app/discover to browse the added Data Views and work with the data.
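
You can also confirm from a shell that the per-service indices are being created, using the credentials set earlier:

curl -u elastic:elastic 'http://172.16.32.115:9200/_cat/indices?v'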

References

Kibana documentation: https://www.elastic.co/guide/en/kibana/current/index.html

Logstash documentation: https://www.elastic.co/guide/en/logstash/current/index.html

Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html

Filebeat documentation: https://www.elastic.co/guide/en/beats/filebeat/current/index.html

A good general Docker reference: http://docker.baoshu.red/

