Filebeat Configuration File Excerpts

This post is an excerpt of a Filebeat configuration file with brief annotations, kept mainly as a personal record so I can look things up later.
Note: Filebeat 6.8.1 was used, tested on CentOS 7.

I recently needed platform-wide log collection, so I evaluated and tested Filebeat for the job. The Elastic Stack provides several lightweight BEATS data shippers, and Filebeat is the lightweight log shipper. Installed as an agent on the server, Filebeat monitors log directories or specific log files, collects them in tail-file fashion, and forwards them to Elasticsearch, Logstash, or other outputs. Official reference: FILEBEAT
The body of this post is the main Filebeat configuration file copied over verbatim, with some brief annotations and usage examples added, mostly in the Log input section. The file lists every option rather than a sensible selection, and some options conflict when enabled together, so pick a reasonable subset for production. I am posting it for my own future reference, and I will keep enriching the annotations and examples as I use Filebeat and deepen my understanding of its collection features. Because the layout changed while annotating, the text below cannot be copied directly as a configuration file.

Casual notes; everything from here on is the configuration file:

######################## Filebeat Configuration ############################
# This file is a full configuration example documenting all non-deprecated
# options in comments. For a shorter configuration example, that contains only
# the most common options, please see filebeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#==========================  Modules configuration =============================
#Filebeat modules provide a quick way to handle common log formats. They bundle default configurations, Elasticsearch ingest node pipeline definitions, and Kibana dashboards, so a log monitoring solution can be implemented and deployed quickly.
#Below are the modules for common log types; for logs not covered by a module, an input has to be configured manually.
# By default each module watches that log type's default directories; to change the defaults, set the var.paths variable (see the commented example right after filebeat.modules below).
filebeat.modules:
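# -=-=-= Illustrative example (not part of the original file): enabling a module and overriding
# its default log paths with var.paths. The nginx module is real; the paths are hypothetical.
#- module: nginx
#  access:
#    enabled: true
#    var.paths: ["/opt/nginx/logs/access.log*"]
#  error:
#    enabled: true
#    var.paths: ["/opt/nginx/logs/error.log*"]
# -=-=-=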
#-------------------------------- System Module --------------------------------
#- module: system
  # Syslog
  #syslog:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
  # Authorization logs
  #auth:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
#-------------------------------- Apache2 Module --------------------------------
#- module: apache2
  # Access logs
  #access:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
  # Error logs
  #error:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
#-------------------------------- Auditd Module --------------------------------
#- module: auditd
  #log:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
#----------------------------- Elasticsearch Module -----------------------------
- module: elasticsearch
  # Server log
  server:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
  gc:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
  audit:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
  slowlog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
  deprecation:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

#-------------------------------- Haproxy Module --------------------------------
- module: haproxy
  # All logs
  log:
    enabled: true
    # Set which input to use between syslog (default) or file.
    #var.input:
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
#-------------------------------- Icinga Module --------------------------------
#- module: icinga
  # Main logs
  #main:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
  # Debug logs
  #debug:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
  # Startup logs
  #startup:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
#---------------------------------- IIS Module ----------------------------------
#- module: iis
  # Access logs
  #access:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
  # Error logs
  #error:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
#------------------------------- Iptables Module -------------------------------
- module: iptables
  log:
    enabled: true
    # Set which input to use between syslog (default) or file.
    #var.input:
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
#--------------------------------- Kafka Module ---------------------------------
- module: kafka
  # All logs
  log:
    enabled: true
    # Set custom paths for Kafka. If left empty,
    # Filebeat will look under /opt.
    #var.kafka_home:
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
#-------------------------------- Kibana Module --------------------------------
- module: kibana
  # All logs
  log:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
#------------------------------- Logstash Module -------------------------------
#- module: logstash
  # logs
  #log:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    # var.paths:
  # Slow logs
  #slowlog:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
#-------------------------------- Mongodb Module --------------------------------
#- module: mongodb
  # Logs
  #log:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
#--------------------------------- MySQL Module ---------------------------------
#- module: mysql
  # Error logs
  #error:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
  # Slow logs
  #slowlog:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
#--------------------------------- Nginx Module ---------------------------------
#- module: nginx
  # Access logs
  #access:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
  # Error logs
  #error:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
#-------------------------------- Osquery Module --------------------------------
- module: osquery
  result:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # If true, all fields created by this module are prefixed with
    # `osquery.result`. Set to false to copy the fields in the root
    # of the document. The default is true.
    #var.use_namespace: true
#------------------------------ PostgreSQL Module ------------------------------
#- module: postgresql
  # Logs
  #log:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:
#--------------------------------- Redis Module ---------------------------------
#- module: redis
  # Main logs
  #log:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths: ["/var/log/redis/redis-server.log*"]
  # Slow logs, retrieved via the Redis API (SLOWLOG)
  #slowlog:
    #enabled: true
    # The Redis hosts to connect to.
    #var.hosts: ["localhost:6379"]
    # Optional, the password to use when connecting to Redis.
    #var.password:
#------------------------------- Suricata Module -------------------------------
- module: suricata
  # All logs
  eve:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
#-------------------------------- Traefik Module --------------------------------
#- module: traefik
  # Access logs
  #access:
    #enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Input configuration (advanced). Any input configuration option
    # can be added under this section.
    #input:

#=========================== Filebeat inputs =============================
# Custom inputs specify how Filebeat locates and processes input data. Multiple inputs can be defined, and the same input type can be used more than once.
# List of inputs to fetch data.
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
# Type of the files. Based on this the way the file is read is decided.
# The different types cannot be mixed in one input
#
# Possible options are:
# * log: Reads every line of the log file (default)
# * stdin: Reads the standard in
# Notes:
# Make sure a file is not defined more than once across all inputs because this can lead to unexpected behaviour. Each log file should be covered by only one input.
# When dealing with file rotation, avoid harvesting symlinks. Instead use the paths setting to point to the original file, and specify a pattern that matches the file you want to harvest and all of its rotated files.
#------------------------------ Log input --------------------------------
#Input type: log
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  # To fetch all ".log" files from a specific level of subdirectories
  # /var/log/*/*.log can be used.
  # For each file found under this path, a harvester is started.
  # Make sure no file is defined twice as this can lead to unexpected behaviour.
  # Paths of the logs to collect; multiple paths can be set.
  # A harvester is started for each file that is collected.
  paths:
    - /var/log/*.log
    - /var/log/*/*.log
    #- c:\programdata\elasticsearch\logs\*
  ### Recursive glob configuration
  # Note: the recursive_glob setting controls whether "**" is expanded recursively (enabled by default), e.g. /opt/allLogs/**/*.log.
  # With it disabled, /opt/allLogs/*/*.log only watches first-level subdirectories and nothing deeper.
  # Expansion: /foo/** expands to /foo, /foo/*, /foo/*/*, and so on, up to a maximum of 8 levels deep (a commented example follows recursive_glob.enabled below).
  # Expand "**" patterns into regular glob patterns.
  #recursive_glob.enabled: true
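  # -=-=-= Illustrative example (not part of the original file): with recursive_glob enabled
  # (the default), a single "**" pattern covers nested subdirectories up to 8 levels deep.
  # The path below is hypothetical.
  #paths:
  #  - /opt/allLogs/**/*.log
  # -=-=-=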
  # Configure the file encoding for reading files with international characters
  # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
  # Some sample encodings:
  #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
  #    hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
  # Encoding of the files being monitored; both plain and utf-8 handle Chinese logs correctly.
  #encoding: plain
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list. The include_lines is called before
  # exclude_lines. By default, no lines are dropped.
  # [Blacklist] Drop lines matching any regular expression in the list. The example below drops any line starting with DBG.
  #exclude_lines: ['^DBG']
  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list. The include_lines is called before
  # exclude_lines. By default, all the lines are exported.
  # [Whitelist] Export only lines matching the regular expressions in the list. If include_lines and exclude_lines are both set, include_lines runs first and exclude_lines afterwards, regardless of declaration order: whitelist first, then blacklist.
  # For example, include_lines: ['sometext'] together with exclude_lines: ['^DBG'] exports every line containing sometext except lines starting with DBG (debug messages).
  # The example below exports all lines starting with ERR or WARN. Alternatively, include_lines: ['^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3} +ERR|WARN'] exports lines such as "2019-06-21 18:33:06,463 WARN" or the same with ERR.
  #include_lines: ['^ERR', '^WARN']
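  # -=-=-= Illustrative example (not part of the original file): keep lines containing "sometext",
  # then drop DBG lines; include_lines always runs before exclude_lines. The values are hypothetical.
  #include_lines: ['sometext']
  #exclude_lines: ['^DBG']
  # -=-=-=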
  
  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  # Ignore files matching any regular expression in the list.
  #exclude_files: ['.gz$']
  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  # Add extra information to every output log event, e.g. "level: debug", which makes it easy to group and filter logs later.
  # By default the new fields are placed under the "fields" sub-dictionary in the output, e.g. fields.level.
  # In other words, an extra field is added to each document in ES, in the form "fields": {"level": "debug"}.
  #fields:
  #  level: debug
  #  review: 1
  # Set to true to store the additional fields as top level fields instead
  # of under the "fields" sub-dictionary. In case of name conflicts with the
  # fields added by Filebeat itself, the custom fields overwrite the default
  # fields.
  # If this option is set to true, the new fields become top-level fields instead of living under the "fields" sub-dictionary. Custom fields overwrite Filebeat's default fields on name conflicts.
  # When true, the field appears in ES as "level": "debug"; when false, it appears as "fields": {"level": "debug"} (a commented example follows fields_under_root below).
  #fields_under_root: false
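  # -=-=-= Illustrative example (not part of the original file): add a custom field and lift it to
  # the top level. With fields_under_root: false the event carries "fields": {"level": "debug"};
  # with true it carries a top-level "level": "debug". The field name is hypothetical.
  #fields:
  #  level: debug
  #fields_under_root: true
  # -=-=-=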
  # Ignore files which were modified more than the defined timespan in the past.
  # ignore_older is disabled by default, so no files are ignored by setting it to 0.
  # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
  # Filebeat can be told to ignore log content modified before a given time span, e.g. 2h (two hours) or 5m (5 minutes).
  # If this option is enabled, Filebeat ignores any files that were modified before the specified timespan. Configuring ignore_older can be especially useful if you keep log files for a long time. For example, if you want to start Filebeat, but only want to send the newest files and files from last week, you can configure this option.
  # Note: you must set ignore_older to be greater than close_inactive.
  # The files affected by this setting fall into two categories: files that were never harvested, and files that were harvested but not updated for longer than ignore_older.
  # Before a file is ignored, make sure it is no longer being read, which is why ignore_older must be greater than close_inactive. If a file is being harvested when it falls under ignore_older, the harvester finishes reading it, closes it once close_inactive is reached, and only then is the file ignored. Details: https://www.elastic.co/guide/en/beats/filebeat/6.8/filebeat-input-log.html#filebeat-input-log-ignore-older (a short commented example follows the ignore_older line below).
  #ignore_older: 0
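  # -=-=-= Illustrative example (not part of the original file): only ship files modified within
  # the last 24h, keeping ignore_older larger than close_inactive as the docs require. Values are hypothetical.
  #ignore_older: 24h
  #close_inactive: 5m
  # -=-=-=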
  # How often the input checks for new files in the paths that are specified
  # for harvesting. Specify 1s to scan the directory as frequently as possible
  # without causing Filebeat to scan too frequently. Default: 10s.
  # How often Filebeat checks the directories specified by the input for file updates (e.g. newly created files).
  # Setting it to 0s makes Filebeat notice updates as fast as possible (at the cost of higher CPU usage). The default is 10s.
  #scan_frequency: 10s
  # Defines the buffer size every harvester uses when fetching the file
  # Buffer size, in bytes, used by each harvester when reading a file. The default is 16384.
  #harvester_buffer_size: 16384
  # Maximum number of bytes a single log event can have
  # All bytes after max_bytes are discarded and not sent. The default is 10MB.
  # This is especially useful for multiline log messages which can get large.
  # Each new line in a log file is one log event; max_bytes caps the number of bytes a single event may have. Extra bytes are discarded and not sent. This is especially useful for multiline messages that can grow large. The default is 10MB (10485760).
  #max_bytes: 10485760
  ### JSON configuration
  # Lets Filebeat decode logs structured as JSON messages. Filebeat processes logs line by line, so JSON decoding only works when there is exactly one JSON object per line (a commented example follows the json.* options below).
  # Decoding happens before line filtering and multiline.
  # The decoding happens before line filtering and multiline. 
  # You can combine JSON decoding with filtering and multiline if you set the message_key option. 
  # This can be helpful in situations where the application logs are wrapped in JSON objects, as it happens for example with Docker.
 
  # Decode JSON options. Enable this if your logs are structured in JSON.
  # JSON key on which to apply the line filtering and multiline settings. This key
  # must be top level and its value must be string, otherwise it is ignored. If
  # no text key is defined, the line filtering and multiline features cannot be used.
  # An optional configuration setting that specifies a JSON key on which to apply the line filtering and multiline settings. If specified the key must be at the top level in the JSON object and the value associated with the key must be a string, otherwise no filtering or multiline aggregation will occur.
  #json.message_key:
  # You must specify at least one of the following settings to enable JSON parsing mode.
  # By default, the decoded JSON is placed under a "json" key in the output document.
  # If you enable this setting, the keys are copied top level in the output document.
  # That is, by default the decoded JSON sits under the "json" key; enabling this copies the keys to the top level of the output document. The default is false.
  #json.keys_under_root: false
  # If keys_under_root and this setting are enabled, then the values from the decoded
  # JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.)
  # in case of conflicts.
  # If keys_under_root and this setting are enabled, values from the decoded JSON object overwrite the fields Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
  #json.overwrite_keys: false
  # If this setting is enabled, Filebeat adds a "error.message" and "error.key: json" key in case of JSON
  # unmarshaling errors or when a text key is defined in the configuration but cannot
  # be used.
  # If enabled, Filebeat adds an "error.message" and "error.type: json" key when JSON unmarshalling fails or when message_key is defined in the configuration but cannot be used.
  #json.add_error_key: false
  #An optional configuration setting that specifies if JSON decoding errors should be logged or not. If set to true, errors will not be logged. The default is false.
  #json.ignore_decoding_error  (documented on the official site, but not present in this example file)
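  # -=-=-= Illustrative example (not part of the original file): decode one-JSON-object-per-line
  # logs, lift the keys to the top level, and report decoding errors. "log" is a hypothetical
  # message key used for line filtering/multiline.
  #json.keys_under_root: true
  #json.overwrite_keys: true
  #json.add_error_key: true
  #json.message_key: log
  # -=-=-=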
  
  ### Multiline options
  # Options controlling how Filebeat handles log messages that span multiple lines (a commented example for Java stack traces follows the multiline options below).
  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation
  # The regexp Pattern that has to be matched. 
  #  Note that the regexp patterns supported by Filebeat differ somewhat from the patterns supported by Logstash. See Regular expression support for a list of supported regexp patterns. Depending on how you configure other multiline options, lines that match the specified regular expression are considered either continuations of a previous line or the start of a new multiline event. You can set the negate option to negate the pattern.
  # Regular expression reference: https://www.elastic.co/guide/en/beats/filebeat/6.8/regexp-support.html
  # The example pattern below matches all lines starting with [. Another example, multiline.pattern: '^java.|^[[:space:]]+(at|\.{3})|^Caused by:', merges every line starting with java., whitespace, or "Caused by:" into the previous line.
  #multiline.pattern: ^\[
  # Defines if the pattern set under pattern should be negated or not. Default is false.
  # Whether the pattern should be negated. The default is false; set it to true to invert the match.
  #multiline.negate: false
  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after
  # Examples: negate: false, match: after => consecutive lines matching the pattern are appended to the preceding line that does not match; negate: false, match: before => consecutive lines matching the pattern are prepended to the next line that does not match; negate: true, match: after => consecutive lines not matching the pattern are appended to the preceding line that matches; negate: true, match: before => consecutive lines not matching the pattern are prepended to the next line that matches.
  # The maximum number of lines that are combined to one event.
  # In case there are more than max_lines, the additional lines are discarded. This is the maximum number of lines that can be combined into one event; if a multiline message contains more than max_lines, the rest is dropped. The default is 500.
  # Default is 500
  #multiline.max_lines: 500
  # After the defined timeout, an multiline event is sent even if no new pattern was found to start a new event
  # That is, once the timeout elapses, the lines matched so far are shipped as one event even if no new pattern (new event) has been seen.
  # Default is 5s.
  #multiline.timeout: 5s
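  # -=-=-= Illustrative example (not part of the original file): join Java stack traces into one
  # event. Lines that do NOT start with a date are appended to the previous line; the timestamp
  # format is an assumption about the log layout.
  #multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  #multiline.negate: true
  #multiline.match: after
  # -=-=-=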
  # Setting tail_files to true means filebeat starts reading new files at the end
  # instead of the beginning. If this is used in combination with log rotation
  # this can mean that the first entries of a new file are skipped.
  # If set to true, Filebeat starts reading new files at the end rather than the beginning. Combined with log rotation this may skip the first entries of a new file. The default is false. The option only applies to files Filebeat has not processed yet; if Filebeat ran before and the file's state was persisted, tail_files is not applied and reading resumes from the previous offset. To apply tail_files to all files you must stop Filebeat and delete the registry file; note that this discards all previous state. Tip: this setting is useful the first time Filebeat runs over a set of log files to avoid picking up old lines, but it is recommended to disable it after the first run, otherwise lines may be lost during file rotation.
  # tail_files: false
  # The Ingest Node pipeline ID associated with this input. If this is set, it
  # overwrites the pipeline option from the Elasticsearch output.
  #pipeline:
  # If symlinks is enabled, symlinks are opened and harvested. The harvester is opening the
  # original for harvesting but will report the symlink name as source.
  # Allows Filebeat to harvest symlinks in addition to regular files. When harvesting a symlink, Filebeat opens and reads the original file even though it reports the symlink path as the source.
  # Configure either the symlink or the original path, not both. If both are configured in the same input, Filebeat detects the problem and only processes the first of the two files. If they are configured in two different inputs, Filebeat sends duplicate data and the two inputs overwrite each other's recorded state. The symlinks option can be useful if symlinks to the log files have additional metadata in the file name, and you want to process the metadata in Logstash. This is, for example, the case for Kubernetes log files.
  # Because this option may lead to data loss, it is disabled (false) by default.
  #symlinks: false
  # Backoff values define how aggressively filebeat crawls new files for updates
  # The default values can be used in most cases. Backoff defines how long it is waited
  # to check a file again after EOF is reached. Default is 1s which means the file
  # is checked every second if new lines were added. This leads to a near real time crawling.
  # Every time a new line appears, backoff is reset to the initial value.
  # After Filebeat reaches EOF (end of file), how long it waits before checking the file again for updates. The default is 1s, i.e. the file is checked every second for new lines, giving near real-time collection. Each time a new line appears, backoff is reset to the initial value.
  #backoff: 1s
  # Max backoff defines what the maximum backoff time is. After having backed off multiple times
  # from checking the files, the waiting time will never exceed max_backoff independent of the
  # backoff factor. Having it set to 10s means in the worst case a new line can be added to a log
  # file after having backed off multiple times, it takes a maximum of 10s to read the new line
  # The maximum time Filebeat waits between checks after reaching EOF; the default is 10s. Even after backing off multiple times, a newly added line is read after at most max_backoff.
  #max_backoff: 10s
  # The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
  # the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.
  # The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
  # Factor by which the backoff wait time is multiplied on each retry; the default is 2. With the defaults, the file is first rechecked after 1s, and if it keeps showing no changes the interval grows (1s, 2s, 4s, ...) until it is capped at max_backoff (10s).
  #backoff_factor: 2
  # Max number of harvesters that are started in parallel.
  # Default is 0 which means unlimited
  # Limits how many harvesters are started in parallel, which directly caps the number of open files. The default is 0 (unlimited).
  #harvester_limit: 0
  ### Harvester closing options
  # Summary: the close_* options close the harvester after a given condition or point in time. Closing a harvester means closing the file handle. If a file is updated after its harvester was closed, it is picked up again once scan_frequency elapses. However, if the file is deleted or moved while the harvester is closed, Filebeat will not pick it up again, and any data the harvester had not yet read is lost.
  # Close inactive closes the file handler after the predefined period.
  # The period starts when the last line of the file was, not the file ModTime.
  # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
  # With this option enabled, Filebeat closes the file handle if the file has not been harvested for the specified duration. The countdown starts from when the last line was read, not from the file's modification time.
  # If a closed file changes again, a new harvester is started and the latest changes are picked up after scan_frequency.
  # Recommendation: set close_inactive larger than the slowest update frequency among the logs covered by this input. For example, if your log files are updated every few seconds, close_inactive: 1m is safe. If update rates differ widely, configure several inputs instead; a very low close_inactive closes file handles quickly but can delay collection of new lines. A short commented example follows close_inactive below.
  # The closing timestamp does not depend on the file's modification time; Filebeat uses an internal timestamp that reflects when the file was last harvested. For example, with close_inactive: 5m the 5-minute countdown starts after the harvester reads the last line of the file.
  #close_inactive: 5m
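  # -=-=-= Illustrative example (not part of the original file): a log written every few seconds
  # can safely use a short close_inactive; slow-moving logs are better served by a separate input
  # with a larger value. The value below is a hypothetical choice.
  #close_inactive: 1m
  # -=-=-=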
  # Close renamed closes a file handler when the file is renamed or rotated.
  # Note: Potential data loss. Make sure to read and understand the docs for this option.
  # When enabled, Filebeat closes the file handle when the file is renamed or moved, e.g. during rotation. Enabling this may cause data loss, so use it with care.
  # By default the harvester stays open and keeps reading the file, because the file handle does not depend on the file name. If close_renamed is enabled and the file is renamed or moved so that it no longer matches the configured file patterns, the file will not be picked up again and Filebeat will not finish reading it.
  #close_renamed: false
  # When enabling this option, a file handler is closed immediately in case a file can't be found
  # any more. In case the file shows up again later, harvesting will continue at the last known position
  # after scan_frequency.
  # When enabled, Filebeat closes the harvester when the file is removed. Normally a file should only be removed after it has been inactive for the close_inactive duration. If a file is deleted early and close_removed is not enabled, Filebeat keeps the file open to make sure the harvester has finished. If this setting results in files that are not completely read because they are removed from disk too early, disable this option. It is enabled by default; if you disable it you must also disable clean_removed.
  #close_removed: true
  # Closes the file handler as soon as the harvesters reaches the end of the file.
  # By default this option is disabled.
  # Note: Potential data loss. Make sure to read and understand the docs for this option.
  # When enabled, Filebeat closes a file as soon as the harvester reaches the end of it. Suitable for files that are written only once.
  #close_eof: false
  ### State options
  # The clean_* options remove state entries from the registry file. They help keep the registry small and can prevent potential inode reuse issues.
  # If a file's modification time is older than clean_inactive, its state is removed from the registry.
  # By default this is disabled.
  # Removes the state of previously harvested files from the registry. When enabled, Filebeat removes a file's state after the specified period of inactivity; the state can only be removed once the file has already been ignored by Filebeat (the file is older than ignore_older).
  # The value must be greater than ignore_older + scan_frequency, so that no state is removed while the file is still being harvested; otherwise Filebeat may keep resending the full content from the beginning, because clean_inactive deletes state for files Filebeat still detects. If the file is updated or reappears after its state was removed, it is read again from the start.
  # This option helps keep the registry file small, especially when many new files are generated every day, and it can also be used to prevent inode reuse issues on Linux.
  #clean_inactive: 0
  # Removes the state for file which cannot be found on disk anymore immediately
  # When enabled, the state of a file is removed from the registry immediately once the file can no longer be found on disk. This option is enabled by default.
  # Note that a file renamed after its harvester finished also has its state removed. If a shared drive disappears for a short period and appears again, all files will be read again from the beginning because the states were removed from the registry file. In such cases, we recommend that you disable the clean_removed option.
  # Note: if you disable close_removed you must also disable clean_removed.
  #clean_removed: true
  # Close timeout closes the harvester after the predefined time.
  # This is independent if the harvester did finish reading the file or not.
  # By default this option is disabled.
  # Note: Potential data loss. Make sure to read and understand the docs for this option.
  # When enabled, Filebeat gives each harvester a predefined lifetime: regardless of whether, or how far, the file has been read, the harvester is closed once the configured time is reached.
  # Although close_timeout closes the file after the timeout, if the file is still being updated Filebeat starts a new harvester again after scan_frequency, and close_timeout starts counting again for the new harvester.
  # When the output is blocked and files Filebeat is harvesting have already been deleted from disk but still hold resources, setting close_timeout to 5m ensures files are closed periodically so the operating system can free them. A short commented example follows close_timeout below.
  # close_timeout should not equal ignore_older: if the file is updated while the harvester is closed, data is lost and the complete file is never sent.
  # When you use close_timeout for logs that contain multiline events, the harvester might stop in the middle of a multiline event, which means that only parts of the event will be sent. If the harvester is started again and the file still exists, only the second part of the event will be sent.
  #close_timeout: 0
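  # -=-=-= Illustrative example (not part of the original file): release file handles periodically
  # when the output may block and rotated files get deleted; 5m matches the scenario described above.
  #close_timeout: 5m
  # -=-=-=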
  # Defines if the input is enabled
  # Enables or disables this input.
  #enabled: true
#----------------------------- Stdin input -------------------------------
# Configuration to use stdin input
#- type: stdin
#------------------------- Redis slowlog input ---------------------------
# Experimental: Config options for the redis slow log input
#- type: redis
  #enabled: false
  # List of hosts to pool to retrieve the slow log information.
  #hosts: ["localhost:6379"]
  # How often the input checks for redis slow log.
  #scan_frequency: 10s
  # Timeout after which time the input should return an error
  #timeout: 1s
  # Network type to be used for redis connection. Default: tcp
  #network: tcp
  # Max number of concurrent connections. Default: 10
  #maxconn: 10
  # Redis AUTH password. Empty by default.
  #password: foobared
#------------------------------ Udp input --------------------------------
# Experimental: Config options for the udp input
#- type: udp
  #enabled: false
  # Maximum size of the message received over UDP
  #max_message_size: 10KiB
#------------------------------ TCP input --------------------------------
# Experimental: Config options for the TCP input
#- type: tcp
  #enabled: false
  # The host and port to receive the new event
  #host: "localhost:9000"
  # Character used to split new message
  #line_delimiter: "\n"
  # Maximum size in bytes of the message received over TCP
  #max_message_size: 20MiB
  # The number of seconds of inactivity before a remote connection is closed.
  #timeout: 300s
  # Use SSL settings for TCP.
  #ssl.enabled: true
  # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
  # 1.2 are enabled.
  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
  # SSL configuration. By default is off.
  # List of root certificates for client verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL server authentication.
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Server Certificate Key,
  #ssl.key: "/etc/pki/client/cert.key"
  # Optional passphrase for decrypting the Certificate Key.
  #ssl.key_passphrase: ''
  # Configure cipher suites to be used for SSL connections.
  #ssl.cipher_suites: []
  # Configure curve types for ECDHE based cipher suites.
  #ssl.curve_types: []
  # Configure what types of client authentication are supported. Valid options
  # are `none`, `optional`, and `required`. When `certificate_authorities` is set it will
  # default to `required` otherwise it will be set to `none`.
  #ssl.client_authentication: "required"
#------------------------------ Syslog input --------------------------------
# Experimental: Config options for the Syslog input
# Accept RFC3164 formatted syslog event via UDP.
#- type: syslog
  #enabled: false
  #protocol.udp:
    # The host and port to receive the new event
    #host: "localhost:9000"
    # Maximum size of the message received over UDP
    #max_message_size: 10KiB
# Accept RFC3164 formatted syslog event via TCP.
#- type: syslog
  #enabled: false
  #protocol.tcp:
    # The host and port to receive the new event
    #host: "localhost:9000"
    # Character used to split new message
    #line_delimiter: "\n"
    # Maximum size in bytes of the message received over TCP
    #max_message_size: 20MiB
    # The number of seconds of inactivity before a remote connection is closed.
    #timeout: 300s
    # Use SSL settings for TCP.
    #ssl.enabled: true
    # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
    # 1.2 are enabled.
    #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
    # SSL configuration. By default is off.
    # List of root certificates for client verifications
    #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
    # Certificate for SSL server authentication.
    #ssl.certificate: "/etc/pki/client/cert.pem"
    # Server Certificate Key,
    #ssl.key: "/etc/pki/client/cert.key"
    # Optional passphrase for decrypting the Certificate Key.
    #ssl.key_passphrase: ''
    # Configure cipher suites to be used for SSL connections.
    #ssl.cipher_suites: []
    # Configure curve types for ECDHE based cipher suites.
    #ssl.curve_types: []
    # Configure what types of client authentication are supported. Valid options
    # are `none`, `optional`, and `required`. When `certificate_authorities` is set it will
    # default to `required` otherwise it will be set to `none`.
    #ssl.client_authentication: "required"
#------------------------------ Docker input --------------------------------
# Experimental: Docker input reads and parses `json-file` logs from Docker
#- type: docker
  #enabled: false
  # Combine partial lines flagged by `json-file` format
  #combine_partials: true
  # Use this to read from all containers, replace * with a container id to read from one:
  #containers:
  #  stream: all # can be all, stdout or stderr
  #  ids:
  #    - '*'
#------------------------------ NetFlow input --------------------------------
# Experimental: Config options for the Netflow/IPFIX collector over UDP input
#- type: netflow
  #enabled: false
  # Address where the NetFlow Collector will bind
  #host: ":2055"
  # Maximum size of the message received over UDP
  #max_message_size: 10KiB
  # List of enabled protocols.
  # Valid values are 'v1', 'v5', 'v6', 'v7', 'v8', 'v9' and 'ipfix'
  #protocols: [ v5, v9, ipfix ]
  # Expiration timeout
  # This is the time before an idle session or unused template is expired.
  # Only applicable to v9 and ipfix protocols. A value of zero disables expiration.
  #expiration_timeout: 30m
  # Queue size limits the number of netflow packets that are queued awaiting
  # processing.
  #queue_size: 8192
#========================== Filebeat autodiscover ==============================
# Autodiscover: https://www.elastic.co/guide/en/beats/filebeat/6.8/configuration-autodiscover.html
# Autodiscover allows you to detect changes in the system and spawn new modules
# or inputs as they happen.
#filebeat.autodiscover:
  # List of enabled autodiscover providers
#  providers:
#    - type: docker
#      templates:
#        - condition:
#            equals.docker.container.image: busybox
#          config:
#            - type: log
#              paths:
#                - /var/lib/docker/containers/${data.docker.container.id}/*.log
#========================= Filebeat global options ============================
# Name of the registry file. If a relative path is used, it is considered relative to the
# data path. (The path of the registry file.)
#filebeat.registry_file: ${path.data}/registry
# The permissions mask to apply on registry file. The default value is 0600.
# Must be a valid Unix-style file permissions mask expressed in octal notation.
# This option is not supported on Windows. (File permissions.)
#filebeat.registry_file_permissions: 0600
# The timeout value that controls when registry entries are written to disk (flushed).
# When an unwritten update exceeds this value, it triggers a write to disk.
# When registry_flush is set to 0s, the registry is written to disk after
# each batch of events has been published successfully. The default value is 0s.
#filebeat.registry_flush: 0s
# Note: the registry is always updated when Filebeat shuts down normally. After an abnormal shutdown, the registry will not be up to date if the registry_flush value is >0s; Filebeat will then send already-published events again (depending on the values in the last written registry file).
# Filtering out a huge number of logs can cause many registry updates, slowing down processing. Setting registry_flush to a value >0s reduces write operations, helping Filebeat process more events.

# By default Ingest pipelines are not updated if a pipeline with the same ID
# already exists. If this option is enabled Filebeat overwrites pipelines
# everytime a new Elasticsearch connection is established.
#filebeat.overwrite_pipelines: false
# How long filebeat waits on shutdown for the publisher to finish.
# Default is 0, not waiting. This means that any events sent to the output, but not acknowledged before Filebeat shuts down, are sent again when you restart Filebeat
# How long Filebeat waits on shutdown for the publisher to finish sending events.
# By default the option is disabled, and Filebeat does not wait for the publisher before shutting down; any events that were sent to the output but not acknowledged before shutdown are sent again on restart. Configure shutdown_timeout to set the maximum time Filebeat waits for the publisher to finish sending events before shutting down; if all events are acknowledged before shutdown_timeout elapses, Filebeat shuts down immediately (a commented example follows below).
#filebeat.shutdown_timeout: 0
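# -=-=-= Illustrative example (not part of the original file): wait up to 5s on shutdown for the
# publisher to finish, reducing duplicate events after a restart. The value is a hypothetical choice.
#filebeat.shutdown_timeout: 5s
# -=-=-=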
# Enable filebeat config reloading 
# This feature applies to input and module configurations loaded as external configuration files; the main filebeat.yml cannot be reloaded this way.
# Filebeat can load external configuration files for inputs and modules, letting you split the configuration into several smaller files. https://www.elastic.co/guide/en/beats/filebeat/6.8/_live_reloading.html
#https://www.elastic.co/guide/en/beats/filebeat/6.8/filebeat-configuration-reloading.html
#filebeat.config:
  #inputs:
    #enabled: false 
    # Defines the configuration files to check.
    #path: inputs.d/*.yml
    # When set to true, enables dynamic configuration reloading.
    #reload.enabled: true
    # How often to check the files for changes. Do not set the period to less than 1s, because file modification times are often stored with second granularity; a sub-second period only adds unnecessary overhead.
    #reload.period: 10s
  #modules:
    #enabled: false
    #path: modules.d/*.yml
    #reload.enabled: true
    #reload.period: 10s
-=-=-= Example of external configuration files; each entry starts with - type:
#- type: log
#  paths:
#    - /var/log/mysql.log
#  scan_frequency: 10s
#- type: log
#  paths:
#    - /var/log/apache.log
#  scan_frequency: 5s
-=-=-=
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this options is not defined, the hostname is used.
# Sets the name; if left empty, the server's hostname is used.
#name:
# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different logical properties.
# A list of tags that Filebeat includes in the tags field of each published event; tags make filtering and grouping easy.
# beat通用参数:A list of tags that the Beat includes in the tags field of each published transaction. Tags make it easy to group servers by different logical properties. For example, if you have a cluster of web servers, you can add the "webservers" tag to the Beat on each server, and then use filters and queries in the Kibana web interface to get visualisations for the whole group of servers.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
# Optional fields to add extra information to the output; they can be used to filter log data. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. By default they are placed under the "fields" sub-dictionary; to store custom fields as top-level fields, set fields_under_root to true.
#fields:
#  env: staging
# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
# If this option is set to true, custom fields are stored as top-level fields in the output document instead of being grouped under a fields sub-dictionary. If a custom field name conflicts with a field added by Filebeat, the custom field overwrites the other field.
#fields_under_root: false
#queue: internal queue configuration for buffering events
#https://www.elastic.co/guide/en/beats/filebeat/6.8/configuring-internal-queue.html
# Internal queue configuration for buffering events to be published. Filebeat uses the internal queue to buffer events before publishing them; the queue is responsible for buffering events and combining them into batches that the outputs can consume. The outputs will use bulk operations to send a batch of events in one transaction. The queue is configured in filebeat.yml, and only one queue type can be configured.
#queue:
  # Queue type by name (default 'mem')
  # Memory queue: keeps all events in memory.
  # The memory queue will present all available events (up to the outputs
  # bulk_max_size) to the output, the moment the output is ready to server
  # another batch of events.
  #mem:
    # Max number of events the queue can buffer. The default is 4096 events.
    #events: 4096
#Summary: if no flush interval and no number of events to flush is configured, all events published to this queue are directly consumed by the outputs (i.e. sent immediately). The defaults are flush.min_events: 2048 and flush.timeout: 1s.
#The output's bulk_max_size setting limits how many events are processed at once. The memory queue waits for the output to acknowledge or drop events; when the queue is full, no new events can be inserted, and it only frees space for more events after a signal from the output.
    # Hints the minimum number of events stored in the queue,
    # before providing a batch of events to the outputs (the minimum number of events required before a flush).
    # The default value is set to 2048.
    # A value of 0 ensures events are immediately available
    # to be sent to the outputs. If set to 0, the outputs can publish events immediately.
    #flush.min_events: 2048
    # Maximum duration after which events are available to the outputs,
    # if the number of events stored in the queue is < min_flush_events.
    # Maximum time to wait for flush.min_events to be reached; if set to 0, events are forwarded immediately.
    #flush.timeout: 1s
#-=-=-=-
#This sample configuration forwards events to the output if 512 events are available or the oldest available event has been waiting for 5s in the queue: 
#queue.mem:
#  events: 4096
#  flush.min_events: 512
#  flush.timeout: 5s
#-=-=-=-
  # Spool queue (on-disk ring buffer)
  # The spool queue will store events in a local spool file, before
  # forwarding the events to the outputs.
  #
  # Beta: spooling to disk is currently a beta feature. Use with care.
  #
  # The spool file is a circular buffer, which blocks once the file/buffer is full.
  # Events are put into a write buffer and flushed once the write buffer
  # is full or the flush_timeout is triggered.
  # Once ACKed by the output, events are removed immediately from the queue,
  # making space for new events to be persisted.
  #spool:
    # The file namespace configures the file path and the file creation settings.
    # Once the file exists, the `size`, `page_size` and `prealloc` settings
    # will have no more effect.
    #file:
      # Location of spool file. The default value is ${path.data}/spool.dat.
      #path: "${path.data}/spool.dat"
      # Configure file permissions if file is created. The default value is 0600.
      #permissions: 0600
      # File size hint. The spool blocks, once this limit is reached. The default value is 100 MiB.
      #size: 100MiB
      # The files page size. A file is split into multiple pages of the same size. The default value is 4KiB. On disk, the spool divides a file into pages. 
      #page_size: 4KiB
      # If prealloc is set, the required space for the file is reserved using
      # truncate. The default value is true.
      #prealloc: true
    # Spool writer settings
    # Events are serialized into a write buffer. The write buffer is flushed if:
    # - The buffer limit has been reached.
    # - The configured limit of buffered events is reached.
    # - The flush timeout is triggered.
    #write:
      # Sets the write buffer size.
      #buffer_size: 1MiB
      # Maximum duration after which events are flushed if the write buffer
      # is not full yet. The default value is 1s.
      #flush.timeout: 1s
      # Number of maximum buffered events. The write buffer is flushed once the
      # limit is reached.
      #flush.events: 16384
      # Configure the on-disk event encoding. The encoding can be changed
      # between restarts.
      # Valid encodings are: json, ubjson, and cbor.
      #codec: cbor
    #read:
      # Reader flush timeout, waiting for more events to become available, so
      # to fill a complete batch as required by the outputs.
      # If flush_timeout is 0, all available events are forwarded to the
      # outputs immediately.
      # The default value is 0s.
      #flush.timeout: 0s
# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
# Sets the maximum number of CPUs that can execute simultaneously; the default is the number of logical CPUs available on the system.
#max_procs:
#================================ Processors ===================================
# https://www.elastic.co/guide/en/beats/filebeat/6.8/filtering-and-enhancing-data.html
# Processors are used to reduce the number of fields in the exported event or to
# enhance the event with external metadata. This section defines a list of
# processors that are applied one by one and the first one receives the initial
# event:
#
#   event -> filter1 -> event1 -> filter2 ->event2 ...
#
# The supported processors are drop_fields, drop_event, include_fields,
# decode_json_fields, and add_cloud_metadata.
#
# For example, you can use the following processors to keep the fields that
# contain CPU load percentages, but remove the fields that contain CPU ticks
# values:
#
#processors:
#- include_fields:
#    fields: ["cpu"]
#- drop_fields:
#    fields: ["cpu.user", "cpu.system"]
#
# The following example drops the events that have the HTTP response code 200:
#
#processors:
#- drop_event:
#    when:
#       equals:
#           http.code: 200
#
# The following example renames the field a to b:
#
#processors:
#- rename:
#    fields:
#       - from: "a"
#         to: "b"
#
# The following example tokenizes the string into fields:
#
#processors:
#- dissect:
#    tokenizer: "%{key1} - %{key2}"
#    field: "message"
#    target_prefix: "dissect"
#
# The following example enriches each event with metadata from the cloud
# provider about the host machine. It works on EC2, GCE, DigitalOcean,
# Tencent Cloud, and Alibaba Cloud.
#
#processors:
#- add_cloud_metadata: ~
#
# The following example enriches each event with the machine's local time zone
# offset from UTC.
#
#processors:
#- add_locale:
#    format: offset
#
# The following example enriches each event with docker metadata, it matches
# given fields to an existing container id and adds info from that container:
#
#processors:
#- add_docker_metadata:
#    host: "unix:///var/run/docker.sock"
#    match_fields: ["system.process.cgroup.id"]
#    match_pids: ["process.pid", "process.ppid"]
#    match_source: true
#    match_source_index: 4
#    match_short_id: false
#    cleanup_timeout: 60
#    labels.dedot: false
#    # To connect to Docker over TLS you must specify a client and CA certificate.
#    #ssl:
#    #  certificate_authority: "/etc/pki/root/ca.pem"
#    #  certificate:           "/etc/pki/client/cert.pem"
#    #  key:                   "/etc/pki/client/cert.key"
#
# The following example enriches each event with docker metadata, it matches
# container id from log path available in `source` field (by default it expects
# it to be /var/lib/docker/containers/*/*.log).
#
#processors:
#- add_docker_metadata: ~
#
# The following example enriches each event with host metadata.
#
#processors:
#- add_host_metadata:
#   netinfo.enabled: false
#
# The following example enriches each event with process metadata using
# process IDs included in the event.
#
#processors:
#- add_process_metadata:
#    match_pids: ["system.process.ppid"]
#    target: system.process.parent
#
# The following example decodes fields containing JSON strings
# and replaces the strings with valid JSON objects.
#
#processors:
#- decode_json_fields:
#    fields: ["field1", "field2", ...]
#    process_array: false
#    max_depth: 1
#    target: ""
#    overwrite_keys: false
#============================= Elastic Cloud ==================================
# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs ======================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output --------------------------------
#https://www.elastic.co/guide/en/beats/filebeat/6.8/elasticsearch-output.html
#When you specify Elasticsearch for the output, Filebeat sends the transactions directly to Elasticsearch by using the Elasticsearch HTTP API.
output.elasticsearch:
  # Boolean flag to enable or disable the output module. The default is true.
  #enabled: true
  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify and additional path, the scheme is required: http://localhost:9200/path
  # Load balancing: list of Elasticsearch nodes. Events are sent to these nodes in round-robin order; if one node becomes unreachable, events are automatically sent to the next one. Each node can be given as a URL or as IP:PORT; if the port is omitted, 9200 is used (see the commented example right after hosts below).
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  hosts: ["localhost:9200"]
  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false
  #ilm.rollover_alias: "filebeat"
  #ilm.pattern: "{now/d}-000001"
  # Set gzip compression level.
  # gzip compression level. Setting it to 0 disables compression. The level must be in the range 1 (best speed) to 9 (best compression). Higher levels reduce network usage but increase CPU usage. The default is 0.
  #compression_level: 0
  # Configure escaping of HTML symbols in strings. Set to false to disable escaping. The default is true.
  #escape_html: true
  # Optional protocol and basic auth credentials (for an Elasticsearch cluster that requires authentication).
  #The name of the protocol Elasticsearch is reachable on. The options are: http or https. The default is http. However, if you specify a URL for hosts, the value of protocol is overridden by whatever scheme you specify in the URL. 
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
  # Dictionary of HTTP parameters to pass within the URL with index operations.
  #parameters:
    #param1: value1
    #param2: value2
  # Number of workers per Elasticsearch host.
  #The number of workers per configured host publishing events to Elasticsearch. This is best used with load balancing mode enabled. Example: if you have 2 hosts and 3 workers, in total 6 workers are started (3 for each host). The default value is 1.
  #worker: 1
  # Optional index name. The index to which events are written. The default is "filebeat" plus date
  # and generates [filebeat-]YYYY.MM.DD keys.
  # In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly. If you use the pre-built Kibana dashboards, you also need to set the setup.dashboards.index option.
  #index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
  # Dynamic index example: you can set the index dynamically by using a format string to access any event field. For example, index: "%{[fields.log_type]}-%{[beat.version]}-%{+yyyy.MM.dd}" uses a custom field, fields.log_type, to set the index. With this configuration, all events with log_type: normal are sent to an index named normal-6.8.1-2019-06-26, and all events with log_type: critical are sent to critical-6.8.1-2019-06-26 (shown as a commented example a couple of lines below).
#recommend including beat.version in the name to avoid mapping issues when you upgrade.
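  # -=-=-= Illustrative example (not part of the original file): route events to per-type indices
  # via a custom field; fields.log_type is the hypothetical field from the note above. Remember to
  # update setup.template.name / setup.template.pattern (and setup.dashboards.index) accordingly.
  #index: "%{[fields.log_type]}-%{[beat.version]}-%{+yyyy.MM.dd}"
  # -=-=-=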
  # Optional ingest node pipeline. By default no pipeline will be used.
  #pipeline: ""
  # Optional HTTP path
# An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where Elasticsearch listens behind an HTTP reverse proxy that exports the API under a custom prefix.
  #path: "/elasticsearch"
  # Custom HTTP headers to add to each request.Custom HTTP headers to add to each request created by the Elasticsearch output.
  #headers:
  #  X-My-Header: Contents of the header
  # Proxy server URL:The URL of the proxy to use when connecting to the Elasticsearch servers. The value may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed. If a value is not specified through the configuration file then proxy environment variables are used. See the Go documentation for more information about the environment variables.
  #proxy_url: http://proxy:3128
  # The number of times a particular Elasticsearch index operation is attempted. If
  # the indexing operation doesn't succeed after this many retries, the events are
  # dropped. The default is 3.
  #max_retries: 3
  # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
  # The default is 50.
  #bulk_max_size: 50
  # The number of seconds to wait before trying to reconnect to Elasticsearch
  # after a network error. After waiting backoff.init seconds, the Beat
  # tries to reconnect. If the attempt fails, the backoff timer is increased
  # exponentially up to backoff.max. After a successful connection, the backoff
  # timer is reset. The default is 1s.
  #backoff.init: 1s
  # The maximum number of seconds to wait before attempting to connect to
  # Elasticsearch after a network error. The default is 60s.
  #backoff.max: 60s
  # Configure HTTP request timeout before failing a request to Elasticsearch.
  #timeout: 90
  # Use SSL settings for HTTPS.
  #ssl.enabled: true
  # Configure SSL verification mode. If `none` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL-based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `full`.
  #ssl.verification_mode: full
  # List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
  # 1.2 are enabled.
  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"
  # Optional passphrase for decrypting the certificate key.
  #ssl.key_passphrase: ''
  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []
  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []
  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never

#----------------------------- Logstash output ---------------------------------
#output.logstash:
  # Boolean flag to enable or disable the output module.
  #enabled: true
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  # Number of workers per Logstash host.
  #worker: 1
  # Set gzip compression level.
  #compression_level: 3
  # Configure escaping HTML symbols in strings.
  #escape_html: true
  # Optional maximum time to live for a connection to Logstash, after which the
  # connection will be re-established.  A value of `0s` (the default) will
  # disable this feature.
  #
  # Not yet supported for async connections (i.e. with the "pipelining" option set)
  #ttl: 30s
  # Optionally load-balance events between Logstash hosts. Default is false.
  #loadbalance: false
  # Number of batches to be sent asynchronously to Logstash while processing
  # new batches.
  #pipelining: 2
  # If enabled only a subset of events in a batch of events is transferred per
  # transaction.  The number of events to be sent increases up to `bulk_max_size`
  # if no error is encountered.
  #slow_start: false
  # The number of seconds to wait before trying to reconnect to Logstash
  # after a network error. After waiting backoff.init seconds, the Beat
  # tries to reconnect. If the attempt fails, the backoff timer is increased
  # exponentially up to backoff.max. After a successful connection, the backoff
  # timer is reset. The default is 1s.
  #backoff.init: 1s
  # The maximum number of seconds to wait before attempting to connect to
  # Logstash after a network error. The default is 60s.
  #backoff.max: 60s
  # Optional index name. The default index name is set to filebeat
  # in all lowercase.
  #index: 'filebeat'
  # SOCKS5 proxy server URL
  #proxy_url: socks5://user:password@socks5-server:2233
  # Resolve names locally when using a proxy server. Defaults to false.
  #proxy_use_local_resolver: false
  # Enable SSL support. SSL is automatically enabled if any SSL setting is set.
  #ssl.enabled: true
  # Configure SSL verification mode. If `none` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `full`.
  #ssl.verification_mode: full
  # List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
  # 1.2 are enabled.
  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
  # Optional SSL configuration options. SSL is off by default.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"
  # Optional passphrase for decrypting the Certificate Key.
  #ssl.key_passphrase: ''
  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []
  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []
  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never
  # The number of times to retry publishing an event after a publishing failure.
  # After the specified number of retries, the events are typically dropped.
  # Some Beats, such as Filebeat and Winlogbeat, ignore the max_retries setting
  # and retry until all events are published.  Set max_retries to a value less
  # than 0 to retry until all events are published. The default is 3.
  #max_retries: 3
  # The maximum number of events to bulk in a single Logstash request. The
  # default is 2048.
  #bulk_max_size: 2048
  # The number of seconds to wait for responses from the Logstash server before
  # timing out. The default is 30s.
  #timeout: 30s
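# Example (annotation): a minimal sketch of a load-balanced Logstash output over TLS,
# assuming two Logstash hosts and a private CA; hostnames and the CA path are placeholders.
#output.logstash:
#  hosts: ["logstash-a.example.internal:5044", "logstash-b.example.internal:5044"]
#  loadbalance: true
#  worker: 2
#  ssl.enabled: true
#  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]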
#------------------------------- Kafka output ----------------------------------
#output.kafka:
  # Boolean flag to enable or disable the output module.
  #enabled: true
  # The list of Kafka broker addresses from which to fetch the cluster metadata.
  # The cluster metadata contain the actual Kafka brokers events are published
  # to.
  #hosts: ["localhost:9092"]
  # The Kafka topic used for produced events. The setting can be a format string
  # using any event field. To set the topic from document type use `%{[type]}`.
  #topic: beats
  # The Kafka event key setting. Use format string to create a unique event key.
  # By default no event key will be generated.
  #key: ''
  # The Kafka event partitioning strategy. Default hashing strategy is `hash`
  # using the `output.kafka.key` setting or randomly distributes events if
  # `output.kafka.key` is not configured.
  #partition.hash:
    # If enabled, events will only be published to partitions with reachable
    # leaders. Default is false.
    #reachable_only: false
    # Configure alternative event field names used to compute the hash value.
    # If empty `output.kafka.key` setting will be used.
    # Default value is empty list.
    #hash: []
  # Authentication details. Password is required if username is set.
  #username: ''
  #password: ''
  # Kafka version filebeat is assumed to run against. Defaults to "1.0.0".
  #version: '1.0.0'
  # Configure JSON encoding
  #codec.json:
    # Pretty-print JSON event
    #pretty: false
    # Configure escaping HTML symbols in strings.
    #escape_html: true
  # Metadata update configuration. Metadata contains leader information
  # used to decide which broker to use when publishing.
  #metadata:
    # Max metadata request retry attempts when cluster is in middle of leader
    # election. Defaults to 3 retries.
    #retry.max: 3
    # Wait time between retries during leader elections. Default is 250ms.
    #retry.backoff: 250ms
    # Refresh metadata interval. Defaults to every 10 minutes.
    #refresh_frequency: 10m
  # The number of concurrent load-balanced Kafka output workers.
  #worker: 1
  # The number of times to retry publishing an event after a publishing failure.
  # After the specified number of retries, events are typically dropped.
  # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
  # all events are published.  Set max_retries to a value less than 0 to retry
  # until all events are published. The default is 3.
  #max_retries: 3
  # The maximum number of events to bulk in a single Kafka request. The default
  # is 2048.
  #bulk_max_size: 2048
  # The number of seconds to wait for responses from the Kafka brokers before
  # timing out. The default is 30s.
  #timeout: 30s
  # The maximum duration a broker will wait for the number of required ACKs. The
  # default is 10s.
  #broker_timeout: 10s
  # The number of messages buffered for each Kafka broker. The default is 256.
  #channel_buffer_size: 256
  # The keep-alive period for an active network connection. If 0s, keep-alives
  # are disabled. The default is 0 seconds.
  #keep_alive: 0
  # Sets the output compression codec. Must be one of none, snappy and gzip. The
  # default is gzip.
  #compression: gzip
  # Set the compression level. Currently only gzip provides a compression level
  # between 0 and 9. The default value is chosen by the compression algorithm.
  #compression_level: 4
  # The maximum permitted size of JSON-encoded messages. Bigger messages will be
  # dropped. The default value is 1000000 (bytes). This value should be equal to
  # or less than the broker's message.max.bytes.
  #max_message_bytes: 1000000
  # The ACK reliability level required from broker. 0=no response, 1=wait for
  # local commit, -1=wait for all replicas to commit. The default is 1.  Note:
  # If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
  # on error.
  #required_acks: 1
  # The configurable ClientID used for logging, debugging, and auditing
  # purposes.  The default is "beats".
  #client_id: beats
  # Enable SSL support. SSL is automatically enabled if any SSL setting is set.
  #ssl.enabled: true
  # Optional SSL configuration options. SSL is off by default.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Configure SSL verification mode. If `none` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `full`.
  #ssl.verification_mode: full
  # List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
  # 1.2 are enabled.
  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
  # Optional passphrase for decrypting the Certificate Key.
  #ssl.key_passphrase: ''
  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []
  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []
  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never
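# Example (annotation): a minimal sketch of a Kafka output that picks the topic from an
# event field; broker addresses and the field name are placeholders, and fields.service
# would have to be added on the inputs for the format string to resolve.
#output.kafka:
#  hosts: ["kafka01.example.internal:9092", "kafka02.example.internal:9092"]
#  topic: 'logs-%{[fields.service]}'
#  partition.round_robin:
#    reachable_only: false
#  required_acks: 1
#  compression: gzip
#  max_message_bytes: 1000000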
#------------------------------- Redis output ----------------------------------
#output.redis:
  # Boolean flag to enable or disable the output module.
  #enabled: true
  # Configure JSON encoding
  #codec.json:
    # Pretty print json event
    #pretty: false
    # Configure escaping HTML symbols in strings.
    #escape_html: true
  # The list of Redis servers to connect to. If load-balancing is enabled, the
  # events are distributed to the servers in the list. If one server becomes
  # unreachable, the events are distributed to the reachable servers only.
  #hosts: ["localhost:6379"]
  # The name of the Redis list or channel the events are published to. The
  # default is filebeat.
  #key: filebeat
  # The password to authenticate to Redis with. The default is no authentication.
  #password:
  # The Redis database number where the events are published. The default is 0.
  #db: 0
  # The Redis data type to use for publishing events. If the data type is list,
  # the Redis RPUSH command is used. If the data type is channel, the Redis
  # PUBLISH command is used. The default value is list.
  #datatype: list
  # The number of workers to use for each host configured to publish events to
  # Redis. Use this setting along with the loadbalance option. For example, if
  # you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
  # host).
  #worker: 1
  # If set to true and multiple hosts or workers are configured, the output
  # plugin load balances published events onto all Redis hosts. If set to false,
  # the output plugin sends all events to only one host (determined at random)
  # and will switch to another host if the currently selected one becomes
  # unreachable. The default value is true.
  #loadbalance: true
  # The Redis connection timeout in seconds. The default is 5 seconds.
  #timeout: 5s
  # The number of times to retry publishing an event after a publishing failure.
  # After the specified number of retries, the events are typically dropped.
  # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
  # all events are published. Set max_retries to a value less than 0 to retry
  # until all events are published. The default is 3.
  #max_retries: 3
  # The number of seconds to wait before trying to reconnect to Redis
  # after a network error. After waiting backoff.init seconds, the Beat
  # tries to reconnect. If the attempt fails, the backoff timer is increased
  # exponentially up to backoff.max. After a successful connection, the backoff
  # timer is reset. The default is 1s.
  #backoff.init: 1s
  # The maximum number of seconds to wait before attempting to connect to
  # Redis after a network error. The default is 60s.
  #backoff.max: 60s
  # The maximum number of events to bulk in a single Redis request or pipeline.
  # The default is 2048.
  #bulk_max_size: 2048
  # The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
  # value must be a URL with a scheme of socks5://.
  #proxy_url:
  # This option determines whether Redis hostnames are resolved locally when
  # using a proxy. The default value is false, which means that name resolution
  # occurs on the proxy server.
  #proxy_use_local_resolver: false
  # Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
  #ssl.enabled: true
  # Configure SSL verification mode. If `none` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `full`.
  #ssl.verification_mode: full
  # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
  # 1.2 are enabled.
  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
  # Optional SSL configuration options. SSL is off by default.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
  # Optional passphrase for decrypting the Certificate Key.
  #ssl.key_passphrase: ''
  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []
  # Configure curve types for ECDHE based cipher suites
  #ssl.curve_types: []
  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never
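# Example (annotation): a minimal sketch of a Redis output publishing to a list that a
# downstream consumer (e.g. a Logstash redis input) could read; host, password, and key
# are placeholders.
#output.redis:
#  hosts: ["redis.example.internal:6379"]
#  password: "${REDIS_PWD}"
#  key: "filebeat"
#  db: 0
#  datatype: list
#  timeout: 5s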
#------------------------------- File output -----------------------------------
#output.file:
  # Boolean flag to enable or disable the output module.
  #enabled: true
  # Configure JSON encoding
  #codec.json:
    # Pretty-print JSON event
    #pretty: false
    # Configure escaping HTML symbols in strings.
    #escape_html: true
  # Path to the directory where to save the generated files. The option is
  # mandatory.
  #path: "/tmp/filebeat"
  # Name of the generated files. The default is `filebeat` and it generates
  # files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
  #filename: filebeat
  # Maximum size in kilobytes of each file. When this size is reached, and on
  # every filebeat restart, the files are rotated. The default value is 10240
  # kB.
  #rotate_every_kb: 10000
  # Maximum number of files under path. When this number of files is reached,
  # the oldest file is deleted and the rest are shifted from last to first. The
  # default is 7 files.
  #number_of_files: 7
  # Permissions to use for file creation. The default is 0600.
  #permissions: 0600
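# Example (annotation): a small file-output sketch, handy for inspecting the exact JSON
# events Filebeat would ship before pointing it at a real backend; the path is a placeholder.
#output.file:
#  path: "/tmp/filebeat"
#  filename: filebeat
#  rotate_every_kb: 10240
#  number_of_files: 7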

#----------------------------- Console output ---------------------------------
#output.console:
  # Boolean flag to enable or disable the output module.
  #enabled: true
  # Configure JSON encoding
  #codec.json:
    # Pretty-print JSON event
    #pretty: false
    # Configure escaping HTML symbols in strings.
    #escape_html: true
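# Example (annotation): console output with pretty-printed JSON is convenient for a quick
# foreground test such as `filebeat -e -c filebeat.yml`; remember that only one output can
# be enabled at a time.
#output.console:
#  pretty: true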
#================================= Paths ======================================
# The home path for the filebeat installation. This is the default base path
# for all other path settings and for miscellaneous files that come with the
# distribution (for example, the sample dashboards).
# If not set by a CLI flag or in the configuration file, the default for the
# home path is the location of the binary.
#path.home:
# The configuration path for the filebeat installation. This is the default
# base path for configuration files, including the main YAML configuration file
# and the Elasticsearch template file. If not set by a CLI flag or in the
# configuration file, the default for the configuration path is the home path.
#path.config: ${path.home}
# The data path for the filebeat installation. This is the default base path
# for all the files in which filebeat needs to store its data. If not set by a
# CLI flag or in the configuration file, the default for the data path is a data
# subdirectory inside the home path.
#path.data: ${path.home}/data
# The logs path for a filebeat installation. This is the default location for
# the Beat's log files. If not set by a CLI flag or in the configuration file,
# the default for the logs path is a logs subdirectory inside the home path.
#path.logs: ${path.home}/logs
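# Example (annotation): on package (RPM/DEB) installations the service unit usually
# overrides these paths on the command line; setting them in the file works the same way.
# The values below are only an illustration.
#path.data: /var/lib/filebeat
#path.logs: /var/log/filebeat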
#================================ Keystore ==========================================
# Location of the Keystore containing the keys and their sensitive values.
#keystore.path: "${path.config}/beats.keystore"
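# Example (annotation): a sketch of keeping secrets in the keystore instead of the config
# file; the key name ES_PWD is arbitrary. Create and fill it with
# `filebeat keystore create` and `filebeat keystore add ES_PWD`, then reference it
# wherever a plain string is accepted, for example:
#output.elasticsearch.password: "${ES_PWD}"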
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false
# The directory from where to read the dashboards. The default is the `kibana`
# folder in the home path.
#setup.dashboards.directory: ${path.home}/kibana
# The URL from where to download the dashboards archive. It is used instead of
# the directory if it has a value.
#setup.dashboards.url:
# The file archive (zip file) from where to read the dashboards. It is used instead
# of the directory when it has a value.
#setup.dashboards.file:
# In case the archive contains the dashboards from multiple Beats, this lets you
# select which one to load. You can load all the dashboards in the archive by
# setting this to the empty string.
#setup.dashboards.beat: filebeat
# The name of the Kibana index to use for setting the configuration. Default is ".kibana"
#setup.dashboards.kibana_index: .kibana
# The Elasticsearch index name. This overwrites the index name defined in the
# dashboards and index pattern. Example: testbeat-*
#setup.dashboards.index:
# Always use the Kibana API for loading the dashboards instead of autodetecting
# how to install the dashboards by first querying Elasticsearch.
#setup.dashboards.always_kibana: false
# If true and Kibana is not reachable at the time when dashboards are loaded,
# it will retry to reconnect to Kibana instead of exiting with an error.
#setup.dashboards.retry.enabled: false
# Duration interval between Kibana connection retries.
#setup.dashboards.retry.interval: 1s
# Maximum number of retries before exiting with an error, 0 for unlimited retrying.
#setup.dashboards.retry.maximum: 0
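# Example (annotation): the sample dashboards are usually loaded once with
# `filebeat setup --dashboards` (which needs the Kibana endpoint configured further below);
# enabling the option here instead loads them on every startup.
#setup.dashboards.enabled: true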

#============================== Template =====================================
# A template is used to set the mapping in Elasticsearch
# By default template loading is enabled and the template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones.
# Set to false to disable template loading.
#setup.template.enabled: true
# Template name. By default the template name is "filebeat-%{[beat.version]}"
# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
#setup.template.name: "filebeat-%{[beat.version]}"
# Template pattern. By default the template pattern is "filebeat-%{[beat.version]}-*" to apply to the default index settings.
# The pattern combines the beat name and version, and the trailing -* matches all daily indices.
# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
#setup.template.pattern: "filebeat-%{[beat.version]}-*"
# Path to fields.yml file to generate the template
#setup.template.fields: "${path.config}/fields.yml"
# A list of fields to be added to the template and Kibana index pattern. Also
# specify setup.template.overwrite: true to overwrite the existing template.
# This setting is experimental.
#setup.template.append_fields:
#- name: field_name
#  type: field_type
# Enable JSON template loading. If this is enabled, the fields.yml is ignored.
#setup.template.json.enabled: false
# Path to the JSON template file
#setup.template.json.path: "${path.config}/template.json"
# Name under which the template is stored in Elasticsearch
#setup.template.json.name: ""
# Overwrite existing template
#setup.template.overwrite: false
# Elasticsearch template settings
setup.template.settings:
  # A dictionary of settings to place into the settings.index dictionary
  # of the Elasticsearch template. For more details, please check
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
  #index:
    #number_of_shards: 1
    #codec: best_compression
    #number_of_routing_shards: 30
  # A dictionary of settings for the _source field. For more details, please check
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
  #_source:
    #enabled: false
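# Example (annotation): a sketch of switching to a custom index name; the template name and
# pattern have to be changed together with the output index, and "app-logs" is only a
# placeholder.
#setup.template.name: "app-logs"
#setup.template.pattern: "app-logs-*"
#setup.template.overwrite: true
#output.elasticsearch.index: "app-logs-%{[beat.version]}-%{+yyyy.MM.dd}"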
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
  # Optional HTTP path
  #path: ""
  # Use SSL settings for HTTPS. Default is true.
  #ssl.enabled: true
  # Configure SSL verification mode. If `none` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `full`.
  #ssl.verification_mode: full
  # List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
  # 1.2 are enabled.
  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
  # SSL configuration. The default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"
  # Optional passphrase for decrypting the certificate key.
  #ssl.key_passphrase: ''
  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []
  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []
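# Example (annotation): when Kibana sits behind a reverse proxy under a sub-path, the
# scheme has to be included as noted above; the URL is a placeholder.
#setup.kibana:
#  host: "https://proxy.example.internal:443/kibana"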

#================================ Logging ======================================
# There are four options for the log output: file, stderr, syslog, eventlog
# The file output is the default.
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: info
# Enable debug output for selected components. To enable all selectors use ["*"]
# Other available selectors are "beat", "publish", "service"
# Multiple selectors can be chained.
#logging.selectors: [ ]
# Send all logging output to syslog. The default is false.
#logging.to_syslog: false
# Send all logging output to Windows Event Logs. The default is false.
#logging.to_eventlog: false
# If enabled, filebeat periodically logs its internal metrics that have changed
# in the last period. For each metric that changed, the delta from the value at
# the beginning of the period is logged. Also, the total values for
# all non-zero internal metrics are logged on shutdown. The default is true.
#logging.metrics.enabled: true
# The period after which to log the internal metrics. The default is 30s.
#logging.metrics.period: 30s
# Logging to rotating files. Set logging.to_files to false to disable logging to
# files.
logging.to_files: true
logging.files:
  # Configure the path where the logs are written. The default is the logs directory
  # under the home path (the binary location).
  #path: /var/log/filebeat
  # The name of the files where the logs are written to.
  #name: filebeat
  # Configure log file size limit. If limit is reached, log file will be
  # automatically rotated
  #rotateeverybytes: 10485760 # = 10MB
  # Number of rotated log files to keep. Oldest files will be deleted first.
  #keepfiles: 7
  # The permissions mask to apply when rotating log files. The default value is 0600.
  # Must be a valid Unix-style file permissions mask expressed in octal notation.
  #permissions: 0600
  # Enable log file rotation on time intervals in addition to size-based rotation.
  # Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
  # are boundary-aligned with minutes, hours, days, weeks, months, and years as
  # reported by the local system clock. All other intervals are calculated from the
  # Unix epoch. Defaults to disabled.
  #interval: 0
# Set to true to log messages in JSON format.
#logging.json: false
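# Example (annotation): a debugging-oriented logging sketch; the selector and sizes are only
# an illustration, and running with `filebeat -e` additionally prints the log to stderr.
#logging.level: debug
#logging.selectors: ["publish"]
#logging.files:
#  path: /var/log/filebeat
#  name: filebeat
#  keepfiles: 7
#  rotateeverybytes: 10485760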

#============================== Xpack Monitoring =====================================
# filebeat can export internal metrics to a central Elasticsearch monitoring cluster.
# This requires xpack monitoring to be enabled in Elasticsearch.
# The reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line, and leave the rest commented out.
#xpack.monitoring.elasticsearch:
  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  #hosts: ["localhost:9200"]
  # Set gzip compression level.
  #compression_level: 0
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "beats_system"
  #password: "changeme"
  # Dictionary of HTTP parameters to pass within the URL with index operations.
  #parameters:
    #param1: value1
    #param2: value2
  # Custom HTTP headers to add to each request
  #headers:
  #  X-My-Header: Contents of the header
  # Proxy server url
  #proxy_url: http://proxy:3128
  # The number of times a particular Elasticsearch index operation is attempted. If
  # the indexing operation doesn't succeed after this many retries, the events are
  # dropped. The default is 3.
  #max_retries: 3
  # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
  # The default is 50.
  #bulk_max_size: 50
  # The number of seconds to wait before trying to reconnect to Elasticsearch
  # after a network error. After waiting backoff.init seconds, the Beat
  # tries to reconnect. If the attempt fails, the backoff timer is increased
  # exponentially up to backoff.max. After a successful connection, the backoff
  # timer is reset. The default is 1s.
  #backoff.init: 1s
  # The maximum number of seconds to wait before attempting to connect to
  # Elasticsearch after a network error. The default is 60s.
  #backoff.max: 60s
  # Configure HTTP request timeout before failing a request to Elasticsearch.
  #timeout: 90
  # Use SSL settings for HTTPS.
  #ssl.enabled: true
  # Configure SSL verification mode. If `none` is configured, all server hosts
  # and certificates will be accepted. In this mode, SSL based connections are
  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
  # `full`.
  #ssl.verification_mode: full
  # List of supported/valid TLS versions. By default all TLS versions from 1.0 up to
  # 1.2 are enabled.
  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
  # SSL configuration. The default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"
  # Optional passphrase for decrypting the certificate key.
  #ssl.key_passphrase: ''
  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []
  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []
  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never
  #metrics.period: 10s
  #state.period: 1m
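# Example (annotation): if the Elasticsearch output above is already configured, enabling
# monitoring can be as small as this single flag, since the connection settings are
# inherited from that output; a separate monitoring cluster would need its own hosts block.
#xpack.monitoring.enabled: true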
#================================ HTTP Endpoint ======================================
# Each beat can expose internal metrics through a HTTP endpoint. For security
# reasons the endpoint is disabled by default. This feature is currently experimental.
# Stats can be accessed through http://localhost:5066/stats. For pretty JSON output
# append ?pretty to the URL.
# Defines if the HTTP endpoint is enabled.
#http.enabled: false
# The HTTP endpoint will bind to this hostname or IP address. It is recommended to use only localhost.
#http.host: localhost
# Port on which the HTTP endpoint will bind. Default is 5066.
#http.port: 5066
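# Example (annotation): enabling the endpoint and querying it locally; host and port are
# the defaults mentioned above.
#http.enabled: true
#http.host: localhost
#http.port: 5066
# Then, for example: curl 'http://localhost:5066/stats?pretty'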
#============================= Process Security ================================
# Enable or disable seccomp system call filtering on Linux. Default is enabled.
#seccomp.enabled: true


This concludes this post.

If you have any questions, please contact me by email at leafming@foxmail.com. Thanks!