Building a log monitoring environment with fluentd + elasticsearch + kibana

I've needed to do log monitoring for a while now, and when I built a new environment I wrote up installation notes, so I figured I might as well post them here.

Incidentally, at a thesis presentation session I attended the other day there was a topic on traffic visualization(?); I'm quite curious what kind of environment they visualized.

Now, on to the main content.

how to install

1.Server: fluentd + elasticsearch + kibana

1.1 fluentd setup (Quickstart Guide | Fluentd)

Step1: install
# Amazon Linux 1
$ curl -L https://toolbelt.treasuredata.com/sh/install-amazon1-td-agent3.sh | sh
# Amazon Linux 2
$ curl -L https://toolbelt.treasuredata.com/sh/install-amazon2-td-agent3.sh | sh
# RHEL/CentOS
$ curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh
Step2: Launch Daemon
  • systemd
$ sudo systemctl start td-agent.service
$ sudo systemctl status td-agent.service
  • init.d
$ /etc/init.d/td-agent start 
$ /etc/init.d/td-agent status
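
If you keep the stock td-agent.conf for now, it ships with an HTTP input on port 8888 and a debug.** match, so a quick sanity check (per the Fluentd quickstart) looks like this:

$ curl -X POST -d 'json={"json":"message"}' http://localhost:8888/debug.test
$ tail -n 1 /var/log/td-agent/td-agent.log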

1.2 elasticsearch setup (Install Elasticsearch with RPM | Elasticsearch Reference [6.2] | Elastic)

Step1: Java8 install

Elasticsearch needs Java 8.

$ sudo yum install java-1.8.0-openjdk-devel
$ sudo alternatives --config java   # select java-1.8.0
$ java -version
Step2: elasticsearch install

Download and install the public signing key

$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
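
The key import alone is not enough for yum to find the package; per the Elastic 6.x RPM install guide, also create /etc/yum.repos.d/elasticsearch.repo:

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
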
$ sudo yum install elasticsearch
Step3: Launch Daemon
  • systemd
$ sudo systemctl daemon-reload

$ sudo systemctl start elasticsearch.service
$ sudo systemctl enable elasticsearch.service
$ sudo systemctl status elasticsearch.service
  • init.d
$ /etc/init.d/elasticsearch start 
$ /etc/init.d/elasticsearch status
or
$ sudo -i service elasticsearch start
$ sudo -i service elasticsearch status

$ sudo chkconfig --add elasticsearch
Step 4: X-Pack install (Installing X-Pack in Elasticsearch | Elasticsearch Reference [6.2] | Elastic)
$ cd /usr/share/elasticsearch
$ sudo bin/elasticsearch-plugin install x-pack
Step 5: Password Setup for X-Pack (Getting Started with Security | X-Pack for the Elastic Stack [6.2] | Elastic)
$ cd /usr/share/elasticsearch
$ bin/x-pack/setup-passwords auto
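
With the passwords generated, a quick health check confirms the node is up (substitute the password that setup-passwords printed):

$ curl -u elastic:<elasticpassword> 'localhost:9200/_cluster/health?pretty'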
Directory layout of RPM
  • home: Elasticsearch home directory ($ES_HOME). Location: /usr/share/elasticsearch
  • bin: binary scripts, including elasticsearch to start a node and elasticsearch-plugin to install plugins. Location: /usr/share/elasticsearch/bin
  • conf: configuration files, including elasticsearch.yml. Location: /etc/elasticsearch (setting: ES_PATH_CONF)
  • conf: environment variables, including heap size and file descriptors. Location: /etc/sysconfig/elasticsearch
  • data: the data files of each index / shard allocated on the node; can hold multiple locations. Location: /var/lib/elasticsearch (setting: path.data)
  • logs: log file location. Location: /var/log/elasticsearch (setting: path.logs)
  • plugins: plugin file location; each plugin is contained in a subdirectory. Location: /usr/share/elasticsearch/plugins
  • repo: shared file system repository locations; can hold multiple locations. A file system repository can be placed into any subdirectory of any directory specified here. Location: not configured (setting: path.repo)

1.3 kibana setup (Install Kibana with RPM | Kibana User Guide [6.2] | Elastic)

Step1: kibana install

Download and install the public signing key (the same key as for Elasticsearch; skip if already imported):

$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
$ sudo yum install kibana
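
Note that yum can only find the kibana package if the Elastic repository is defined; per the Elastic 6.x docs, /etc/yum.repos.d/kibana.repo looks like this:

[kibana-6.x]
name=Kibana repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md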
Step2: Launch Daemon

Not sure whether your system uses systemd or init.d? Check with:

$ ps -p 1
  • systemd
$ sudo systemctl daemon-reload

$ sudo systemctl start kibana.service
$ sudo systemctl enable kibana.service
$ sudo systemctl status kibana.service
  • init.d
$ /etc/init.d/kibana start 
$ /etc/init.d/kibana status
or
$ sudo -i service kibana start
$ sudo -i service kibana status

$ sudo chkconfig --add kibana
Step 3: X-Pack install (Installing X-Pack in Kibana | Kibana User Guide [6.2] | Elastic)
$ cd /usr/share/kibana
$ sudo bin/kibana-plugin install x-pack
Step 4: kibana.yml edit
elasticsearch.username: "kibana"
elasticsearch.password: "<kibana password>"
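
After editing, restart Kibana so the credentials take effect:

$ sudo systemctl restart kibana.service
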
Directory layout of RPM
  • home: Kibana home directory ($KIBANA_HOME). Location: /usr/share/kibana
  • bin: binary scripts, including kibana to start the Kibana server and kibana-plugin to install plugins. Location: /usr/share/kibana/bin
  • config: configuration files, including kibana.yml. Location: /etc/kibana
  • data: the data files written to disk by Kibana and its plugins. Location: /var/lib/kibana (setting: path.data)
  • optimize: transpiled source code; certain administrative actions (e.g. plugin install) result in the source code being retranspiled on the fly. Location: /usr/share/kibana/optimize
  • plugins: plugin file location; each plugin is contained in a subdirectory. Location: /usr/share/kibana/plugins

2.elasticsearch settings

Create index templates for Elasticsearch. Note that in 6.x the registration endpoint is _template/<name>, the match key is index_patterns, and the string type has been replaced by text/keyword; the commands and JSON below reflect that.

$ vi es_rtx1200_template.json
$ curl -u elastic:<elasticpassword> -H "Content-Type: application/json" -XPUT 127.0.0.1:9200/_template/rtx1200 -d "`cat es_rtx1200_template.json`"
$ vi es_apache_template.json
$ curl -u elastic:<elasticpassword> -H "Content-Type: application/json" -XPUT 127.0.0.1:9200/_template/apache -d "`cat es_apache_template.json`"
$ vi es_nas_template.json
$ curl -u elastic:<elasticpassword> -H "Content-Type: application/json" -XPUT 127.0.0.1:9200/_template/nas -d "`cat es_nas_template.json`"
  • es_rtx1200_template.json
{
    "templete": "rtx1200-*",
    "mappings": {
        "_default_": {
            "dynamic_templates": [
                {
                    "string_template" : {
                        "match" : "*",
                        "mapping": {
                            "type": "string",
                            "fields": {
                                "full": {
                                    "type": "string",
                                    "index": "false"
                                }
                            }
                        },
                        "match_mapping_type": "string"
                    }
                }
            ],
            "properties": {
                "@timestamp": { "type": "date", "index": "false" },
                "geo_location": {"type" : "geo_point" }
            }
        }
    }
}
  • es_apache_template.json
{
    "templete": "*.apache-*”,
    "mappings": {
        "_default_": {
            "dynamic_templates": [
                {
                    "string_template" : {
                        "match" : "*",
                        "mapping": {
                            "type": "string",
                            "fields": {
                                "full": {
                                    "type": "string",
                                    "index": "false"
                                }
                            }
                        },
                        "match_mapping_type": "string"
                    }
                }
            ],
            "properties": {
                "@timestamp": { "type": "date", "index": "false" }
            }
        }
    }
}
  • es_nas_template.json
{
    "index_patterns": ["myhome-nas-*"],
    "mappings": {
        "_default_": {
            "dynamic_templates": [
                {
                    "string_template" : {
                        "match" : "*",
                        "mapping": {
                            "type": "string",
                            "fields": {
                                "full": {
                                    "type": "string",
                                    "index": "false"
                                }
                            }
                        },
                        "match_mapping_type": "string"
                    }
                }
            ],
            "properties": {
                "@timestamp": { "type": "date", "index": "false" }
            }
        }
    }
}
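
To confirm a template was registered:

$ curl -u elastic:<elasticpassword> '127.0.0.1:9200/_template/rtx1200?pretty'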

3.fluentd settings

3.1 fluentd-plugin install

$ td-agent-gem install fluent-plugin-rewrite-tag-filter
$ td-agent-gem install fluent-plugin-elasticsearch

$ sudo yum install geoip-devel   # fluent-plugin-geoip needs the GeoIP C library headers
$ td-agent-gem install fluent-plugin-geoip -v 0.8.0

$ td-agent-gem install fluent-plugin-multi-format-parser
$ td-agent-gem install fluent-plugin-parser
$ td-agent-gem install fluent-plugin-with-extra-fields-parser
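
To confirm the plugins installed cleanly:

$ td-agent-gem list | grep fluent-plugin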

3.2 edit td-agent.conf

td-agent.conf is located at /etc/td-agent/td-agent.conf.

  • index filter
1.rtx1200
# dynamic filter ==> rtx1200-inspect
# filter reject ==> rtx1200-reject
# console log in/out ==> rtx1200-console
# VPN access on/off ==> rtx1200-tunnel
# others ==> rtx1200-other
2.nas
# all ==> myhome-nas
3.apache
# access_log ==> <prefix>.apache-access
# error_log ==> <prefix>.apache-error
  • td-agent.conf
####
## Source descriptions:
##

## syslog
<source>
  @type tail
  tag raw.rtx1200
  format none
  path /var/log/syslog
  pos_file /var/log/td-agent/syslog.pos
</source>

# Note: in_forward has no tag parameter; the tag comes from the sending client.
# Each sending host therefore tags its records log.apache.<prefix>
# (242 / app / s / piro), one forwarding port per host.
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<source>
  @type forward
  port 24225
  bind 0.0.0.0
</source>

<source>
  @type forward
  port 24226
  bind 0.0.0.0
</source>

<source>
  @type forward
  port 24227
  bind 0.0.0.0
</source>

####
## Output descriptions:
##
<match raw.rtx1200.**>
  type parser
  format multi_format
  key_name message
  remove_prefix raw
  add_prefix parsed
  <pattern>
      format with_extra_fields
      base_format /\[INSPECT\]\s+(?<target>.+)\[(?<direction>.+)\]\[(?<filter_num>\d+)\]\s+(?<proto>.+)\s+(?<src_ip>.+):(?<src_port>.+)\s+>\s+(?<dest_ip>.+):(?<dest_port>.+)\s+\((?<time>.+)\)$/
      time_format '%Y/%m/%d %H:%M:%S'
      extra_fields { "log_type": "inspect" }
  </pattern>
  <pattern>
      format with_extra_fields
      base_format /(?<target>.+)\s+Rejected\s+at\s+(?<direction>.+)\s+filter:\s+(?<proto>.+)\s+(?<src_ip>.+):(?<src_port>.+)\s+>\s+(?<dest_ip>.+):(?<dest_port>.+)$/
      extra_fields { "log_type": "reject" }
  </pattern>
  <pattern>
      format with_extra_fields
      base_format /Logout\s+from\s+(?<proto>.+):\s+(?<ip>.+)$/
      extra_fields { "log_type": "console_logout" }
  </pattern>
  <pattern>
      format with_extra_fields
      base_format /Login\s+succeeded\s+for\s+(?<proto>.+):\s+(?<ip>.+)$/
      extra_fields { "log_type": "console_login" }
  </pattern>
  <pattern>
      format with_extra_fields
      base_format /\[(?<proto>.+)\]\s+(?<tunnel>.+)\s+connected\s+from\s+(?<src_ip>.+)$/
      extra_fields { "log_type": "tunnel_connect" }
  </pattern>
  <pattern>
      format with_extra_fields
      base_format /\[(?<proto>.+)\]\s+(?<tunnel>.+)\s+disconnect\s+tunnel\s+\d+\s+complete$/
      extra_fields { "log_type": "tunnel_disconnect" }
  </pattern>
   <pattern>
      format with_extra_fields
      base_format /PP\[.*\]\s+Call\s+detected\s+from\s+user\s+\W(?<user>.+)\W$/
      extra_fields { "log_type": "vpnuser" }
  </pattern>
  <pattern>
     format with_extra_fields
     base_format /(?<date>.+)\s(?<machine>eymovic-NAS)\sqlogd\[\d+\]:\s(?<log>.+):\sUsers:\s(?<Users>.+),\sSource\sIP:\s(?<src_ip>.+),\sComputer\sname:\s(?<computer_name>.+),\sConnection\stype:\s(?<connection_type>.+),\sAccessed\sresources:\s(?<accessed_resources>.+),\sAction:\s(?<action>.+)$/
     extra_fields { "log_type": "NAS" }
  </pattern>
  <pattern>
      format with_extra_fields
      base_format /(?<msg>.+)$/
      extra_fields { "log_type": "other" }
  </pattern>

</match>

<match parsed.rtx1200.**>
  type rewrite_tag_filter
  <rule>
  key     log_type
  pattern ^inspect$
  tag     rtx1200.inspect
  </rule>
  <rule>
  key     log_type
  pattern ^reject$
  tag     temp.rtx1200.reject
  </rule>
  <rule>
  key     log_type
  pattern ^console_(.+)$
  tag     rtx1200.console.$1
  </rule>
  <rule>
  key     log_type
  pattern ^tunnel_(.+)$
  tag     temp.rtx1200.tunnel.$1
  </rule>
  <rule>
  key     log_type
  pattern ^vpnuser$
  tag     rtx1200.vpnuser
  </rule>
  <rule>
  key     log_type
  pattern ^NAS$
  tag     myhome.nas
  </rule>
  <rule>
  key     log_type
  pattern ^other$
  tag     rtx1200.other
  </rule>
</match>

<match rtx1200.inspect.**>
  type elasticsearch
  logstash_format true
  logstash_prefix rtx1200-inspect
  include_tag_key true
  tag_key @log_name
  hosts localhost:9200
  buffer_type memory
  num_threads 1
  flush_interval 60
  retry_wait 1.0
  retry_limit 17
  user elastic
  password <elasticsearch password>
</match>

<match temp.rtx1200.reject.**>
  type  geoip
  geoip_lookup_key src_ip
  <record>
    geo_location  '{ "lat" : ${latitude["src_ip"]}, "lon" : ${longitude["src_ip"]} }'
    country_code  ${country_code["src_ip"]}
  </record>
  remove_tag_prefix temp.
  skip_adding_null_record  true
  flush_interval 1s
</match>
<match rtx1200.reject.**>
  type elasticsearch
  logstash_format true
  logstash_prefix rtx1200-reject
  include_tag_key true
  tag_key @log_name
  hosts localhost:9200
  buffer_type memory
  num_threads 1
  flush_interval 60
  retry_wait 1.0
  retry_limit 17
  user elastic
  password <elasticsearch password>
</match>

<match rtx1200.console.**>
  type elasticsearch
  logstash_format true
  logstash_prefix rtx1200-console
  include_tag_key true
  tag_key @log_name
  hosts localhost:9200
  buffer_type memory
  num_threads 1
  flush_interval 60
  retry_wait 1.0
  retry_limit 17
  user elastic
  password <elasticsearch password>
</match>

<match temp.rtx1200.tunnel.**>
  type  geoip
  geoip_lookup_key src_ip
  <record>
    geo_location  '{ "lat" : ${latitude["src_ip"]}, "lon" : ${longitude["src_ip"]} }'
    country_code  ${country_code["src_ip"]}
  </record>
  remove_tag_prefix temp.
  skip_adding_null_record  true
  flush_interval 1s
</match>
<match rtx1200.tunnel.**>
  type elasticsearch
  logstash_format true
  logstash_prefix rtx1200-tunnel
  include_tag_key true
  tag_key @log_name
  hosts localhost:9200
  buffer_type memory
  num_threads 1
  flush_interval 60
  retry_wait 1.0
  retry_limit 17
  user elastic
  password <elasticsearch password>
</match>

<match rtx1200.vpnuser.**>
  type elasticsearch
  logstash_format true
  logstash_prefix rtx1200-vpnuser
  include_tag_key true
  tag_key @log_name
  hosts localhost:9200
  buffer_type memory
  num_threads 1
  flush_interval 60
  retry_wait 1.0
  retry_limit 17
  user elastic
  password <elasticsearch password>
</match>

<match rtx1200.other.**>
  type elasticsearch
  logstash_format true
  logstash_prefix rtx1200-other
  include_tag_key true
  tag_key @log_name
  hosts localhost:9200
  buffer_type memory
  num_threads 1
  flush_interval 60
  retry_wait 1.0
  retry_limit 17
  user elastic
  password <elasticsearch password>
</match>

<match myhome.nas.**>
  type elasticsearch
  logstash_format true
  logstash_prefix myhome-nas
  include_tag_key true
  tag_key @log_name
  hosts localhost:9200
  buffer_type memory
  num_threads 1
  flush_interval 60
  retry_wait 1.0
  retry_limit 17
  user elastic
  password <elasticsearch password>
</match>

<match log.apache.242.**>
  type parser
  format multi_format
  key_name message
  remove_prefix log
  add_prefix parsed
  <pattern>
      # format with_extra_fields
      # base_format /(?<src_ip>.+)\s-\s-\s\[(?<date>.+)\]\s\"(?<method>.+)\".*\"(?<dest_url>.+)\"\s\"(?<browser>.+)\"/
      # time_format '%Y/%m/%d %H:%M:%S'
      format with_extra_fields
      base_format /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$/
      time_format %d/%b/%Y:%H:%M:%S %z
      extra_fields { "log_type": "access" }
  </pattern>
  <pattern>
      # format with_extra_fields
      # base_format /\[(?<date>.+)\]\s\[(?<event>.+)\]\s\[.+\]\s(?<message>.+)/
      # time_format '%Y/%m/%d %H:%M:%S'
      format with_extra_fields
      base_format /^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])? \[client (?<client>[^\]]*)\] (?<message>.*)$/
      time_format %d/%b/%Y:%H:%M:%S %z
      extra_fields { "log_type": "error" }
  </pattern>
</match>

<match parsed.apache.242.**>
  type rewrite_tag_filter
  <rule>
  key     log_type
  pattern ^access$
  tag     242.apache.access
  </rule>
  <rule>
  key     log_type
  pattern ^error$
  tag     242.apache.error
  </rule>
</match>

<match 242.apache.access>
    type elasticsearch
    logstash_format true
    logstash_prefix 242.apache-access
    include_tag_key true
    tag_key @log_name
    hosts localhost:9200
    buffer_type memory
    num_threads 1
    flush_interval 60
    retry_wait 1.0
    retry_limit 17
    user elastic
    password <elasticsearch password>
</match>

<match 242.apache.error>
    type elasticsearch
    logstash_format true
    logstash_prefix 242.apache-error
    include_tag_key true
    tag_key @log_name
    hosts localhost:9200
    buffer_type memory
    num_threads 1
    flush_interval 60
    retry_wait 1.0
    retry_limit 17
    user elastic
    password <elasticsearch password>
</match>

<match log.apache.app.**>
  type parser
  format multi_format
  key_name message
  remove_prefix log
  add_prefix parsed
  <pattern>
      # format with_extra_fields
      # base_format /(?<src_ip>.+)\s-\s-\s\[(?<date>.+)\]\s\"(?<method>.+)\".*\"(?<dest_url>.+)\"\s\"(?<browser>.+)\"/
      # time_format '%Y/%m/%d %H:%M:%S'
      format with_extra_fields
      base_format /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$/
      time_format %d/%b/%Y:%H:%M:%S %z
      extra_fields { "log_type": "access" }
  </pattern>
  <pattern>
      # format with_extra_fields
      # base_format /\[(?<date>.+)\]\s\[(?<event>.+)\]\s\[.+\]\s(?<message>.+)/
      # time_format '%Y/%m/%d %H:%M:%S'
      format with_extra_fields
      base_format /^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])? \[client (?<client>[^\]]*)\] (?<message>.*)$/
      time_format %d/%b/%Y:%H:%M:%S %z
      extra_fields { "log_type": "error" }
  </pattern>
</match>

<match parsed.apache.app.**>
  type rewrite_tag_filter
  <rule>
  key     log_type
  pattern ^access$
  tag     app.apache.access
  </rule>
  <rule>
  key     log_type
  pattern ^error$
  tag     app.apache.error
  </rule>
</match>

<match app.apache.access>
    type elasticsearch
    logstash_format true
    logstash_prefix app.apache-access
    include_tag_key true
    tag_key @log_name
    hosts localhost:9200
    buffer_type memory
    num_threads 1
    flush_interval 60
    retry_wait 1.0
    retry_limit 17
    user elastic
    password <elasticsearch password>
</match>

<match app.apache.error>
    type elasticsearch
    logstash_format true
    logstash_prefix app.apache-error
    include_tag_key true
    tag_key @log_name
    hosts localhost:9200
    buffer_type memory
    num_threads 1
    flush_interval 60
    retry_wait 1.0
    retry_limit 17
    user elastic
    password <elasticsearch password>
</match>

<match log.apache.s.**>
  type parser
  format multi_format
  key_name message
  remove_prefix log
  add_prefix parsed
  <pattern>
      # format with_extra_fields
      # base_format /(?<src_ip>.+)\s-\s-\s\[(?<date>.+)\]\s\"(?<method>.+)\".*\"(?<dest_url>.+)\"\s\"(?<browser>.+)\"/
      # time_format '%Y/%m/%d %H:%M:%S'
      format with_extra_fields
      base_format /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$/
      time_format %d/%b/%Y:%H:%M:%S %z
      extra_fields { "log_type": "access" }
  </pattern>
  <pattern>
      # format with_extra_fields
      # base_format /\[(?<date>.+)\]\s\[(?<event>.+)\]\s\[.+\]\s(?<message>.+)/
      # time_format '%Y/%m/%d %H:%M:%S'
      format with_extra_fields
      base_format /^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])? \[client (?<client>[^\]]*)\] (?<message>.*)$/
      time_format %d/%b/%Y:%H:%M:%S %z
      extra_fields { "log_type": "error" }
  </pattern>
</match>

<match parsed.apache.s.**>
  type rewrite_tag_filter
  <rule>
  key     log_type
  pattern ^access$
  tag     s.apache.access
  </rule>
  <rule>
  key     log_type
  pattern ^error$
  tag     s.apache.error
  </rule>
</match>

<match s.apache.access>
    type elasticsearch
    logstash_format true
    logstash_prefix s.apache-access
    include_tag_key true
    tag_key @log_name
    hosts localhost:9200
    buffer_type memory
    num_threads 1
    flush_interval 60
    retry_wait 1.0
    retry_limit 17
    user elastic
    password <elasticsearch password>
</match>

<match s.apache.error>
    type elasticsearch
    logstash_format true
    logstash_prefix s.apache-error
    include_tag_key true
    tag_key @log_name
    hosts localhost:9200
    buffer_type memory
    num_threads 1
    flush_interval 60
    retry_wait 1.0
    retry_limit 17
    user elastic
    password <elasticsearch password>
</match>

<match log.apache.piro.**>
  type parser
  format multi_format
  key_name message
  remove_prefix log
  add_prefix parsed
  <pattern>
      # format with_extra_fields
      # base_format /(?<src_ip>.+)\s-\s-\s\[(?<date>.+)\]\s\"(?<method>.+)\".*\"(?<dest_url>.+)\"\s\"(?<browser>.+)\"/
      # time_format '%Y/%m/%d %H:%M:%S'
      format with_extra_fields
      base_format /^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$/
      time_format %d/%b/%Y:%H:%M:%S %z
      extra_fields { "log_type": "access" }
  </pattern>
  <pattern>
      # format with_extra_fields
      # base_format /\[(?<date>.+)\]\s\[(?<event>.+)\]\s\[.+\]\s(?<message>.+)/
      # time_format '%Y/%m/%d %H:%M:%S'
      format with_extra_fields
      base_format /^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])? \[client (?<client>[^\]]*)\] (?<message>.*)$/
      time_format %d/%b/%Y:%H:%M:%S %z
      extra_fields { "log_type": "error" }
  </pattern>
</match>

<match parsed.apache.piro.**>
  type rewrite_tag_filter
  <rule>
  key     log_type
  pattern ^access$
  tag     piro.apache.access
  </rule>
  <rule>
  key     log_type
  pattern ^error$
  tag     piro.apache.error
  </rule>
</match>

<match piro.apache.access>
    type elasticsearch
    logstash_format true
    logstash_prefix piro.apache-access
    include_tag_key true
    tag_key @log_name
    hosts localhost:9200
    buffer_type memory
    num_threads 1
    flush_interval 60
    retry_wait 1.0
    retry_limit 17
    user elastic
    password <elasticsearch password>
</match>

<match piro.apache.error>
    type elasticsearch
    logstash_format true
    logstash_prefix piro.apache-error
    include_tag_key true
    tag_key @log_name
    hosts localhost:9200
    buffer_type memory
    num_threads 1
    flush_interval 60
    retry_wait 1.0
    retry_limit 17
    user elastic
    password <elasticsearch password>
</match>
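
After editing, it is worth validating the configuration before restarting; td-agent (Fluentd v1 under the hood) has a dry-run mode that parses the config without starting the daemon:

$ sudo td-agent --dry-run -c /etc/td-agent/td-agent.conf
$ sudo systemctl restart td-agent.service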

4.send logs to the fluentd server

4.1 RTX1200 & NAS

Configure log forwarding to the fluentd server from each device's browser admin UI.

4.2 apache

Install fluentd (td-agent) on the web server to send the apache logs.

For installation, see 1.1.

Edit td-agent.conf on the sending host:

  • td-agent.conf (sending server)
## Input
# Ship raw lines (format none) and let the aggregator do the parsing; its
# parser blocks read the "message" key, which format none produces. With
# format apache there would be no "message" field and parsing would fail.
# This host tags everything log.apache.242; use log.apache.app / .s / .piro
# on the other hosts. pos_file paths are examples.
<source>
  @type tail
  path /var/log/httpd/access_log
  pos_file /var/log/td-agent/httpd_access.pos
  format none
  tag log.apache.242
</source>
<source>
  @type tail
  path /var/log/httpd/error_log
  pos_file /var/log/td-agent/httpd_error.pos
  format none
  tag log.apache.242
</source>
<source>
  @type tail
  path /var/log/httpd/ssl_access_log
  pos_file /var/log/td-agent/httpd_ssl_access.pos
  format none
  tag log.apache.242
</source>
<source>
  @type tail
  path /var/log/httpd/ssl_error_log
  pos_file /var/log/td-agent/httpd_ssl_error.pos
  format none
  tag log.apache.242
</source>

## Output
# a single match suffices now that every input shares the log.apache.242 tag
<match log.apache.**>
  @type forward
  send_timeout 60s
  <server>
    host <server ip>
    port <port>
  </server>
</match>
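
To test the forward connection end to end, fluent-cat (bundled under /opt/td-agent/embedded/bin in td-agent 3) can inject a record with the right tag from the sending host; host, port, and tag here are this setup's example values:

$ echo '{"message":"test"}' | /opt/td-agent/embedded/bin/fluent-cat -h <server ip> -p <port> log.apache.242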

5.kibana access from the browser

Access Kibana from the browser at:

http://<server ip>:5601

basic auth

please enter the following:

ID:elastic
Password:<elastic password>

create index pattern

Please create index patterns from the browser.
Example: rtx1200-*
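
To see which indices actually exist (and therefore which patterns make sense), the _cat API helps:

$ curl -u elastic:<elastic password> 'localhost:9200/_cat/indices?v'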

confirm Discover

Please confirm that logs are arriving in the Discover tab in the browser.

Watcher setting(Alerting on Cluster and Index Events | X-Pack for the Elastic Stack [5.4] | Elastic)

Watcher is an X-Pack feature that provides alerting.

Set it up from Kibana in the browser.

Example watch (sends alerts to Chatwork):

{
  "trigger": {
    "schedule": {
      "interval": "1m"
    }
  },
  "input": {
    "search": {
      "request": {
        "search_type": "query_then_fetch",
        "indices": [
          "rtx1200-tunnel-*"
        ],
        "types": [],
        "body": {
          "query": {
            "range": {
              "@timestamp": {
                "gte": "now-3m"
              }
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.hits.total": {
        "gte": 3
      }
    }
  },
  "actions": {
    "send_chat": {
      "webhook": {
        "scheme": "https",
        "host": "api.chatwork.com",
        "port": 443,
        "method": "post",
        "path": "/v2/rooms/<room id>/messages",
        "params": {
          "body": "[info][title]VPN Log[/title]type:{{ctx.payload.hits.hits.2._source.log_type}},src-ip:{{ctx.payload.hits.hits.2._source.src_ip}}:{{ctx.payload.hits.hits.2._source.@timestamp}}\ntype:{{ctx.payload.hits.hits.1._source.log_type}},src-ip:{{ctx.payload.hits.hits.1._source.src_ip}}:{{ctx.payload.hits.hits.1._source.@timestamp}}\ntype:{{ctx.payload.hits.hits.0._source.log_type}},src-ip:{{ctx.payload.hits.hits.0._source.src_ip}}:{{ctx.payload.hits.hits.0._source.@timestamp}}[/info]"
        },
        "headers": {
          "X-ChatWorkToken": "<ChatWorkToken>"
        }
      }
    }
  }
}
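
The same watch can also be registered through the Watcher API instead of the Kibana UI; saving the JSON above as watch.json, something like this should work (vpn_log is just an example watch id):

$ curl -u elastic:<elastic password> -H "Content-Type: application/json" -XPUT 'localhost:9200/_xpack/watcher/watch/vpn_log' -d @watch.json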

6.Management Tools

cerebro(GitHub - lmenezes/cerebro)

cerebro works with elasticsearch and lets you manipulate the created indices through a GUI.

$ wget https://github.com/lmenezes/cerebro/releases/download/v0.7.2/cerebro-0.7.2.zip
$ unzip cerebro-*.zip
$ cd cerebro-*
$ bin/cerebro
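
By default cerebro listens on port 9000; per its README you can override the bind address and port via Play system properties, e.g.:

$ bin/cerebro -Dhttp.port=9000 -Dhttp.address=127.0.0.1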

Then access localhost:9000 in your browser.

curator(Curator Reference [5.4] | Elastic)

curator works with elasticsearch to close and delete existing indices.

Step 1 install

  • pip
$ pip install elasticsearch-curator
  • rpm/yum
$ rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
$ yum install elasticsearch-curator
or install the RPM directly:
$ rpm -ivh https://packages.elastic.co/curator/5/centos/7/Packages/elasticsearch-curator-5.4.1-1.x86_64.rpm

Step 2 create curator.yml and action_file

create curator.yml in ~/.curator

  • curator.yml
client:
  hosts:
    - localhost
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth: elastic:<elasticsearch password>
# with X-Pack installed, http_auth must be set to the elastic user's credentials
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

Create an action file named delete_indices:

  • delete_indices
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 5 days (based on index name) for rtx1200-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: rtx1200-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 5
      exclude:
  2:
    action: close
    description: >-
      Close indices older than 4 days (based on index name) for rtx1200-
      prefixed indices.
    options:
      ignore_empty_list: True
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: rtx1200-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 4
      exclude:
  3:
    action: delete_indices
    description: >-
      Delete indices older than 5 days (based on index name) for apache
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: apache
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 5
      exclude:
  4:
    action: close
    description: >-
      Close indices older than 4 days (based on index name) for apache
      prefixed indices.
    options:
      ignore_empty_list: True
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: apache
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 4
      exclude:

  5:
    action: delete_indices
    description: >-
      Delete indices older than 5 days (based on index name) for dot-prefixed
      indices (e.g. .monitoring-*). Ignore the error if the filter does not
      result in an actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: .
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 5
      exclude:
  6:
    action: close
    description: >-
      Close indices older than 4 days (based on index name) for dot-prefixed
      indices (e.g. .monitoring-*).
    options:
      ignore_empty_list: True
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: .
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 4
      exclude:
  7:
    action: delete_indices
    description: >-
      Delete indices older than 5 days (based on index name) for myhome-nas
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: myhome-nas
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 5
      exclude:
  8:
    action: close
    description: >-
      Close indices older than 4 days (based on index name) for myhome-nas
      prefixed indices.
    options:
      ignore_empty_list: True
      delete_aliases: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: myhome-nas
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 4
      exclude:

Step3 Run curator

run once

$ curator ~/.curator/delete_indices
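
curator also has a --dry-run flag that only logs what each action would do; worth a pass before deleting anything for real:

$ curator --dry-run ~/.curator/delete_indices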

run regularly

$ crontab -e

example cron

00 02 * * * curator ~/.curator/delete_indices