# ELK Stack (on Docker)
## Table of Contents

- [Deployment](#deployment)
- [Configuration](#configuration)
## Deployment

See the Docker Compose file.

Things to note:

- By default, this Docker Compose file persists data with a Docker volume
  (under `services -> elasticsearch -> volumes`). Adjust as needed.
- After the first startup, reset the passwords:

  ```shell
  docker-compose exec -T elasticsearch bin/elasticsearch-setup-passwords auto --batch
  ```

  Save the output, then update the corresponding user and password fields in
  `./kibana/config/kibana.yml`, `./logstash/config/logstash.yml`, and
  `./logstash/pipeline/logstash.conf`.
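For reference, the Kibana credential fields in question look roughly like this. This is a sketch of the relevant part of `./kibana/config/kibana.yml` using the standard Kibana setting names; the password value is a placeholder to be replaced with the one printed by `elasticsearch-setup-passwords`:

```yaml
# ./kibana/config/kibana.yml (excerpt, placeholder password)
elasticsearch.username: kibana_system
elasticsearch.password: "<password from the setup-passwords output>"
```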
## Configuration

Requirement: accept logs arriving on UDP port 1514, and rename some of the fields they contain.

Incoming log:

```
<8>Nov 4 05:14:19 ***SOFT {"app":"INT","flow_start_time":"TIMESTAMP","server_total_packet":"INT","client_total_byte":"INT","client_total_packet":"INT","netlink":"INT","client_ip_addr":"IP","flow_end_time":"TIMESTAMP","client_port":"INT","log_type":"UDP","protocol":"INT","server_port":"INT","server_ip_addr":"IP","server_total_byte":"INT","direction_mask":"INT","host_name":"STRING"}
```

Pipeline configuration, `./logstash/pipeline/logstash.conf`:
```conf
input {
  # default
  beats {
    port => 5044
  }
  # default
  tcp {
    port => 5000
  }
  # the port we want to listen on
  udp {
    port => 1514
  }
}

filter {
  grok {
    # Match the JSON portion of the incoming log
    match => {
      "message" => "(***SOFT) %{GREEDYDATA:input_log_json}"
    }
  }
  json {
    source => "input_log_json"
    # https://doc.yonyoucloud.com/doc/logstash-best-practice-cn/filter/json.html
    # Without a target, the JSON content is unpacked directly at the top level.
  }
  # https://doc.yonyoucloud.com/doc/logstash-best-practice-cn/filter/mutate.html
  # Transformations (field renames)
  mutate {
    rename => {
      "app"             => "ID"
      "flow_start_time" => "start_time"
      "flow_end_time"   => "end_time"
      "client_port"     => "sport"
      "server_port"     => "dport"
      "client_ip_addr"  => "src"
      "server_ip_addr"  => "dsc"
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "****"
    ecs_compatibility => disabled
    index => "**-%{+YYYY.MM.dd}"
  }
}
```
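The effect of the filter chain above (grok extracts the JSON payload after the `***SOFT` tag, the `json` filter unpacks it at the top level, and `mutate` renames the fields) can be sketched in plain Python. This is an illustration only, not part of the pipeline; the sample values are placeholders standing in for real data:

```python
import json
import re

# Field renames, mirroring the mutate filters in logstash.conf.
RENAMES = {
    "app": "ID",
    "flow_start_time": "start_time",
    "flow_end_time": "end_time",
    "client_port": "sport",
    "server_port": "dport",
    "client_ip_addr": "src",
    "server_ip_addr": "dsc",
}

def transform(message: str) -> dict:
    # Equivalent of the grok match: grab everything after the "***SOFT " marker.
    m = re.search(r"\*\*\*SOFT (?P<input_log_json>.+)", message)
    if m is None:
        raise ValueError("message does not match the expected format")
    # Equivalent of the json filter: unpack the payload at the top level.
    event = json.loads(m.group("input_log_json"))
    # Equivalent of the mutate/rename filters.
    for old, new in RENAMES.items():
        if old in event:
            event[new] = event.pop(old)
    return event

sample = ('<8>Nov 4 05:14:19 ***SOFT {"app":"1","client_port":"1234",'
          '"server_port":"80","client_ip_addr":"10.0.0.1",'
          '"server_ip_addr":"10.0.0.2","flow_start_time":"t0","flow_end_time":"t1"}')
print(transform(sample))
```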
Also open the corresponding port in `docker-compose.yml`:

```yaml
...
services:
  ...
  logstash:
    ...
    ports:
      - ...
      - "1514:1514/udp"
```
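Once the stack is up, the UDP input can be smoke-tested by sending one sample line to port 1514 and then checking Elasticsearch/Kibana for the resulting document. A minimal sketch, assuming Logstash is reachable on localhost; note that UDP is connectionless, so a successful send only proves the packet was handed to the network stack, not that Logstash received it:

```python
import socket

# A sample line shaped like the incoming log above (placeholder values).
SAMPLE = ('<8>Nov 4 05:14:19 ***SOFT {"app":"1","log_type":"UDP",'
          '"client_ip_addr":"10.0.0.1","server_ip_addr":"10.0.0.2"}')

def send_test_log(host: str = "127.0.0.1", port: int = 1514) -> int:
    # Fire one UDP datagram at the Logstash udp input; returns bytes sent.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        return sock.sendto(SAMPLE.encode("utf-8"), (host, port))

if __name__ == "__main__":
    print(f"sent {send_test_log()} bytes")
```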