• ELK Enterprise Log Analysis Platform


    Environment

    Host       IP              Role
    k8s1       192.168.81.10   cerebro
    server1    192.168.81.11   elasticsearch
    server2    192.168.81.12   elasticsearch
    server3    192.168.81.13   elasticsearch
    server4    192.168.81.14   logstash
    server5    192.168.81.15   kibana

    Software installation

    [root@server1 ~]# rpm -ivh elasticsearch-7.6.1-x86_64.rpm
    

    Modify the configuration

    [root@server1 ~]# cd /etc/elasticsearch/
    [root@server1 elasticsearch]# vim elasticsearch.yml
    cluster.name: my-es
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    bootstrap.memory_lock: true
    network.host: 0.0.0.0
    http.port: 9200
    discovery.seed_hosts: ["server1", "server2", "server3"]
    cluster.initial_master_nodes: ["server1", "server2", "server3"]

    Adjust system limits

    [root@server1 ~]# vim /etc/security/limits.conf
    elasticsearch soft memlock unlimited
    elasticsearch hard memlock unlimited
    elasticsearch - nofile 65535
    elasticsearch - nproc 4096

    Modify the systemd unit file

    [root@server1 ~]# vim /usr/lib/systemd/system/elasticsearch.service
    [Service]
    ...
    LimitMEMLOCK=infinity

    [root@server1 ~]# systemctl daemon-reload
    [root@server1 ~]# swapoff -a
    [root@server1 ~]# vim /etc/fstab
    #/dev/mapper/rhel-swap swap swap defaults 0 0
    [root@server1 ~]# systemctl daemon-reload
    [root@server1 ~]# systemctl enable --now elasticsearch

    Once server1 is configured, copy the same configuration to server2 and server3.

    cerebro deployment

    cerebro upstream: GitHub - lmenezes/cerebro

    Start the service with docker

    [root@k8s1 ~]# docker pull lmenezes/cerebro
    [root@k8s1 ~]# docker run -d --name cerebro -p 9000:9000 lmenezes/cerebro

     

    elasticsearch cluster node roles

    Master:
            Handles cluster-level operations such as index creation and deletion and data rebalancing. Master nodes do not index or search data, so their load is light. When the master node fails or becomes unreachable, the ES cluster automatically elects a new leader from the other master-eligible nodes.
    Data Node:
            Indexes and searches the cluster's data; usually the most heavily loaded role.
    Coordinating Node:
            Formerly the client node; routes requests and merges results. Every node is a coordinating node by default, and this role cannot be turned off.
    Ingest Node:
            Pre-processes documents before they are indexed.
    [root@server1 ~]# systemctl stop elasticsearch.service
    [root@server1 ~]# vim /etc/elasticsearch/elasticsearch.yml
    node.master: true
    node.data: false
    node.ingest: true
    node.ml: false
    [root@server1 ~]# systemctl restart elasticsearch.service
    [root@server2 ~]# systemctl stop elasticsearch.service
    [root@server2 ~]# vim /etc/elasticsearch/elasticsearch.yml
    node.master: true
    node.data: true
    node.ingest: false
    node.ml: false
    [root@server2 ~]# systemctl restart elasticsearch.service
    [root@server3 ~]# systemctl stop elasticsearch.service
    [root@server3 ~]# vim /etc/elasticsearch/elasticsearch.yml
    node.master: true
    node.data: true
    node.ingest: false
    node.ml: false
    [root@server3 ~]# systemctl restart elasticsearch.service

    elasticsearch node tuning

    logstash

    Deployment

    [root@server4 ~]# yum install -y jdk-11.0.15_linux-x64_bin.rpm
    [root@server4 ~]# yum install -y logstash-7.6.1.rpm

    Command-line usage

    Standard input to standard output

    [root@server4 bin]# /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
    

    Standard input to a file

    [root@server4 conf.d]# vim /etc/logstash/conf.d/file.conf
    input {
      stdin { }
    }
    output {
      file {
        path => "/tmp/logstash.txt"                             # output file path
        codec => line { format => "custom format: %{message}"}  # custom record format
      }
    }
    [root@server4 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf
    [root@server4 conf.d]# cat /tmp/logstash.txt

    elasticsearch-head plugin

    Install dependencies

    [root@k8s1 ~]# yum install -y bzip2
    [root@k8s1 ~]# tar jxf phantomjs-2.1.1-linux-x86_64.tar.bz2
    [root@k8s1 ~]# cd phantomjs-2.1.1-linux-x86_64
    [root@k8s1 phantomjs-2.1.1-linux-x86_64]# cp bin/phantomjs /usr/local/bin/
    [root@k8s1 ~]# yum install -y fontconfig
    [root@k8s1 ~]# phantomjs
    phantomjs>

    Install the plugin

    [root@k8s1 ~]# rpm -ivh nodejs-9.11.2-1nodesource.x86_64.rpm
    [root@k8s1 ~]# yum install -y unzip
    [root@k8s1 ~]# unzip elasticsearch-head-master.zip
    [root@k8s1 ~]# cd elasticsearch-head-master/
    [root@k8s1 elasticsearch-head-master]# npm install --registry=https://registry.npm.taobao.org
    [root@k8s1 elasticsearch-head-master]# vim _site/app.js   # point the default connect URL (localhost:9200) at an ES node

    Start the service

    [root@k8s1 elasticsearch-head-master]# npm  run start &
    

    Modify the es configuration

    [root@server1 ~]# vim /etc/elasticsearch/elasticsearch.yml
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    [root@server1 ~]# systemctl restart elasticsearch.service

    Create an index

    elasticsearch output plugin

    [root@server4 conf.d]# pwd
    /etc/logstash/conf.d
    [root@server4 conf.d]# vim test.conf
    input {
      stdin { }
    }
    output {
      stdout {}
      elasticsearch {
        hosts => "192.168.81.11:9200"
        index => "logstash-%{+YYYY.MM.dd}"
      }
    }
    [root@server4 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf

    Once it starts successfully, type some input, then press Ctrl+C to exit.
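    The `%{+YYYY.MM.dd}` token in the index name is a date pattern that Logstash expands from each event's timestamp, so every day's data lands in its own index. A rough Python equivalent of that expansion (a hypothetical helper for illustration, not part of Logstash):

```python
from datetime import datetime

def expand_index(pattern, ts):
    # Translate the %{+YYYY.MM.dd} token into a concrete date string.
    return pattern.replace("%{+YYYY.MM.dd}", ts.strftime("%Y.%m.%d"))

print(expand_index("logstash-%{+YYYY.MM.dd}", datetime(2023, 11, 21)))
# logstash-2023.11.21
```

    Daily indices like this make it cheap to expire old data: dropping a whole day is a single index deletion rather than a document-by-document purge.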

    file input plugin

    [root@server4 conf.d]# vim test.conf
    input {
      file {
        path => "/var/log/messages"
        start_position => "beginning"
      }
    }
    output {
      stdout {}
      elasticsearch {
        hosts => "192.168.81.11:9200"
        index => "syslog-%{+YYYY.MM.dd}"
      }
    }
    [root@server4 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf

    The .sincedb file records how far each file has been read, so data is not ingested twice.

    [root@server4 conf.d]# cd /usr/share/logstash/data/plugins/file
    [root@server4 file]# ls -i /var/log/messages
    [root@server4 file]# cat .sincedb_452905a167cf4509fd08acb964fdb20c

    A sincedb record has six fields:

    1. inode number
    2. major device number of the file system
    3. minor device number of the file system
    4. current byte offset within the file
    5. last activity timestamp (a floating point number)
    6. last known path that this record matched
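    As a quick sanity check, the six whitespace-separated fields can be split out like this (a small sketch; the sample line below is illustrative, not real sincedb output):

```python
def parse_sincedb_line(line):
    # Fields: inode, major dev, minor dev, byte offset, last activity, path.
    inode, major, minor, offset, ts, path = line.split(None, 5)
    return {
        "inode": int(inode),
        "dev_major": int(major),
        "dev_minor": int(minor),
        "offset": int(offset),
        "last_activity": float(ts),
        "path": path,
    }

rec = parse_sincedb_line("173805 0 64768 1024 1700000000.0 /var/log/messages")
print(rec["offset"], rec["path"])  # 1024 /var/log/messages
```

    Comparing the first field against `ls -i /var/log/messages` confirms which file a given record tracks.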

    Delete it to re-read the file from the beginning

    [root@server4 file]# rm -f .sincedb_452905a167cf4509fd08acb964fdb20c
    

    syslog plugin

    logstash acts as a syslog server

    [root@server4 conf.d]# vim syslog.conf
    input {
      syslog {}
    }
    output {
      stdout {}
      elasticsearch {
        hosts => "192.168.81.11:9200"
        index => "syslog-%{+YYYY.MM.dd}"
      }
    }
    [root@server4 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf

    Configure log forwarding on the client

    [root@server1 ~]# vim /etc/rsyslog.conf
    $ModLoad imudp
    $UDPServerRun 514
    *.* @@192.168.81.14:514
    [root@server1 ~]# systemctl restart rsyslog.service
    [root@server1 ~]# logger server1

    `@@` forwards over TCP; a single `@` would forward over UDP.
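    The syslog input parses the RFC 3164 priority header of each message: facility and severity are packed into one number, `PRI = facility * 8 + severity`. A minimal decoder sketch of that arithmetic:

```python
SEVERITIES = ["emerg", "alert", "crit", "err",
              "warning", "notice", "info", "debug"]

def decode_pri(pri):
    # facility = pri // 8, severity = pri % 8
    return pri // 8, SEVERITIES[pri % 8]

print(decode_pri(13))  # (1, 'notice') -- user-level facility, notice severity
```

    `<13>` is what a plain `logger` call emits by default (facility `user`, severity `notice`), which is why those values show up in the parsed events.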

    Multiline codec plugin

    The multiline codec merges log records that span several lines into a single event.
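    With `negate => true` and `what => "previous"`, any line that does NOT match the pattern (here `^\[`) is appended to the preceding event. The merging rule can be sketched in Python (an illustration of the semantics, not Logstash's implementation):

```python
import re

def merge_multiline(lines, pattern=r"^\[", negate=True):
    # negate=True: lines NOT matching the pattern are continuation lines
    # and get glued onto the previous event ("what => previous").
    events, current = [], []
    for line in lines:
        matched = re.search(pattern, line) is not None
        continuation = (not matched) if negate else matched
        if continuation and current:
            current.append(line)
        else:
            if current:
                events.append("\n".join(current))
            current = [line]
    if current:
        events.append("\n".join(current))
    return events

logs = ["[2023-11-21T10:00:00] error", "  at Foo.bar()", "[2023-11-21T10:00:01] ok"]
print(merge_multiline(logs))  # 2 events: the stack-trace line joins the first
```

    This is why the pattern anchors on `^\[`: ES log entries begin with a bracketed timestamp, so stack-trace lines without one belong to the entry above.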

    Copy a sample log file from server1

    [root@server1 elasticsearch]# cd /var/log/elasticsearch
    [root@server1 elasticsearch]# scp my-es.log server4:/var/log/
    [root@server4 conf.d]# vim test.conf
    input {
      file {
        path => "/var/log/my-es.log"
        start_position => "beginning"
        codec => multiline {
          pattern => "^\["
          negate => true
          what => "previous"
        }
      }
    }
    output {
      stdout {}
      elasticsearch {
        hosts => "192.168.81.11:9200"
        index => "myeslog-%{+YYYY.MM.dd}"
      }
    }
    [root@server4 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf

    grok filtering

    [root@server4 ~]# yum install -y httpd
    [root@server4 ~]# systemctl enable --now httpd
    [root@server4 ~]# echo www.westos.org > /var/www/html/index.html

    Hit the site to generate some access-log entries

    [root@k8s1 ~]# ab -c1 -n 100 http://192.168.81.14/index.html
    

    [root@server4 conf.d]# vim grok.conf
    input {
      file {
        path => "/var/log/httpd/access_log"
        start_position => "beginning"
      }
    }
    filter {
      grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
      }
    }
    output {
      stdout {}
      elasticsearch {
        hosts => "192.168.81.11:9200"
        index => "apachelog-%{+YYYY.MM.dd}"
      }
    }
    [root@server4 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/grok.conf
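    The `%{HTTPD_COMBINEDLOG}` pattern splits an Apache combined-format line into named fields (clientip, verb, request, response, bytes, referrer, agent, ...). A simplified regex sketch of what grok extracts (not the full grok grammar):

```python
import re

# Simplified take on grok's HTTPD_COMBINEDLOG pattern.
COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('192.168.81.1 - - [21/Nov/2023:10:00:00 +0800] '
        '"GET /index.html HTTP/1.0" 200 15 "-" "ApacheBench/2.3"')
m = COMBINED.match(line)
print(m.group("clientip"), m.group("verb"), m.group("response"))
# 192.168.81.1 GET 200
```

    Each named group becomes a structured field on the event, which is what later makes per-client or per-status aggregations possible in Kibana.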

     

    kibana data visualization

    Deployment

    [root@server5 ~]# rpm -ivh kibana-7.6.1-x86_64.rpm
    [root@server5 ~]# cd /etc/kibana/
    [root@server5 kibana]# vim kibana.yml
    server.host: "0.0.0.0"
    elasticsearch.hosts: ["http://192.168.81.11:9200"]
    i18n.locale: "zh-CN"
    [root@server5 kibana]# systemctl enable --now kibana

    Visit the web UI: http://192.168.81.15:5601

     

    Custom visualizations

    Top-visits ranking

    Create a dashboard for large-screen display

    The dashboard updates in real time for live monitoring.

  • Original article: https://blog.csdn.net/m0_64028800/article/details/134515403