Elasticsearch is an open-source, distributed search and analytics engine built on top of Apache Lucene, a full-text search engine library.
Elasticsearch is more than Lucene, and more than just a full-text search engine:
Core modules
Elasticsearch use cases:
| Host | IP | Role |
| --- | --- | --- |
| docker | 192.168.67.10 | cerebro/elasticsearch-head |
| elk1 | 192.168.67.31 | elasticsearch |
| elk2 | 192.168.67.32 | elasticsearch |
| elk3 | 192.168.67.33 | elasticsearch |
| elk4 | 192.168.67.34 | logstash |
| elk5 | 192.168.67.35 | kibana |
Software installation (on the elasticsearch nodes)
rpm -ivh elasticsearch-7.6.1-x86_64.rpm

Edit the configuration (/etc/elasticsearch/elasticsearch.yml):
- cluster.name: my-es
- path.data: /var/lib/elasticsearch
- path.logs: /var/log/elasticsearch
- bootstrap.memory_lock: true
- network.host: 0.0.0.0
- http.port: 9200
- discovery.seed_hosts: ["elk1", "elk2", "elk3"]
- cluster.initial_master_nodes: ["elk1", "elk2", "elk3"]




System settings

Raise resource limits for the elasticsearch user:
- vim /etc/security/limits.conf
- elasticsearch soft memlock unlimited
- elasticsearch hard memlock unlimited
- elasticsearch - nofile 65535
- elasticsearch - nproc 4096

Allow the systemd unit to lock memory:
- vim /usr/lib/systemd/system/elasticsearch.service
- [Service]
- ...
- LimitMEMLOCK=infinity
- systemctl daemon-reload

Disable swap:
- swapoff -a
- vim /etc/fstab
- #/dev/mapper/rhel-swap swap swap defaults 0 0

Start the service:
- systemctl daemon-reload
- systemctl enable --now elasticsearch




Deploy cerebro (on the docker host):
- docker pull lmenezes/cerebro
- docker run -d --name cerebro -p 9000:9000 lmenezes/cerebro

Install dependencies:
- yum install -y nodejs-9.11.2-1nodesource.x86_64.rpm
-
- tar xf phantomjs-2.1.1-linux-x86_64.tar.bz2
- cd phantomjs-2.1.1-linux-x86_64/bin/
- mv phantomjs /usr/local/bin/
- phantomjs


Install the elasticsearch-head plugin:
- unzip elasticsearch-head-master.zip
- cd elasticsearch-head-master/
- npm install --registry=https://registry.npm.taobao.org
- vim _site/app.js (point the default connection URL at an Elasticsearch node)



Start the service:
- npm run start &
- netstat -antlp|grep :9100


Enable cross-origin access in the ES configuration so head can connect:
- vim /etc/elasticsearch/elasticsearch.yml
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- systemctl restart elasticsearch.service



Create an index

Check ES status
Note: a value of true for this setting does not by itself make the node the master; the actual master is elected from among the master-eligible nodes.
Configuring multiple data paths can make writes unevenly distributed; it is recommended to specify a single data path and use a RAID 0 array for the disks instead of costly SSDs.
- vim /etc/elasticsearch/elasticsearch.yml
- node.master: true
- node.data: false
- node.ingest: true
- node.ml: false
- Roles can be combined per node; at least one node in the cluster must keep node.ingest: true.
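As an illustrative fragment (not part of the original deployment), the same role settings can also describe a coordinating-only node, which disables all four roles and merely routes client requests to the rest of the cluster:

```yaml
# Coordinating-only node (ES 7.x role syntax): it holds no data, is not
# master-eligible, and runs no ingest pipelines or ML jobs; it only
# receives client requests and forwards them to the appropriate nodes.
node.master: false
node.data: false
node.ingest: false
node.ml: false
```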

If the service fails to restart, the node still holds data that must be cleaned up or migrated to other nodes first.



View:



Viewing with different plugins




Create a new virtual machine, elk4, and deploy Logstash:
- yum install -y jdk-11.0.15_linux-x64_bin.rpm
- yum install -y logstash-7.6.1.rpm


Testing from the command line:
/usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

- cd /etc/logstash/conf.d
- vim test.conf
-
- input {
- stdin { }
- }
-
- output {
- stdout {}
-
- elasticsearch {
- hosts => "192.168.67.31:9200"
- index => "logstash-%{+YYYY.MM.dd}"
- }
- }
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
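The `%{+YYYY.MM.dd}` sprintf reference in the index name expands to the event's timestamp, so a new index is created per day. A quick shell illustration (assuming an event ingested today) of what the resulting index name looks like:

```shell
# Joda-style YYYY.MM.dd maps to year.month.day,
# so today's events land in an index named like logstash-2022.06.15
echo "logstash-$(date +%Y.%m.%d)"
```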





Read a log file from the beginning:
- vim fileput.conf
-
- input {
- file {
- path => "/var/log/messages"
- start_position => "beginning"
- }
- }
-
- output {
- stdout {}
-
- elasticsearch {
- hosts => "192.168.67.31:9200"
- index => "syslog-%{+YYYY.MM.dd}"
- }
-
- }
-
- /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/fileput.conf




The .sincedb file records how far each file has been read, so data is not read twice.
cd /usr/share/logstash/data/plugins/inputs/file/

A sincedb record contains six fields.
Delete the file to re-read inputs from the beginning.
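In recent Logstash versions the six fields are, in order: the watched file's inode, major device number, minor device number, current byte offset, last-active timestamp, and path. A shell sketch with a made-up record (the values are illustrative, not from a real sincedb):

```shell
# Illustrative sincedb record:
#   inode    major  minor  offset  last_active        path
rec="16789321 0 64768 119226 1655260800.123456 /var/log/messages"
set -- $rec
# The offset (field 4) is the byte position Logstash will resume from.
echo "inode=$1 offset=$4 path=$6"
```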
Write events to a file with a custom line format:
- vim file.conf
-
- input {
- stdin { }
- }
- output {
- file {
- path => "/tmp/logstash.txt"
- codec => line { format => "custom format: %{message}"}
- }
- }
-
-
- /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf
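The line codec above writes each event through its format string, substituting `%{message}` with the event's message field. For an event whose message is `hello` (a hypothetical input), the line appended to /tmp/logstash.txt would be:

```shell
# Sketch of the line codec's output for message "hello":
printf 'custom format: %s\n' 'hello'
```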


Receive logs over the syslog protocol:
- vim syslog.conf
-
- input {
- syslog {}
- }
-
- output {
- stdout {}
-
- elasticsearch {
- hosts => "192.168.67.31:9200"
- index => "rsyslog-%{+YYYY.MM.dd}"
- }
-
-
- }
-
- /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf


On the hosts whose logs should be forwarded:
- vim /etc/rsyslog.conf
- Uncomment the following lines:
- $ModLoad imudp
- $UDPServerRun 514
-
- Add a forwarding rule at the end (@@ forwards over TCP; a single @ would use UDP):
- *.* @@192.168.67.34:514



A multiline filter merges a multi-line log record into a single event.
- cd /var/log/elasticsearch
-
- scp my-es.log elk4:/var/log/

Run on elk4:
- vim multiline.conf
-
- input {
-
- file {
- path => "/var/log/my-es.log"
- start_position => "beginning"
- codec => multiline {
- pattern => "^\["
- negate => true
- what => "previous"
- }
- }
-
- }
-
- output {
- stdout {}
-
- elasticsearch {
- hosts => "192.168.67.31:9200"
- index => "myeslog-%{+YYYY.MM.dd}"
- }
-
- }
-
- /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/multiline.conf
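The codec settings above mean: any line that does NOT match `^\[` (negate => true) belongs to the previous event. A rough awk mimic of that merge logic, run on a few made-up log lines (the input is illustrative, not from the real my-es.log):

```shell
# Lines starting with "[" open a new event; all other lines
# (e.g. a Java stack trace) are appended to the previous one.
printf '%s\n' \
  '[2022-06-15T10:00:00] starting' \
  'java.lang.Exception: boom' \
  '    at Foo.bar(Foo.java:1)' \
  '[2022-06-15T10:00:01] recovered' |
awk '/^\[/ { if (buf != "") print buf; buf = $0; next }
     { buf = buf " " $0 }
     END { if (buf != "") print buf }'
```

The four input lines collapse into two events: the first carries the exception and stack-trace lines, the second is the next timestamped record.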




Install httpd (on elk4):
- yum install -y httpd
- systemctl enable --now httpd
- echo www.westos.org > /var/www/html/index.html


Access the site to generate log entries:
ab -c1 -n 300 http://192.168.67.34/index.html

Write the pipeline file:
- vim grok.conf
-
- input {
- file {
- path => "/var/log/httpd/access_log"
- start_position => "beginning"
- }
- }
-
- filter {
- grok {
- match => { "message" => "%{HTTPD_COMBINEDLOG}" }
- }
- }
-
- output {
- stdout {}
-
- elasticsearch {
- hosts => "192.168.67.31:9200"
- index => "apachelog-%{+YYYY.MM.dd}"
- }
-
- }
-
- /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/grok.conf
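The HTTPD_COMBINEDLOG grok pattern parses Apache's "combined" log format into fields such as clientip, request, and response. A rough shell illustration of that extraction on a sample log line (the line is made up in the shape ab/httpd would produce; grok's actual output is a structured event, not text):

```shell
# Sample Apache combined-format access log line:
line='192.168.67.10 - - [15/Jun/2022:10:00:00 +0800] "GET /index.html HTTP/1.0" 200 15 "-" "ApacheBench/2.3"'
# Split on double quotes: $2 is the request, the first word of $3 is the status.
echo "$line" | awk -F'"' '{ split($3, s, " "); print "request=" $2 " status=" s[1] }'
```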




Create a new virtual machine, elk5, and deploy Kibana:
rpm -ivh kibana-7.6.1-x86_64.rpm

Edit the configuration file (/etc/kibana/kibana.yml):
- server.host: "0.0.0.0"
-
- elasticsearch.hosts: ["http://192.168.67.31:9200"]
-
- i18n.locale: "zh-CN"



Start:
- systemctl enable --now kibana
- netstat -antlp |grep :5601

Access:

Create an index pattern







Beforehand, run ab -c1 -n 500 http://192.168.67.34/index.html from each node to generate some traffic.



Save the visualization

Add the two visualizations created above to a dashboard.

