• Installing and Deploying the ELK Stack (Elasticsearch, Logstash, and Kibana)


    Preface

    In today's digital era, the rapid growth of information confronts organizations and enterprises of every kind with the challenge of processing and analyzing massive volumes of data. Against this backdrop, the ELK Stack (Elasticsearch, Logstash, and Kibana), a powerful open-source tool combination, has become a go-to solution for data management, search, and visualization. Whether you are monitoring logs, analyzing data in real time, or building dashboards to track business metrics, the ELK Stack offers a one-stop solution.

    Each component of the ELK Stack plays a key role:

    • Elasticsearch: a distributed search and analytics engine that stores, searches, and analyzes huge volumes of data efficiently. Its powerful full-text search and distributed architecture make it possible to locate the information you need quickly, even at massive scale.
    • Logstash: a data-processing engine for collecting, transforming, and shipping data. It ingests data from a wide range of sources, processes it, and forwards it to Elasticsearch or other destinations. Whether the input is logs, events, or metrics, Logstash can normalize the data and deliver it to the right place.
    • Kibana: the visualization layer of the ELK Stack. Its intuitive, friendly interface lets users explore, analyze, and present data by building dashboards, charts, and visualizations, so even people without deep data-analysis expertise can extract valuable insights from their data.

    In this document we will walk through how to install, configure, and use the ELK Stack.

    System environment

    • OS: Ubuntu 20.04 LTS
    • Hardware: 8 CPU cores, 12 GB RAM, 500 GB disk

    Install Java

    sudo apt-get update
    # Install a JDK matching your system; check the installed version with java --version
    sudo apt install openjdk-16-jre-headless

    Add the Elastic APT repository

    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    sudo sh -c 'echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" > /etc/apt/sources.list.d/elastic-8.x.list'
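    Note that apt-key is deprecated on recent Ubuntu releases and prints a warning. A sketch of the keyring-based alternative (the keyring path is a common convention, not something from the original article):

    ```shell
    # Download the Elastic signing key into a dedicated keyring
    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch \
      | sudo gpg --dearmor -o /usr/share/keyrings/elastic-archive-keyring.gpg

    # Reference the keyring explicitly in the repository definition
    echo "deb [signed-by=/usr/share/keyrings/elastic-archive-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" \
      | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
    ```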

    Update the package index

    sudo apt-get update

    Install Elasticsearch

    sudo apt-get install elasticsearch

    After installation, enable the service at boot and start it

    sudo systemctl daemon-reload
    sudo systemctl enable elasticsearch.service && sudo systemctl start elasticsearch.service

    Generate a password for logging in to Elasticsearch. The username is elastic; a random password is printed to the screen.

    cd /usr/share/elasticsearch && bin/elasticsearch-reset-password -u elastic
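    To confirm the new password works, you can query the node over HTTPS (Elasticsearch 8.x enables TLS by default). YOUR_PASSWORD is a placeholder for the password printed above; -k skips CA verification and is only acceptable for a quick local check:

    ```shell
    # Basic health check against the local node (replace YOUR_PASSWORD)
    curl -k -u elastic:YOUR_PASSWORD https://127.0.0.1:9200
    ```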

    Remember to back up the original Elasticsearch configuration file, in case you need to restore it later and cannot.

    cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak

    Generate an enrollment token; it is required for verification the first time you log in.

    cd /usr/share/elasticsearch && bin/elasticsearch-create-enrollment-token --scope kibana

    Install Kibana

    sudo apt install kibana
    sudo systemctl enable kibana.service && sudo systemctl start kibana.service
    sudo systemctl restart kibana.service

    Generate the verification code that is requested after you submit the enrollment token

    cd /usr/share/kibana/ && bin/kibana-verification-code
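    As an alternative to pasting the enrollment token into the browser prompt, Kibana 8.x ships a setup helper that can consume the token on the command line; a sketch (the token value is a placeholder, and you should confirm the helper exists in your Kibana version):

    ```shell
    # TOKEN is the enrollment token generated on the Elasticsearch node (placeholder)
    TOKEN="paste-your-enrollment-token-here"
    cd /usr/share/kibana && sudo bin/kibana-setup --enrollment-token "$TOKEN"
    ```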

    Note that the "L" in ELK refers to Logstash; this article uses Filebeat as the collection tool instead.

    Logstash and Filebeat are both tools for collecting and shipping data, but they differ in functionality and usage. The main differences are:

    Logstash:

    Logstash is a powerful engine for collecting, transforming, and shipping data. Its main job is to gather data from different sources (logs, events, metrics, and so on), filter, parse, and transform it, and then send the processed data to a destination such as Elasticsearch, another storage system, or an analytics tool. Its main characteristics include:

    1. Data processing: Logstash offers a rich set of plugins for parsing, filtering, normalizing, and otherwise shaping data, ensuring it is properly processed before being shipped.
    2. Diverse data sources: Logstash can ingest from many sources, including log files, network traffic, and message queues, which makes it useful across a wide range of data types and formats.
    3. Data shipping: Logstash can send processed data to many destinations, such as Elasticsearch, files, or message queues, to suit different storage and analysis needs.
    4. Flexibility: Logstash configuration is very flexible; you can define every stage of the data pipeline in configuration files and build highly customized processing flows.

    Filebeat:

    Filebeat is a lightweight log shipper designed specifically to collect log data from the file system and forward it to central storage or an analysis system. Its main characteristics include:

    1. Lightweight: Filebeat is designed to use few resources, making it suitable for deployment in resource-constrained environments.
    2. Real-time: Filebeat monitors log files for changes and ships newly written content as soon as it appears.
    3. Simplified processing: Filebeat focuses on collecting and forwarding log data; its processing capabilities are limited, and it cannot perform the complex parsing and transformation that Logstash can.
    4. Easy deployment: because it is so lightweight, Filebeat is well suited to distributed deployment and easy scaling.

    In short, Logstash is better suited to scenarios that require complex processing and transformation, while Filebeat fits lightweight, real-time log shipping. In practice, you can use Logstash, Filebeat, or both together, depending on your requirements, to build a suitable data collection and transport pipeline.
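    To illustrate the lighter-weight path taken in this article, a minimal Filebeat configuration that ships a log file straight to Elasticsearch might look like the sketch below. The input id, log path, host, and credentials are illustrative assumptions, not values from this deployment:

    ```yaml
    # Minimal filebeat.yml sketch (illustrative, not a drop-in config)
    filebeat.inputs:
      - type: filestream
        id: my-app-logs            # hypothetical input id
        paths:
          - /var/log/myapp/*.log   # hypothetical log path

    output.elasticsearch:
      hosts: ["https://127.0.0.1:9200"]
      username: "elastic"
      password: "YOUR_PASSWORD"    # placeholder
      ssl.verification_mode: none  # lab shortcut; use CA verification in production
    ```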

    Install the Filebeat collector

    curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.9.0-amd64.deb
    sudo dpkg -i filebeat-8.9.0-amd64.deb
    sudo systemctl start filebeat && sudo systemctl enable filebeat
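    Before data shows up in Kibana, you typically enable a Filebeat module and load the bundled index templates and dashboards. A sketch, using the system module as an example:

    ```shell
    # Enable the system module (collects syslog/auth logs)
    sudo filebeat modules enable system
    # Load index templates and the sample dashboards into Kibana
    sudo filebeat setup
    # Restart so the new module takes effect
    sudo systemctl restart filebeat
    ```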

    After installation, check that each service reports a healthy status; next we move on to configuration.

    Elasticsearch configuration

    vi /etc/elasticsearch/elasticsearch.yml

    The key settings here are the bind address and the HTTP port

    network.host: 127.0.0.1
    http.port: 9200

    The full configuration is shown below, for reference only.

    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    # Before you set out to tweak and tune the configuration, make sure you
    # understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    #cluster.name: my-application
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    #node.name: node-1
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    path.data: /var/lib/elasticsearch
    #
    # Path to log files:
    #
    path.logs: /var/log/elasticsearch
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # By default Elasticsearch is only accessible on localhost. Set a different
    # address here to expose this node on the network:
    #
    #network.host: 192.168.0.1
    #
    # By default Elasticsearch listens for HTTP traffic on the first free port it
    # finds starting at 9200. Set a specific HTTP port here:
    #
    #http.port: 9200
    network.host: 127.0.0.1
    http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when this node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.seed_hosts: ["host1", "host2"]
    #
    # Bootstrap the cluster using an initial set of master-eligible nodes:
    #
    #cluster.initial_master_nodes: ["node-1", "node-2"]
    #
    # For more information, consult the discovery and cluster formation module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Allow wildcard deletion of indices:
    #
    #action.destructive_requires_name: false
    #----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
    #
    # The following settings, TLS certificates, and keys have been automatically
    # generated to configure Elasticsearch security features on 09-08-2023 02:38:11
    #
    # --------------------------------------------------------------------------------
    # Enable security features
    xpack.security.enabled: true
    xpack.security.enrollment.enabled: true
    # Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
    xpack.security.http.ssl:
      enabled: true
      keystore.path: certs/http.p12
    # Enable encryption and mutual authentication between cluster nodes
    xpack.security.transport.ssl:
      enabled: true
      verification_mode: certificate
      keystore.path: certs/transport.p12
      truststore.path: certs/transport.p12
    # Create a new cluster with the current node only
    # Additional nodes can still join the cluster later
    cluster.initial_master_nodes: ["ubuntu"]
    # Allow HTTP API connections from anywhere
    # Connections are encrypted and require user authentication
    http.host: 0.0.0.0
    #logger.org.elasticsearch: "ERROR"
    # Allow other nodes to join the cluster from anywhere
    # Connections are encrypted and mutually authenticated
    #transport.host: 0.0.0.0
    #----------------------- END SECURITY AUTO CONFIGURATION -------------------------
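    Configuration changes only take effect after the service is restarted, for example:

    ```shell
    sudo systemctl restart elasticsearch.service
    # Confirm the node came back up cleanly
    sudo systemctl status elasticsearch.service --no-pager
    ```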

    Kibana configuration (/etc/kibana/kibana.yml)

    # For more configuration options see the configuration guide for Kibana in
    # https://www.elastic.co/guide/index.html
    # =================== System: Kibana Server ===================
    # Kibana is served by a back end server. This setting specifies the port to use.
    #server.port: 5601
    # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
    # The default is 'localhost', which usually means remote machines will not be able to connect.
    # To allow connections from remote users, set this parameter to a non-loopback address.
    server.host: "123.58.97.169"
    # Enables you to specify a path to mount Kibana at if you are running behind a proxy.
    # Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
    # from requests it receives, and to prevent a deprecation warning at startup.
    # This setting cannot end in a slash.
    #server.basePath: ""
    # Specifies whether Kibana should rewrite requests that are prefixed with
    # `server.basePath` or require that they are rewritten by your reverse proxy.
    # Defaults to `false`.
    #server.rewriteBasePath: false
    # Specifies the public URL at which Kibana is available for end users. If
    # `server.basePath` is configured this URL should end with the same basePath.
    #server.publicBaseUrl: ""
    # The maximum payload size in bytes for incoming server requests.
    #server.maxPayload: 1048576
    # The Kibana server's name. This is used for display purposes.
    #server.name: "your-hostname"
    # =================== System: Kibana Server (Optional) ===================
    # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
    # These settings enable SSL for outgoing requests from the Kibana server to the browser.
    #server.ssl.enabled: false
    #server.ssl.certificate: /path/to/your/server.crt
    #server.ssl.key: /path/to/your/server.key
    # =================== System: Elasticsearch ===================
    # The URLs of the Elasticsearch instances to use for all your queries.
    #elasticsearch.hosts: ["http://localhost:9200"]
    # If your Elasticsearch is protected with basic authentication, these settings provide
    # the username and password that the Kibana server uses to perform maintenance on the Kibana
    # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
    # is proxied through the Kibana server.
    #elasticsearch.username: "kibana_system"
    #elasticsearch.password: "pass"
    # Kibana can also authenticate to Elasticsearch via "service account tokens".
    # Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
    # Use this token instead of a username/password.
    # elasticsearch.serviceAccountToken: "my_token"
    # Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
    # the elasticsearch.requestTimeout setting.
    #elasticsearch.pingTimeout: 1500
    # Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
    # must be a positive integer.
    #elasticsearch.requestTimeout: 30000
    # The maximum number of sockets that can be used for communications with elasticsearch.
    # Defaults to `Infinity`.
    #elasticsearch.maxSockets: 1024
    # Specifies whether Kibana should use compression for communications with elasticsearch
    # Defaults to `false`.
    #elasticsearch.compression: false
    # List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
    # headers, set this value to [] (an empty list).
    #elasticsearch.requestHeadersWhitelist: [ authorization ]
    # Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
    # by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
    #elasticsearch.customHeaders: {}
    # Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
    #elasticsearch.shardTimeout: 30000
    # =================== System: Elasticsearch (Optional) ===================
    # These files are used to verify the identity of Kibana to Elasticsearch and are required when
    # xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
    #elasticsearch.ssl.certificate: /path/to/your/client.crt
    #elasticsearch.ssl.key: /path/to/your/client.key
    # Enables you to specify a path to the PEM file for the certificate
    # authority for your Elasticsearch instance.
    #elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
    # To disregard the validity of SSL certificates, change this setting's value to 'none'.
    #elasticsearch.ssl.verificationMode: full
    # =================== System: Logging ===================
    # Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
    #logging.root.level: debug
    # Enables you to specify a file where Kibana stores log output.
    logging:
      appenders:
        file:
          type: file
          fileName: /var/log/kibana/kibana.log
          layout:
            type: json
      root:
        appenders:
          - default
          - file
    #    layout:
    #      type: json
    # Logs queries sent to Elasticsearch.
    #logging.loggers:
    #  - name: elasticsearch.query
    #    level: debug
    # Logs http responses.
    #logging.loggers:
    #  - name: http.server.response
    #    level: debug
    # Logs system usage information.
    #logging.loggers:
    #  - name: metrics.ops
    #    level: debug
    # =================== System: Other ===================
    # The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
    #path.data: data
    # Specifies the path where Kibana creates the process ID file.
    pid.file: /run/kibana/kibana.pid
    # Set the interval in milliseconds to sample system and process performance
    # metrics. Minimum is 100ms. Defaults to 5000ms.
    #ops.interval: 5000
    # Specifies locale to be used for all localizable strings, dates and number formats.
    # Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
    #i18n.locale: "en"
    # =================== Frequently used (Optional)===================
    # =================== Saved Objects: Migrations ===================
    # Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.
    # The number of documents migrated at a time.
    # If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
    # use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
    #migrations.batchSize: 1000
    # The maximum payload size for indexing batches of upgraded saved objects.
    # To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
    # This value should be lower than or equal to your Elasticsearch cluster's `http.max_content_length`
    # configuration option. Default: 100mb
    #migrations.maxBatchSizeBytes: 100mb
    # The number of times to retry temporary migration failures. Increase the setting
    # if migrations fail frequently with a message such as `Unable to complete the [...] step after
    # 15 attempts, terminating`. Defaults to 15
    #migrations.retryAttempts: 15
    # =================== Search Autocomplete ===================
    # Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
    # This value must be a whole number greater than zero. Defaults to 1000ms
    #unifiedSearch.autocomplete.valueSuggestions.timeout: 1000
    # Maximum number of documents loaded by each shard to generate autocomplete suggestions.
    # This value must be a whole number greater than zero. Defaults to 100_000
    #unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000
    i18n.locale: "zh-CN"
    # This section was automatically generated during setup.
    elasticsearch.hosts: ['https://123.58.97.169:9200']
    elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2OTE1NDk3NTYyNDE6NE55LU1IdVFRRTY0UkVpUloyZDhQdw
    elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1691549757740.crt]
    xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://123.58.97.169:9200'], ca_trusted_fingerprint: 27991095e8dddf17d06a00968bd1b693fc906ea2d52d9f5563134505625791f1}]
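    After editing the configuration, restart Kibana and browse to the configured address (the host below is this document's example value; the port defaults to 5601):

    ```shell
    sudo systemctl restart kibana.service
    # Kibana serves its UI on server.host at port 5601 by default, e.g.:
    # http://123.58.97.169:5601
    ```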

    FAQ

    1. Why doesn't my dashboard panel display anything after I add it?

    A: After making sure the index configuration is correct, don't forget to run "sudo filebeat setup" to initialize the dashboards; running that initialization fixes it.

    2. I installed Filebeat and enabled and configured the system module, but clicking "Check data" on the module status page shows "not connected".

    A: This happens when the filesets in Filebeat's modules.d/system.yml are not configured correctly, i.e. the log file paths cannot be found. After correcting the configuration, run systemctl status filebeat to check the service state and look for error logs.

    3. Why can't I delete an index in Index Management?

    A: To delete an index, first stop the data source service; for example, when using Filebeat, run systemctl stop filebeat first. Then, under Data Streams in Index Management, delete the data stream, which also deletes the indices backing it.
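    The same data-stream deletion can be done through the Elasticsearch API instead of the Kibana UI; a sketch in which the stream name and password are placeholders you must adjust:

    ```shell
    # Stop the shipper first so the stream is not immediately recreated
    sudo systemctl stop filebeat
    # Delete the data stream and its backing indices (name is a placeholder)
    curl -k -u elastic:YOUR_PASSWORD -X DELETE "https://127.0.0.1:9200/_data_stream/filebeat-8.9.0"
    ```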

  • Original article: https://blog.csdn.net/JackMaF/article/details/132688193