In many of our exercises, I need to simulate an environment for index lifecycle management (ILM). In such a cluster, we need to simulate a hot, warm, and cold architecture. So how do we go about simulating this?

We follow my earlier article “Elasticsearch:使用 Docker compose 来一键部署 Elastic Stack 8.x” and use Docker Compose for the deployment.
First, we modify the files as described below.
The .env file sets the environment variables that are used when running the docker-compose.yml configuration file. Make sure you specify passwords for the elastic and kibana_system users with the ELASTIC_PASSWORD and KIBANA_PASSWORD variables. These variables are referenced by the docker-compose.yml file.
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=password

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=password

# Version of Elastic products
STACK_VERSION=8.5.1

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
Above, we set the Elastic Stack version to the latest release, 8.5.1. In this file we also set the password for the elastic superuser.
The docker-compose.yml file creates a secure three-node Elasticsearch cluster with authentication and network encryption enabled, as well as a Kibana instance that connects to it securely. In our setup, we configure the three nodes as data_hot, data_warm, and data_cold respectively. For an introduction to data tiers, please read my other article “Elastic:Data tiers 介绍及索引生命周期管理 - 7.10 之后版本”.
The modified docker-compose.yml file is as follows:
docker-compose.yml
- version: "2.2"
-
- services:
- setup:
- image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
- volumes:
- - certs:/usr/share/elasticsearch/config/certs
- user: "0"
- command: >
- bash -c '
- if [ x${ELASTIC_PASSWORD} == x ]; then
- echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
- exit 1;
- elif [ x${KIBANA_PASSWORD} == x ]; then
- echo "Set the KIBANA_PASSWORD environment variable in the .env file";
- exit 1;
- fi;
- if [ ! -f certs/ca.zip ]; then
- echo "Creating CA";
- bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
- unzip config/certs/ca.zip -d config/certs;
- fi;
- if [ ! -f certs/certs.zip ]; then
- echo "Creating certs";
- echo -ne \
- "instances:\n"\
- " - name: es01\n"\
- " dns:\n"\
- " - es01\n"\
- " - localhost\n"\
- " ip:\n"\
- " - 127.0.0.1\n"\
- " - name: es02\n"\
- " dns:\n"\
- " - es02\n"\
- " - localhost\n"\
- " ip:\n"\
- " - 127.0.0.1\n"\
- " - name: es03\n"\
- " dns:\n"\
- " - es03\n"\
- " - localhost\n"\
- " ip:\n"\
- " - 127.0.0.1\n"\
- > config/certs/instances.yml;
- bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
- unzip config/certs/certs.zip -d config/certs;
- fi;
- echo "Setting file permissions"
- chown -R root:root config/certs;
- find . -type d -exec chmod 750 \{\} \;;
- find . -type f -exec chmod 640 \{\} \;;
- echo "Waiting for Elasticsearch availability";
- until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
- echo "Setting kibana_system password";
- until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
- echo "All done!";
- '
- healthcheck:
- test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
- interval: 1s
- timeout: 5s
- retries: 120
-
- es01:
- depends_on:
- setup:
- condition: service_healthy
- image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
- volumes:
- - certs:/usr/share/elasticsearch/config/certs
- - esdata01:/usr/share/elasticsearch/data
- ports:
- - ${ES_PORT}:9200
- environment:
- - node.name=es01
- - node.roles=data_hot,data_content,master,ingest
- - cluster.name=${CLUSTER_NAME}
- - cluster.initial_master_nodes=es01,es02,es03
- - discovery.seed_hosts=es02,es03
- - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- - bootstrap.memory_lock=true
- - xpack.security.enabled=true
- - xpack.security.http.ssl.enabled=true
- - xpack.security.http.ssl.key=certs/es01/es01.key
- - xpack.security.http.ssl.certificate=certs/es01/es01.crt
- - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- - xpack.security.http.ssl.verification_mode=certificate
- - xpack.security.transport.ssl.enabled=true
- - xpack.security.transport.ssl.key=certs/es01/es01.key
- - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- - xpack.security.transport.ssl.verification_mode=certificate
- - xpack.license.self_generated.type=${LICENSE}
- mem_limit: ${MEM_LIMIT}
- ulimits:
- memlock:
- soft: -1
- hard: -1
- healthcheck:
- test:
- [
- "CMD-SHELL",
- "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
- ]
- interval: 10s
- timeout: 10s
- retries: 120
-
- es02:
- depends_on:
- - es01
- image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
- volumes:
- - certs:/usr/share/elasticsearch/config/certs
- - esdata02:/usr/share/elasticsearch/data
- environment:
- - node.name=es02
- - node.roles=data_warm,data_content,master,ingest
- - cluster.name=${CLUSTER_NAME}
- - cluster.initial_master_nodes=es01,es02,es03
- - discovery.seed_hosts=es01,es03
- - bootstrap.memory_lock=true
- - xpack.security.enabled=true
- - xpack.security.http.ssl.enabled=true
- - xpack.security.http.ssl.key=certs/es02/es02.key
- - xpack.security.http.ssl.certificate=certs/es02/es02.crt
- - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- - xpack.security.http.ssl.verification_mode=certificate
- - xpack.security.transport.ssl.enabled=true
- - xpack.security.transport.ssl.key=certs/es02/es02.key
- - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
- - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- - xpack.security.transport.ssl.verification_mode=certificate
- - xpack.license.self_generated.type=${LICENSE}
- mem_limit: ${MEM_LIMIT}
- ulimits:
- memlock:
- soft: -1
- hard: -1
- healthcheck:
- test:
- [
- "CMD-SHELL",
- "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
- ]
- interval: 10s
- timeout: 10s
- retries: 120
-
- es03:
- depends_on:
- - es02
- image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
- volumes:
- - certs:/usr/share/elasticsearch/config/certs
- - esdata03:/usr/share/elasticsearch/data
- environment:
- - node.name=es03
- - node.roles=data_cold,data_content,master,ingest
- - cluster.name=${CLUSTER_NAME}
- - cluster.initial_master_nodes=es01,es02,es03
- - discovery.seed_hosts=es01,es02
- - bootstrap.memory_lock=true
- - xpack.security.enabled=true
- - xpack.security.http.ssl.enabled=true
- - xpack.security.http.ssl.key=certs/es03/es03.key
- - xpack.security.http.ssl.certificate=certs/es03/es03.crt
- - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- - xpack.security.http.ssl.verification_mode=certificate
- - xpack.security.transport.ssl.enabled=true
- - xpack.security.transport.ssl.key=certs/es03/es03.key
- - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
- - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- - xpack.security.transport.ssl.verification_mode=certificate
- - xpack.license.self_generated.type=${LICENSE}
- mem_limit: ${MEM_LIMIT}
- ulimits:
- memlock:
- soft: -1
- hard: -1
- healthcheck:
- test:
- [
- "CMD-SHELL",
- "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
- ]
- interval: 10s
- timeout: 10s
- retries: 120
-
- kibana:
- depends_on:
- es01:
- condition: service_healthy
- es02:
- condition: service_healthy
- es03:
- condition: service_healthy
- image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
- volumes:
- # - ./kibana.yml:/usr/share/kibana/config/kibana.yml
- - certs:/usr/share/kibana/config/certs
- - kibanadata:/usr/share/kibana/data
- ports:
- - ${KIBANA_PORT}:5601
- environment:
- - SERVERNAME=kibana
- - ELASTICSEARCH_HOSTS=https://es01:9200
- - ELASTICSEARCH_USERNAME=kibana_system
- - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
- mem_limit: ${MEM_LIMIT}
- healthcheck:
- test:
- [
- "CMD-SHELL",
- "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
- ]
- interval: 10s
- timeout: 10s
- retries: 120
-
- volumes:
- certs:
- driver: local
- esdata01:
- driver: local
- esdata02:
- driver: local
- esdata03:
- driver: local
- kibanadata:
- driver: local
At this point, we have created the following files:
$ pwd
/Users/liuxg/data/elastic8
$ ls -al
total 40
drwxr-xr-x    5 liuxg  staff   160 Oct 11 08:07 .
drwxr-xr-x  166 liuxg  staff  5312 Nov 15 14:32 ..
-rw-r--r--    1 liuxg  staff   728 Nov 17 07:13 .env
-rw-r--r--    1 liuxg  staff  8321 Nov 17 07:30 docker-compose.yml
We use the following command to start the Elasticsearch cluster:
docker-compose up
Or, if you want docker-compose to run in the background:
docker-compose up -d
Wait until we see the following screen:
We then open Kibana in a browser:
Here we enter the password password set above to log in:
In the Kibana console, we type the following command:
GET _cat/nodes
We can see the following result:
172.21.0.5 43 100 10 0.33 0.89 0.66 cims - es03
172.21.0.3 49  96  9 0.33 0.89 0.66 hims - es01
172.21.0.4 72 100 10 0.33 0.89 0.66 imsw * es02
If you do not want to use Kibana, you can also check with the curl command:
curl -k -u elastic:password https://localhost:9200/_cat/nodes
$ curl -k -u elastic:password https://localhost:9200/_cat/nodes
172.21.0.5 77 100 4 0.27 0.62 0.59 cims - es03
172.21.0.3 64  97 4 0.27 0.62 0.59 hims - es01
172.21.0.4 69 100 4 0.27 0.62 0.59 imsw * es02
Or, for a more explicit display with column headers:
GET _cat/nodes?v

From the output above, we can see each node's role. If you want to know what the letters in cims stand for, please refer to my other article “Elasticsearch:Node roles 介绍 - 7.9 之后版本”. In short, the relevant letters mean: h - hot node, w - warm node, c - cold node, s - content node, i - ingest node, m - master-eligible node.
For example, for the node es02, imsw means: ingest node, master-eligible node, content node, and warm node.
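If you only want to see the node names together with their roles, you can, for example, limit the _cat/nodes output to just those two columns:

GET _cat/nodes?v&h=name,node.role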
With this, we have created the hot, warm, and cold architecture Elasticsearch cluster that we need. We can now run our ILM (index lifecycle management) exercises against this cluster.
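As a starting point for those exercises, here is a minimal ILM policy sketch. The policy name demo-hot-warm-cold and the min_age values are made up for illustration; the policy rolls an index over in the hot phase and then relies on ILM's default migrate action to move it onto the data_warm and data_cold nodes we just configured:

PUT _ilm/policy/demo-hot-warm-cold
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "1d",
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "warm": {
        "min_age": "5m",
        "actions": {}
      },
      "cold": {
        "min_age": "10m",
        "actions": {}
      }
    }
  }
}

Because the warm and cold phases declare no explicit actions, ILM injects the migrate action automatically and relocates the shards across the data tiers. To actually exercise the policy you would attach it to a data stream or a rollover alias via the index.lifecycle.name setting, and you can then watch the shards move with GET _cat/shards?v.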