| Item | Value |
|---|---|
| CPU | Intel® Core™ i5-1035G1 CPU @ 1.00GHz |
| OS | CentOS Linux release 7.9.2009 (Core) |
| Memory | 4 GB |
| Logical cores | 3 |
| Existing node 1 IP | 192.168.142.10 |
| New node 2 IP | 192.168.142.11 |
| Database version | 8.6.2.43-R33.132743 |
[root@localhost ~]# gcadmin
CLUSTER STATE: ACTIVE
CLUSTER MODE: NORMAL
=====================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=====================================================================
| NodeName | IpAddress |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 | 192.168.142.10 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
=================================================================
| GBASE DATA CLUSTER INFORMATION |
=================================================================
|NodeName | IpAddress |gnode |syncserver |DataState |
-----------------------------------------------------------------
| node1 | 192.168.142.10 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
Because we are adding a coordinator (management) node, all node services must be stopped first; if you are only adding data nodes, the services can stay up. Trying to add a coordinator node without stopping the services produces the following error:
some gcluster process still running on host 192.168.142.10, use 'pidof gclusterd gbased corosync gcmonit gcrecover gc_sync_server;' to check.
Must stop all gcluster nodes before extend gcluster. you can search 'still running' in gcinstall.log to find them.
Run the stop command on every node.
[root@localhost ~]# service gcware stop
Stopping GCMonit success!
Signaling GCRECOVER (gcrecover) to terminate: [ 确定 ]
Waiting for gcrecover services to unload:.... [ 确定 ]
Signaling GCSYNC (gc_sync_server) to terminate: [ 确定 ]
Waiting for gc_sync_server services to unload: [ 确定 ]
Signaling GCLUSTERD to terminate: [ 确定 ]
Waiting for gclusterd services to unload:........ [ 确定 ]
Signaling GBASED to terminate: [ 确定 ]
Waiting for gbased services to unload:.... [ 确定 ]
Signaling GCWARE (gcware) to terminate: [ 确定 ]
Waiting for gcware services to unload:. [ 确定 ]
It is worth verifying on each node that nothing is still running, although this check is optional.
[root@localhost ~]# ps -ef|grep gbase
root 4177 3591 0 16:43 pts/0 00:00:00 grep --color=auto gbase
[root@localhost gcluster]# cd /opt/pkg/gcinstall/
[root@localhost gcinstall]# ll
total 93272
-rwxrwxrwx. 1 root  root       435 Aug  7 20:27 192.168.142.10.options
-rwxrwxrwx. 1 root  root       435 Aug  7 20:27 192.168.142.11.options
-rw-r--r--. 1 gbase gbase      292 Dec 17  2021 BUILDINFO
-rw-r--r--. 1 gbase gbase  2249884 Dec 17  2021 bundle_data.tar.bz2
-rw-r--r--. 1 gbase gbase 87478657 Dec 17  2021 bundle.tar.bz2
-rw-r--r--. 1 gbase gbase     1951 Dec 17  2021 CGConfigChecker.py
-rw-r--r--. 1 root  root      1895 Aug  7 20:27 CGConfigChecker.pyc
-rw-r--r--. 1 gbase gbase      309 Dec 17  2021 cluster.conf
-rwxr-xr-x. 1 gbase gbase     4167 Dec 17  2021 CorosyncConf.py
-rw-r--r--. 1 gbase gbase      420 Aug  7 20:26 demo.options
-rw-r--r--. 1 gbase gbase      154 Dec 17  2021 dependRpms
-rw-r--r--. 1 gbase gbase      684 Dec 17  2021 example.xml
-rwxr-xr-x. 1 gbase gbase      419 Dec 17  2021 extendCfg.xml
-rw-r--r--. 1 gbase gbase      781 Dec 17  2021 FileCheck.py
-rw-r--r--. 1 root  root      1173 Aug  7 20:27 FileCheck.pyc
-rw-r--r--. 1 gbase gbase     2700 Dec 17  2021 fulltext.py
-rw-r--r--. 1 gbase gbase  4818440 Dec 17  2021 gbase_data_timezone.sql
-rw-r--r--. 1 gbase gbase      137 Aug  7 20:29 gcChangeInfo.xml
-rwxrw-rw-. 1 root  root     13109 Aug  7 20:29 gcinstall.log
-rwxr-xr-x. 1 gbase gbase    76282 Dec 17  2021 gcinstall.py
-rwxrwxrwx. 1 gbase gbase     3362 Dec 17  2021 GetOSType.py
-rw-r--r--. 1 gbase gbase   156505 Dec 17  2021 InstallFuns.py
-rw-r--r--. 1 root  root    126295 Aug  7 20:27 InstallFuns.pyc
-rw-r--r--. 1 gbase gbase   237364 Dec 17  2021 InstallTar.py
-rw-r--r--. 1 gbase gbase     1114 Dec 17  2021 license.txt
-rwxr-xr-x. 1 gbase gbase      296 Dec 17  2021 loginUserPwd.json
-rwxr-xr-x. 1 gbase gbase    75990 Dec 17  2021 pexpect.py
-rw-r--r--. 1 root  root     63064 Aug  7 20:27 pexpect.pyc
-rwxr-xr-x. 1 gbase gbase    25093 Dec 17  2021 replace.py
-rw-r--r--. 1 gbase gbase     1715 Dec 17  2021 RestoreLocal.py
-rwxr-xr-x. 1 gbase gbase     6622 Dec 17  2021 Restore.py
-rw-r--r--. 1 gbase gbase     7312 Dec 17  2021 rmt.py
-rw-r--r--. 1 root  root      5625 Aug  7 20:27 rmt.pyc
-rwxr-xr-x. 1 gbase gbase      296 Dec 17  2021 rootPwd.json
-rw-r--r--. 1 gbase gbase     2717 Dec 17  2021 SSHThread.py
-rw-r--r--. 1 root  root      3823 Aug  7 20:27 SSHThread.pyc
-rwxr-xr-x. 1 gbase gbase    21710 Dec 17  2021 unInstall.py
-rw-r--r--. 1 root  root     17079 Aug  7 20:27 unInstall.pyc
[root@localhost gcinstall]# cat demo.options
installPrefix= /opt
coordinateHost = 192.168.142.11
coordinateHostNodeID = 234,235,237
dataHost = 192.168.142.11
existCoordinateHost = 192.168.142.10
existDataHost = 192.168.142.10
loginUser= root
loginUserPwd = 'qwer1234'
#loginUserPwdFile = loginUserPwd.json
dbaUser = gbase
dbaGroup = gbase
dbaPwd = 'gbase'
rootPwd = 'qwer1234'
#rootPwdFile = rootPwd.json
dbRootPwd = ''
#mcastAddr = 226.94.1.39
mcastPort = 5493
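The key point in demo.options for an expansion is that the new hosts go in `coordinateHost`/`dataHost`, while nodes already in the cluster go in `existCoordinateHost`/`existDataHost`. A quick sanity check before running the installer is to confirm the two lists do not overlap. A minimal sketch, assuming the field names shown above (the here-document stands in for the real file):

```shell
# Sketch: verify that the new hosts and existing hosts in demo.options are disjoint.
# The here-document stands in for the real options file shown above.
cat > demo.options <<'EOF'
coordinateHost = 192.168.142.11
existCoordinateHost = 192.168.142.10
EOF
# Extract the comma-separated host lists (coordinator lines only in this sketch).
new_hosts=$(sed -n 's/^coordinateHost *= *//p' demo.options | tr ',' ' ')
old_hosts=$(sed -n 's/^existCoordinateHost *= *//p' demo.options | tr ',' ' ')
overlap=""
for h in $new_hosts; do
  for o in $old_hosts; do
    if [ "$h" = "$o" ]; then overlap="$overlap $h"; fi
  done
done
if [ -z "$overlap" ]; then
  echo "OK: new and existing coordinator hosts are disjoint"
else
  echo "ERROR: listed as both new and existing:$overlap"
fi
```

A host listed in both fields would make the extend step behave unpredictably, so this is worth a few seconds before a long install run.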
[root@localhost gcinstall]# su - gbase
Last login: Fri Aug 12 09:10:09 CST 2022 on pts/2
[gbase@localhost gcinstall]$ ./gcinstall.py --silent=demo.options
*********************************************************************************
Thank you for choosing GBase product!
Please read carefully the following licencing agreement before installing GBase product:
TIANJIN GENERAL DATA TECHNOLOGY CO., LTD. LICENSE AGREEMENT
READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED SUPPLEMENTAL LICENSETERMS (COLLECTIVELY "AGREEMENT") CAREFULLY BEFORE OPENING THE SOFTWAREMEDIA PACKAGE. BY OPENING THE SOFTWARE MEDIA PACKAGE, YOU AGREE TO THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCESSING THE SOFTWARE ELECTRONICALLY, INDICATE YOUR ACCEPTANCE OF THESE TERMS. IF YOU DO NOT AGREE TO ALL THESE TERMS, PROMPTLY RETURN THE UNUSED SOFTWARE TO YOUR PLACE OF PURCHASE FOR A REFUND.
1. CHINESE GOVERNMENT RESTRICTED. If Software is being acquired by or on behalf of the Chinese Government , then the Government's rights in Software and accompanying documentation will be only as set forth in this Agreement.
2. GOVERNING LAW. Any action related to this Agreement will be governed by Chinese law: "COPYRIGHT LAW OF THE PEOPLE'S REPUBLIC OF CHINA","PATENT LAW OF THE PEOPLE'S REPUBLIC OF CHINA","TRADEMARK LAW OF THE PEOPLE'S REPUBLIC OF CHINA","COMPUTER SOFTWARE PROTECTION REGULATIONS OF THE PEOPLE'S REPUBLIC OF CHINA". No choice of law rules of any jurisdiction will apply."
*********************************************************************************
Do you accept the above licence agreement ([Y,y]/[N,n])? y
*********************************************************************************
Welcome to install GBase products
*********************************************************************************
Environmental Checking on gcluster nodes.
CoordinateHost:
192.168.142.11
DataHost:
192.168.142.11
Are you sure to install GCluster on these nodes ([Y,y]/[N,n])? y
192.168.142.11 Start install on host 192.168.142.11
192.168.142.10 Start install on host 192.168.142.10
192.168.142.11 mkdir /opt_prepare on host 192.168.142.11.
192.168.142.10 mkdir /opt_prepare on host 192.168.142.10.
192.168.142.11 Copying InstallTar.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying InstallTar.py to host 192.168.142.10:/opt_prepare
192.168.142.11 Copying InstallFuns.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying rmt.py to host 192.168.142.10:/opt_prepare
192.168.142.11 Copying SSHThread.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying SSHThread.py to host 192.168.142.10:/opt_prepare
192.168.142.11 Copying RestoreLocal.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying RestoreLocal.py to host 192.168.142.10:/opt_prepare
192.168.142.11 Copying pexpect.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying pexpect.py to host 192.168.142.10:/opt_prepare
192.168.142.11 Copying bundle.tar.bz2 to host 192.168.142.11:/opt_prepare
192.168.142.10 Updating corosync configure files.
192.168.142.11 Copying bundle_data.tar.bz2 to host 192.168.142.11:/opt_prepare
192.168.142.10 Install gcluster on host 192.168.142.10 successfully.
192.168.142.11 Installing gcluster.
...(the two progress lines above repeat while gcluster installs on 192.168.142.11; trimmed)...
192.168.142.11 Install gcluster on host 192.168.142.11 successfully.
192.168.142.10 Install gcluster on host 192.168.142.10 successfully.
Update and sync configuration file...
Starting all gcluster nodes...
Sync coordinator system tables...
check database password ...
check database password successful
check rsync command status
use rsync command sync metadata
Adding new datanodes to gcware...
ExtendCluster Successfully
[gbase@localhost gcinstall]$ gcadmin
CLUSTER STATE: ACTIVE
CLUSTER MODE: NORMAL
=====================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=====================================================================
| NodeName | IpAddress |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 | 192.168.142.10 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
| coordinator2 | 192.168.142.11 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
=================================================================
| GBASE DATA CLUSTER INFORMATION |
=================================================================
|NodeName | IpAddress |gnode |syncserver |DataState |
-----------------------------------------------------------------
| node1 | 192.168.142.10 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
| node2 | 192.168.142.11 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
[gbase@localhost gcinstall]$ gcadmin showdistribution
Distribution ID: 1 | State: new | Total segment num: 1
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.142.10 | 1 | |
========================================================================================================================
As you can see, the coordinator node was added successfully, but the new data node is not yet in any distribution; the original distribution has ID 1.
[gbase@localhost gcinstall]$ cat gcChangeInfo.xml
<?xml version="1.0" encoding="utf-8"?>
<servers>
<rack>
<node ip="192.168.142.11"/>
<node ip="192.168.142.10"/>
</rack>
</servers>
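gcChangeInfo.xml simply lists every data node, new and existing, inside one rack. If you manage many nodes, the file can be generated from a host list instead of edited by hand. A sketch, with the structure copied from the file above and this walkthrough's hosts:

```shell
# Generate gcChangeInfo.xml: all data nodes in a single rack.
# XML structure is taken from the file shown above; adjust the host list as needed.
hosts="192.168.142.11 192.168.142.10"
{
  echo '<?xml version="1.0" encoding="utf-8"?>'
  echo '<servers>'
  echo ' <rack>'
  for h in $hosts; do
    echo "  <node ip=\"$h\"/>"
  done
  echo ' </rack>'
  echo '</servers>'
} > gcChangeInfo.xml
cat gcChangeInfo.xml
```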
[gbase@localhost gcinstall]$ gcadmin distribution gcChangeInfo.xml p 1 d 0
gcadmin generate distribution ...
[warning]: parameter [d num] is 0, the new distribution will has no segment backup
please ensure this is ok, input y or n: y
NOTE: node [192.168.142.11] is coordinator node, it shall be data node too
copy system table from 192.168.142.10 to 192.168.142.11
source ip: 192.168.142.10
target ip: 192.168.142.11
gcadmin generate distribution successful
[gbase@localhost gcinstall]$ gcadmin showdistribution
Distribution ID: 2 | State: new | Total segment num: 2
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.142.11 | 1 | |
------------------------------------------------------------------------------------------------------------------------
| 192.168.142.10 | 2 | |
========================================================================================================================
Distribution ID: 1 | State: old | Total segment num: 1
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.142.10 | 1 | |
========================================================================================================================
We can see there are now two distributions; we will remove the old one later. In the command above, `p 1` requests one primary segment per node, and `d 0` requests no duplicate (backup) segments, which is what the warning is about.
[gbase@localhost gcinstall]$ gccli
GBase client 8.6.2.43-R33.132743. Copyright (c) 2004-2022, GBase. All Rights Reserved.
gbase> initnodedatamap;
Query OK, 0 rows affected, 1 warning (Elapsed: 00:00:00.76)
If you need to adjust rebalancing priority, first pause the rebalancing tasks by setting the concurrency to 0.
gbase> set global gcluster_rebalancing_concurrent_count = 0;
Query OK, 0 rows affected, 1 warning (Elapsed: 00:00:00.01)
Rebalancing is supported at three levels: instance, database, and table.
gbase> rebalance instance;
Query OK, 1 row affected (Elapsed: 00:00:00.25)
Adjust according to the actual on-site situation; this tuning step is optional.
gbase> select * from gclusterdb.rebalancing_status;
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
| index_name | db_name | table_name | tmptable | start_time | end_time | status | percentage | priority | host | distribution_id |
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
| czg.czg | czg | czg | NULL | 2022-08-12 10:04:58.762000 | NULL | STARTING | 0 | 5 | NULL | 2 |
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
1 row in set (Elapsed: 00:00:00.01)
A smaller priority value means higher priority. After changing it, raise the concurrency again so the tasks actually run.
gbase> update gclusterdb.rebalancing_status set priority = 3 where index_name like 'czg.czg';
Query OK, 1 row affected (Elapsed: 00:00:00.18)
Rows matched: 1 Changed: 1 Warnings: 0
gbase> select * from gclusterdb.rebalancing_status;
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
| index_name | db_name | table_name | tmptable | start_time | end_time | status | percentage | priority | host | distribution_id |
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
| czg.czg | czg | czg | NULL | 2022-08-12 10:04:58.762000 | NULL | STARTING | 0 | 3 | NULL | 2 |
+------------+---------+------------+----------+----------------------------+----------+----------+------------+----------+------+-----------------+
1 row in set (Elapsed: 00:00:00.01)
gbase> set global gcluster_rebalancing_concurrent_count = 5;
Query OK, 0 rows affected, 1 warning (Elapsed: 00:00:00.00)
gbase> select * from gclusterdb.rebalancing_status;
+------------+---------+------------+----------+----------------------------+----------------------------+-----------+------------+----------+-----------------------+-----------------+
| index_name | db_name | table_name | tmptable | start_time | end_time | status | percentage | priority | host | distribution_id |
+------------+---------+------------+----------+----------------------------+----------------------------+-----------+------------+----------+-----------------------+-----------------+
| czg.czg | czg | czg | | 2022-08-12 10:33:59.337000 | 2022-08-12 10:33:59.654000 | COMPLETED | 100 | 3 | ::ffff:192.168.142.10 | 2 |
+------------+---------+------------+----------+----------------------------+----------------------------+-----------+------------+----------+-----------------------+-----------------+
1 row in set (Elapsed: 00:00:00.01)
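For larger tables, rebalancing can take a while, so instead of re-running the SELECT by hand you can poll until nothing is left unfinished. A sketch of the loop shape; the stub function stands in for the real gccli query (shown in the comment), since the exact client invocation depends on your environment:

```shell
# Poll until no rebalancing task remains unfinished, then report.
# unfinished_count is a stub standing in for something like:
#   gccli -e "SELECT COUNT(*) FROM gclusterdb.rebalancing_status WHERE status <> 'COMPLETED'"
unfinished_count() {
  echo 0   # stub: pretend all tasks have completed
}
while [ "$(unfinished_count)" -ne 0 ]; do
  echo "rebalancing still running, waiting..."
  sleep 10
done
echo "rebalance finished"
```

The `status` and table names come from the `gclusterdb.rebalancing_status` output shown above; the COUNT query itself is an assumption about how you would aggregate it.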
gbase> select * from gbase.table_distribution where data_distribution_id=1;
Empty set (Elapsed: 00:00:00.01)
If any rows are returned, redistribute those tables with `rebalance table <database>.<table>`.
gbase> refreshnodedatamap drop 1;
Query OK, 0 rows affected, 1 warning (Elapsed: 00:00:01.00)
[gbase@localhost gcinstall]$ gcadmin rmdistribution 1
cluster distribution ID [1]
it will be removed now
please ensure this is ok, input y or n: y
gcadmin remove distribution [1] success
Every node is in a normal state and the distribution is the latest one, which means the expansion succeeded.
[gbase@localhost gcinstall]$ gcadmin
CLUSTER STATE: ACTIVE
CLUSTER MODE: NORMAL
=====================================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
=====================================================================
| NodeName | IpAddress |gcware |gcluster |DataState |
---------------------------------------------------------------------
| coordinator1 | 192.168.142.10 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
| coordinator2 | 192.168.142.11 | OPEN | OPEN | 0 |
---------------------------------------------------------------------
=================================================================
| GBASE DATA CLUSTER INFORMATION |
=================================================================
|NodeName | IpAddress |gnode |syncserver |DataState |
-----------------------------------------------------------------
| node1 | 192.168.142.10 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
| node2 | 192.168.142.11 | OPEN | OPEN | 0 |
-----------------------------------------------------------------
[gbase@localhost gcinstall]$ gcadmin showdistribution
Distribution ID: 2 | State: new | Total segment num: 2
Primary Segment Node IP Segment ID Duplicate Segment node IP
========================================================================================================================
| 192.168.142.11 | 1 | |
------------------------------------------------------------------------------------------------------------------------
| 192.168.142.10 | 2 | |
========================================================================================================================
Earlier, when expanding from two nodes to three (each node with 3 GB of memory and 2 logical cores), I hit the following problem during expansion.
[root@localhost ~]# su - gbase
Last login: Thu Aug 11 17:29:07 CST 2022 on pts/6
[gbase@localhost gcinstall]$ ./gcinstall.py --silent=demo.options
*********************************************************************************
Thank you for choosing GBase product!
...(license agreement text, identical to the first run; trimmed)...
*********************************************************************************
Do you accept the above licence agreement ([Y,y]/[N,n])? y
*********************************************************************************
Welcome to install GBase products
*********************************************************************************
Environmental Checking on gcluster nodes.
CoordinateHost:
192.168.142.12
DataHost:
192.168.142.12
Are you sure to install GCluster on these nodes ([Y,y]/[N,n])? y
192.168.142.12 Start install on host 192.168.142.12
192.168.142.11 Start install on host 192.168.142.11
192.168.142.10 Start install on host 192.168.142.10
192.168.142.12 mkdir /opt_prepare on host 192.168.142.12.
192.168.142.11 mkdir /opt_prepare on host 192.168.142.11.
192.168.142.10 mkdir /opt_prepare on host 192.168.142.10.
192.168.142.12 Copying InstallTar.py to host 192.168.142.12:/opt_prepare
192.168.142.11 Copying InstallTar.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying InstallTar.py to host 192.168.142.10:/opt_prepare
192.168.142.12 Copying InstallFuns.py to host 192.168.142.12:/opt_prepare
192.168.142.11 Copying InstallFuns.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying InstallFuns.py to host 192.168.142.10:/opt_prepare
192.168.142.12 Copying rmt.py to host 192.168.142.12:/opt_prepare
192.168.142.11 Copying rmt.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying rmt.py to host 192.168.142.10:/opt_prepare
192.168.142.12 Copying SSHThread.py to host 192.168.142.12:/opt_prepare
192.168.142.11 Copying SSHThread.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying SSHThread.py to host 192.168.142.10:/opt_prepare
192.168.142.12 Copying RestoreLocal.py to host 192.168.142.12:/opt_prepare
192.168.142.11 Copying RestoreLocal.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying RestoreLocal.py to host 192.168.142.10:/opt_prepare
192.168.142.12 Copying pexpect.py to host 192.168.142.12:/opt_prepare
192.168.142.11 Copying pexpect.py to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying pexpect.py to host 192.168.142.10:/opt_prepare
192.168.142.12 Copying BUILDINFO to host 192.168.142.12:/opt_prepare
192.168.142.11 Copying BUILDINFO to host 192.168.142.11:/opt_prepare
192.168.142.10 Copying BUILDINFO to host 192.168.142.10:/opt_prepare
192.168.142.12 Copying bundle.tar.bz2 to host 192.168.142.12:/opt_prepare
192.168.142.11 Updating corosync configure files.
192.168.142.10 Updating corosync configure files.
192.168.142.12 Copying bundle_data.tar.bz2 to host 192.168.142.12:/opt_prepare
192.168.142.11 Install gcluster on host 192.168.142.11 successfully.
192.168.142.10 Install gcluster on host 192.168.142.10 successfully.
192.168.142.12 Installing gcluster.
...(these progress lines repeat while gcluster installs on 192.168.142.12; trimmed)...
192.168.142.12 Install gcluster on host 192.168.142.12 successfully.
192.168.142.11 Install gcluster on host 192.168.142.11 successfully.
192.168.142.10 Install gcluster on host 192.168.142.10 successfully.
Update and sync configuration file...
Starting all gcluster nodes...
Sync coordinator system tables...
check database password ...
The installer hung here with no error on screen, and the install log in the background showed no errors either; it was stuck trying to log in to the database.
[root@localhost ~]# tail -f /opt/pkg/gcinstall/gcinstall.log
2022-08-11 17:28:52,297-root-DEBUG rm -f /opt/pkg/gcinstall/corosync.conf192.168.142.12
2022-08-11 17:28:52,812-root-INFO sync corosync conf successfully.
2022-08-11 17:28:52,812-root-DEBUG Starting all gcluster nodes...
2022-08-11 17:28:59,753-root-INFO start service successfull on host 192.168.142.12.
2022-08-11 17:29:08,441-root-INFO start service successfull on host 192.168.142.10.
2022-08-11 17:29:10,182-root-INFO start service successfull on host 192.168.142.11.
2022-08-11 17:29:10,686-root-DEBUG /bin/chown -R gbase:gbase gcChangeInfo.xml
2022-08-11 17:29:10,730-root-DEBUG Sync coordinator system tables...
2022-08-11 17:29:10,730-root-INFO check database password ...
2022-08-11 17:29:10,730-root-INFO gccli -uroot -p'***' -e'use gbase'
Checking the /opt/gcluster/log/gcluster/express.log log shows repeated errors calling the gcClmClusterTrack function.
[root@localhost ~]# tail -f /opt/gcluster/log/gcluster/express.log
2022-08-11 17:51:43.319 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail
2022-08-11 17:51:44.322 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail
2022-08-11 17:51:45.325 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail
...(the same error repeats every second; trimmed)...
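When express.log fills with a repeating error like this, a quick way to see what is failing and how often is to aggregate the messages with timestamps stripped. A sketch, using a here-document in place of the real log file:

```shell
# Count occurrences of each distinct ERROR message, ignoring timestamps.
# The here-document stands in for /opt/gcluster/log/gcluster/express.log.
cat > express.log <<'EOF'
2022-08-11 17:51:43.319 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail
2022-08-11 17:51:44.322 [ERROR] <HAEventHandler::HaEventMonitorThreadProc>: HA event monitor thread call gcClmClusterTrack function fail
EOF
# Fields 1-2 are the date and time; keep everything from "[ERROR]" onward.
grep '\[ERROR\]' express.log | cut -d' ' -f3- | sort | uniq -c | sort -rn
```

On a real node, point the pipeline at the actual log path; a single message dominating the count (as gcClmClusterTrack does here) is a good sign the hang has one root cause rather than many.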