As a business grows, its systems place changing demands on the database architecture: read load balancing, data center relocation, server hardware replacement, and so on. These often require scaling the original primary/standby deployment out or in. MogDB currently supports three installation methods: manual installation (non-OM), standard installation (OM), and PTK installation.
- Manual installation: applies to MogDB/openGauss clusters. To scale out, initialize the new node, edit the parameter files, and build it as a standby; to scale in, simply remove the standby node.
- Standard installation: applies to MogDB/openGauss clusters. Modify the XML configuration file and operate through the gs_expansion/gs_dropnode tools; nodes cannot be added or removed directly.
- PTK installation: supported for MogDB clusters only, starting with PTK 0.3. The ptk cluster scale-out/scale-in commands (see ptk cluster scale-out/scale-in -h) complete scale-out and scale-in quickly and conveniently.
Tool Overview
PTK is MogDB's dedicated deployment and management tool; it supports cluster scale-out and scale-in starting with version 0.3, covered in the PTK sections below.
gs_expansion
The gs_expansion tool scales out a database by adding standbys. It supports expanding from a single instance, or from one primary with multiple standbys, up to one primary with eight standbys (cascaded standbys included).
Notes
- The synchronous_standby_names parameter is not updated automatically after scale-out. If the newly added machines should be included in this parameter, update it manually once the scale-out completes.
- Before adding a cascaded standby, make sure the original cluster already has a healthy standby in the same AZ (Availability Zone), or add a standby in the same AZ as part of the same scale-out.
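As a sketch of the first note's manual step (the data directory, standby names, and the ANY-1 sync policy here are illustrative assumptions, not values taken from a real cluster), the parameter value can be assembled and then reloaded on the primary with gs_guc:

```shell
# Assumed values: adjust the data directory and standby list to your cluster.
PGDATA=/data/mogdb
STANDBYS="dn_6002,dn_6003"        # include the newly added standby
SYNC_NAMES="ANY 1($STANDBYS)"     # commit waits for any one listed standby
echo "$SYNC_NAMES"
# On the primary, as the database admin user, reload without a restart:
# gs_guc reload -D "$PGDATA" -c "synchronous_standby_names='$SYNC_NAMES'"
```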
gs_dropnode
The gs_dropnode tool removes unneeded standbys from a one-primary-multiple-standby database; the cluster can be reduced all the way down to a single instance.
Notes
- Only standbys in primary/standby instances installed with OM can be removed; clusters built from a compiled installation are not supported.
- When removing a standby that is still reachable, the tool automatically stops the database service running on it and deletes its GRPC certificates (located in $GAUSSHOME/share/sslcert/grpc/), but does not remove the application from the standby.
- If the target standby is unreachable before the operation, manually stop or delete its database service and remove its GRPC certificates once the standby recovers.
- If only the primary remains after the removal, the tool suggests restarting it; decide whether to do so based on the current workload.
- If the removed standby was in synchronous replication mode and transactions run on the primary while the drop command executes, commits may stall briefly; transaction processing resumes once the removal completes.
- To use a removed node as a standby again, add it back to the cluster with the gs_expansion command.
- If the removed node is no longer needed, uninstall it locally with gs_uninstall --delete-data -L; be sure to include the -L option.
- If it is unclear whether the removed node will be needed again, reject remote SSH connections from it with one of the following methods to avoid accidental operations on it.
- Method 1: on the current primary, as root, edit /etc/ssh/sshd_config and add DenyUsers omm@10.11.12.13 (append to an existing DenyUsers record if there is one), then restart the ssh service for the change to take effect. This prevents the omm user on the target standby from connecting to this host.
- Method 2: on the current primary, add the target standby to /etc/hosts.deny (for example: sshd:10.11.12.13:deny) to reject its SSH connections for all users. This requires the system sshd service to be linked against libwrap.
- To reuse a removed node as a standalone instance without keeping its data, uninstall with gs_uninstall --delete-data -L and reinstall. To keep the data, first run gs_guc set -D /gaussdb/data/dbnode -c "replconninfoX" on the node, where:
- /gaussdb/data/dbnode is the data directory, and
- replconninfoX stands for each of the other nodes in the primary/standby cluster:
with one primary and one standby, configure replconninfo1; with one primary and two standbys, configure replconninfo1 and replconninfo2, and so on.
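A minimal sketch of that cleanup, assuming the /gaussdb/data/dbnode directory above and a cluster that had one primary and two standbys (so the removed node holds two peer entries):

```shell
# Assumed data directory and peer count; gs_guc set with no value given
# comments the named parameter out in postgresql.conf.
PGDATA=/gaussdb/data/dbnode
N_PEERS=2
CMDS=$(for i in $(seq 1 $N_PEERS); do
  echo "gs_guc set -D $PGDATA -c \"replconninfo$i\""
done)
echo "$CMDS"
```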
Example
Environment check
Cluster status
[omm@node1 ~]$ gs_om -t status --detail
[ Cluster State ]
cluster_state : Normal
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
----------------------------------------------------------------------------
1 node1 192.168.122.221 25000 6001 /data/mogdb P Primary Normal
2 node2 192.168.122.157 25000 6002 /data/mogdb S Standby Normal
XML configuration file
<?xml version="1.0" encoding="UTF-8"?>
<ROOT>
<CLUSTER>
<PARAM name="clusterName" value="dbCluster" />
<PARAM name="nodeNames" value="node1,node2" />
<PARAM name="backIp1s" value="192.168.122.221,192.168.122.157"/>
<PARAM name="gaussdbAppPath" value="/opt/mogdb/app" />
<PARAM name="gaussdbLogPath" value="/var/log/mogdb" />
<PARAM name="gaussdbToolPath" value="/opt/mogdb/tools" />
<PARAM name="corePath" value="/opt/mogdb/corefile"/>
<PARAM name="clusterType" value="single-inst"/>
</CLUSTER>
<DEVICELIST>
<DEVICE sn="1000001">
<PARAM name="name" value="node1"/>
<PARAM name="azName" value="AZ1"/>
<PARAM name="azPriority" value="1"/>
<PARAM name="backIp1" value="192.168.122.221"/>
<PARAM name="sshIp1" value="192.168.122.221"/>
<PARAM name="dataNum" value="1"/>
<PARAM name="dataPortBase" value="25000"/>
<PARAM name="dataNode1" value="/data/mogdb,node2,/data/mogdb"/>
</DEVICE>
<DEVICE sn="1000002">
<PARAM name="name" value="node2"/>
<PARAM name="azName" value="AZ1"/>
<PARAM name="azPriority" value="1"/>
<PARAM name="backIp1" value="192.168.122.157"/>
<PARAM name="sshIp1" value="192.168.122.157"/>
</DEVICE>
</DEVICELIST>
</ROOT>
Scale-out
Prerequisites
- The operating system of the new standby must match the primary's.
- Create the same user and group on the new standby as on the primary.
- Establish SSH trust for root, and for the database admin user (e.g. omm), between the existing nodes and the new node.
- Configure the XML file correctly: add the new standby's information on top of the existing cluster configuration.
- Scale-out can only be executed on the primary node, as root, using the gs_expansion command in the script directory of the unpacked MogDB package.
- Before running the command, source the primary's database environment variables, typically from /home/[user]/.bashrc.
- Do not run it at the same time as gs_dropnode.
- Do not run the same gs_expansion command concurrently.
- Do not perform a switchover or failover on other standby nodes during the operation.
Prepare the new node
Node to add: 192.168.122.68
Refer to Operating System Configuration.
--Create the omm user and group
[root@node3 ~]# groupadd dbgrp
[root@node3 ~]# useradd -g dbgrp omm
[root@node3 ~]# passwd omm
--Establish mutual trust; log in to each node once first to accept host keys
[root@node2 ~]# scp -r .ssh root@192.168.122.68:/root
[omm@node2 ~]$ scp -r .ssh omm@192.168.122.68:/home/omm/
--The python3 versions must match; reinstall if they differ
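The python3 check can be scripted; here is a hedged sketch (the remote node address and user are assumptions, and the remote half is commented out since it needs live SSH access):

```shell
# Capture the local python3 version to compare with the new node's.
LOCAL_VER=$(python3 --version 2>&1 | awk '{print $2}')
echo "local python3: $LOCAL_VER"
# REMOTE_VER=$(ssh omm@192.168.122.68 'python3 --version' 2>&1 | awk '{print $2}')
# [ "$LOCAL_VER" = "$REMOTE_VER" ] || echo "python3 versions differ; reinstall to match"
```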
Configure the XML file
<?xml version="1.0" encoding="UTF-8"?>
<ROOT>
<CLUSTER>
<PARAM name="clusterName" value="dbCluster" />
<PARAM name="nodeNames" value="node1,node2,node3" />
<PARAM name="backIp1s" value="192.168.122.221,192.168.122.157,192.168.122.68"/>
<PARAM name="gaussdbAppPath" value="/opt/mogdb/app" />
<PARAM name="gaussdbLogPath" value="/var/log/mogdb" />
<PARAM name="gaussdbToolPath" value="/opt/mogdb/tools" />
<PARAM name="corePath" value="/opt/mogdb/corefile"/>
<PARAM name="clusterType" value="single-inst"/>
</CLUSTER>
<DEVICELIST>
<DEVICE sn="1000001">
<PARAM name="name" value="node1"/>
<PARAM name="azName" value="AZ1"/>
<PARAM name="azPriority" value="1"/>
<PARAM name="backIp1" value="192.168.122.221"/>
<PARAM name="sshIp1" value="192.168.122.221"/>
<PARAM name="dataNum" value="1"/>
<PARAM name="dataPortBase" value="25000"/>
<PARAM name="dataNode1" value="/data/mogdb,node2,/data/mogdb,node3,/data/mogdb"/>
</DEVICE>
<DEVICE sn="1000002">
<PARAM name="name" value="node2"/>
<PARAM name="azName" value="AZ1"/>
<PARAM name="azPriority" value="1"/>
<PARAM name="backIp1" value="192.168.122.157"/>
<PARAM name="sshIp1" value="192.168.122.157"/>
</DEVICE>
<DEVICE sn="1000003">
<PARAM name="name" value="node3"/>
<PARAM name="azName" value="AZ1"/>
<PARAM name="azPriority" value="1"/>
<PARAM name="backIp1" value="192.168.122.68"/>
<PARAM name="sshIp1" value="192.168.122.68"/>
</DEVICE>
</DEVICELIST>
</ROOT>
Cluster scale-out
[root@node1 ~]# cd /opt/mogdb300
[root@node1 mogdb300]# source /home/omm/.bashrc
[root@node1 mogdb300]# ./script/gs_expansion -U omm -G dbgrp -X /opt/mogdb300/config.xml -h 192.168.122.68
Start expansion without cluster manager component.
Start to preinstall database on new nodes.
Start to send soft to each standby nodes.
End to send soft to each standby nodes.
Start to preinstall database step.
Preinstall 192.168.122.68 success
End to preinstall database step.
End to preinstall database on new nodes.
Start to install database on new nodes.
Installing database on node 192.168.122.68:
Parsing the configuration file.
Check preinstall on every node.
Successfully checked preinstall on every node.
Creating the backup directory.
Successfully created the backup directory.
begin deploy..
Installing the cluster.
begin prepare Install Cluster..
Checking the installation environment on all nodes.
begin install Cluster..
Installing applications on all nodes.
Successfully installed APP.
begin init Instance..
encrypt cipher and rand files for database.
Please enter password for database:
Please repeat for database:
begin to create CA cert files
The sslcert will be generated in /opt/mogdb/app/share/sslcert/om
NO cm_server instance, no need to create CA for CM.
Cluster installation is completed.
Configuring.
Deleting instances from all nodes.
Successfully deleted instances from all nodes.
Checking node configuration on all nodes.
Initializing instances on all nodes.
Updating instance configuration on all nodes.
Check consistence of memCheck and coresCheck on database nodes.
Configuring pg_hba on all nodes.
Configuration is completed.
Successfully started cluster.
Successfully installed application.
end deploy..
192.168.122.68 install success.
Finish to install database on all nodes.
Database on standby nodes installed finished.
Checking mogdb and gs_om version.
End to check mogdb and gs_om version.
Start to establish the relationship.
Start to build standby 192.168.122.68.
Build standby 192.168.122.68 success.
Start to generate and send cluster static file.
End to generate and send cluster static file.
Expansion results:
192.168.122.68: Success
Expansion Finish.
Verify the scale-out
--Query on the primary node
[root@node1 mogdb300]# su - omm
[omm@node1 ~]$ gs_om -t status --detail
[ Cluster State ]
cluster_state : Normal
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
----------------------------------------------------------------------------
1 node1 192.168.122.221 25000 6001 /data/mogdb P Primary Normal
2 node2 192.168.122.157 25000 6002 /data/mogdb S Standby Normal
3 node3 192.168.122.68 25000 6003 /data/mogdb S Standby Normal
--Query on the new node
[root@node3 ~]# su - omm
Last login: Fri Aug 5 10:16:20 HKT 2022 from node1 on pts/1
[omm@node3 ~]$ gs_ctl query -D /data/mogdb
[2022-08-05 10:24:17.047][17791][][gs_ctl]: gs_ctl query ,datadir is /data/mogdb
HA state:
local_role : Standby
static_connections : 2
db_state : Normal
detail_information : Normal
Senders info:
No information
Receiver info:
receiver_pid : 6141
local_role : Standby
peer_role : Primary
peer_state : Normal
state : Normal
sender_sent_location : 0/6000808
sender_write_location : 0/6000808
sender_flush_location : 0/6000808
sender_replay_location : 0/6000808
receiver_received_location : 0/6000808
receiver_write_location : 0/6000808
receiver_flush_location : 0/6000808
receiver_replay_location : 0/6000808
sync_percent : 100%
channel : 192.168.122.68:44046<--192.168.122.221:25001
[omm@node3 ~]$
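A verification like the one above can be automated; this is a hedged sketch that scans gs_om datanode lines for any instance whose state is not Normal (the sample lines are copied from the status output above; in practice, pipe in live output from gs_om -t status --detail):

```shell
# Datanode lines as printed by gs_om -t status --detail (sample data).
STATUS='1 node1 192.168.122.221 25000 6001 /data/mogdb P Primary Normal
2 node2 192.168.122.157 25000 6002 /data/mogdb S Standby Normal
3 node3 192.168.122.68 25000 6003 /data/mogdb S Standby Normal'
# Count rows whose last field is not "Normal"; zero means all instances are healthy.
ABNORMAL=$(printf '%s\n' "$STATUS" | awk '$NF != "Normal"' | wc -l)
echo "abnormal instances: $ABNORMAL"
```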
Scale-in
Prerequisites
- Before starting, ensure that SSH trust for the omm user (the database admin user) between the primary and standbys is working.
- Dropping a standby can only be executed on the primary node, as the database admin user (e.g. omm).
- Do not run it at the same time as gs_expansion.
- Do not run the same gs_dropnode command concurrently.
- Do not perform a switchover or failover on other standby nodes at the same time.
- Before running the command, source the primary's database environment variables. If the database was installed with separated environment variables, source the separated file; otherwise source the sub-user's .bashrc, typically /home/[user]/.bashrc.
Cluster scale-in
We now remove the node that was just added, treating it as the target standby. To prevent accidental operations afterwards, delete the SSH trust between the target standby and the other cluster nodes; see the Notes above for the procedure.
--Run on the primary
[omm@node1 ~]$ gs_om -t status --detail
[ Cluster State ]
cluster_state : Normal
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
----------------------------------------------------------------------------
1 node1 192.168.122.221 25000 6001 /data/mogdb P Primary Normal
2 node2 192.168.122.157 25000 6002 /data/mogdb S Standby Normal
3 node3 192.168.122.68 25000 6003 /data/mogdb S Standby Normal
[omm@node1 ~]$ gs_dropnode -U omm -G dbgrp -h 192.168.122.68
The target node to be dropped is (['node3'])
Do you want to continue to drop the target node (yes/no)?yes
Drop node start without CM node.
[gs_dropnode]Start to drop nodes of the cluster.
[gs_dropnode]Start to stop the target node node3.
[gs_dropnode]End of stop the target node node3.
[gs_dropnode]Start to backup parameter config file on node1.
[gs_dropnode]End to backup parameter config file on node1.
[gs_dropnode]The backup file of node1 is /opt/mogdb/tools/omm_mppdb/gs_dropnode_backup20220805102606/parameter_node1.tar
[gs_dropnode]Start to parse parameter config file on node1.
[gs_dropnode]End to parse parameter config file on node1.
[gs_dropnode]Start to parse backup parameter config file on node1.
[gs_dropnode]End to parse backup parameter config file node1.
[gs_dropnode]Start to set openGauss config file on node1.
[gs_dropnode]End of set openGauss config file on node1.
[gs_dropnode]Start to backup parameter config file on node2.
[gs_dropnode]End to backup parameter config file on node2.
[gs_dropnode]The backup file of node2 is /opt/mogdb/tools/omm_mppdb/gs_dropnode_backup20220805102607/parameter_node2.tar
[gs_dropnode]Start to parse parameter config file on node2.
[gs_dropnode]End to parse parameter config file on node2.
[gs_dropnode]Start to parse backup parameter config file on node2.
[gs_dropnode]End to parse backup parameter config file node2.
[gs_dropnode]Start to set openGauss config file on node2.
[gs_dropnode]End of set openGauss config file on node2.
[gs_dropnode]Start of set pg_hba config file on node1.
[gs_dropnode]End of set pg_hba config file on node1.
[gs_dropnode]Start of set pg_hba config file on node2.
[gs_dropnode]End of set pg_hba config file on node2.
[gs_dropnode]Start to set repl slot on node1.
[gs_dropnode]Start to get repl slot on node1.
[gs_dropnode]End of set repl slot on node1.
[gs_dropnode]Start to modify the cluster static conf.
[gs_dropnode]End of modify the cluster static conf.
[gs_dropnode]Success to drop the target nodes.
[omm@node1 ~]$ gs_om -t status --detail
[ Cluster State ]
cluster_state : Normal
redistributing : No
current_az : AZ_ALL
[ Datanode State ]
node node_ip port instance state
----------------------------------------------------------------------------
1 node1 192.168.122.221 25000 6001 /data/mogdb P Primary Normal
2 node2 192.168.122.157 25000 6002 /data/mogdb S Standby Normal
[omm@node1 ~]$
Running the dropped standby standalone (optional)
gs_dropnode removed node3 from the cluster and stopped its database instance, but the data directory is preserved and the replconninfo entries in its configuration file have not been cleaned up.
--Check the status
[omm@node3 ~]$ gs_ctl query -D /data/mogdb
[2022-08-05 10:27:53.100][24663][][gs_ctl]: gs_ctl query ,datadir is /data/mogdb
[2022-08-05 10:27:53.100][24663][][gs_ctl]: PID file "/data/mogdb/postmaster.pid" does not exist
[2022-08-05 10:27:53.100][24663][][gs_ctl]: Is server running?
[omm@node3 ~]$ cat /data/mogdb/postgresql.conf |grep -i replconninfo
replconninfo1 = 'localhost=192.168.122.68 localport=25001 localheartbeatport=25003 localservice=25004 remotehost=192.168.122.157 remoteport=25001 remoteheartbeatport=25003 remoteservice=25004'
replconninfo2 = 'localhost=192.168.122.68 localport=25001 localheartbeatport=25003 localservice=25004 remotehost=192.168.122.221 remoteport=25001 remoteheartbeatport=25003 remoteservice=25004'
--Comment out the replication entries
[omm@node3 ~]$ gs_guc set -D /data/mogdb/ -c "replconninfo1"
[omm@node3 ~]$ gs_guc set -D /data/mogdb/ -c "replconninfo2"
[omm@node3 ~]$ cat /data/mogdb/postgresql.conf |grep -i replconninfo
#replconninfo1 = 'localhost=192.168.122.68 localport=25001 localheartbeatport=25003 localservice=25004 remotehost=192.168.122.157 remoteport=25001 remoteheartbeatport=25003 remoteservice=25004'
#replconninfo2 = 'localhost=192.168.122.68 localport=25001 localheartbeatport=25003 localservice=25004 remotehost=192.168.122.221 remoteport=25001 remoteheartbeatport=25003 remoteservice=25004'
--Start the database
[omm@node3 ~]$ gs_ctl -D /data/mogdb start
[omm@node3 ~]$ gsql -p 25000 postgres -r
gsql ((MogDB 3.0.0 build 62408a0f) compiled at 2022-06-30 14:21:11 commit 0 last mr )
Non-SSL connection (SSL connection is recommended when requiring high-security)
Type "help" for help.
MogDB=# select pg_is_in_recovery();
pg_is_in_recovery
-------------------
f
(1 row)
MogDB=#
Clean up data on the dropped standby (optional)
[omm@node3 ~]$ ls /data/mogdb
backup_label.old gaussdb.state mot.conf pg_hba.conf pg_location pg_serial PG_VERSION postmaster.opts server.key
base global pg_clog pg_hba.conf.bak pg_logical pg_snapshots pg_xlog postmaster.pid server.key.cipher
build_completed.done gs_build.pid pg_csnlog pg_hba.conf.lock pg_multixact pg_stat_tmp postgresql.conf postmaster.pid.lock server.key.rand
cacert.pem gs_gazelle.conf pg_ctl.lock pg_ident.conf pg_notify pg_tblspc postgresql.conf.bak rewind_lable undo
full_backup_label gswlm_userinfo.cfg pg_errorinfo pg_llog pg_replslot pg_twophase postgresql.conf.lock server.crt
--Delete the data directory
[omm@node3 ~]$ gs_uninstall --delete-data -L
Checking uninstallation.
Successfully checked uninstallation.
Stopping the cluster.
Successfully stopped the cluster.
Successfully deleted instances.
Uninstalling application.
Successfully uninstalled application.
Uninstallation succeeded.
[omm@node3 ~]$ ls /data/mogdb
[omm@node3 ~]$
Scale-out for a PTK-installed cluster
[root@node1 .ptk]# ptk cluster scale-out -h
Scale out a MogDB cluster
Usage:
ptk cluster scale-out [flags]
Examples:
ptk cluster -n CLUSTER_NAME scale-out -c add.yaml [--force] [--skip-check-distro] [--skip-check-os] [--skip-create-user]
Flags:
-c, --config string Scale config path
--default-guc Disable optimize guc config, use default value
--force If scale operation had failed or interruptted. you can use --force to scale again. it will clear the old dirty directory
--gen-template Generate a scale add template config
-h, --help help for scale-out
-n, --name string Cluster name
--skip-check-distro Skip check distro
--skip-check-os Skip check os
--skip-create-user Skip create user
-t, --timeout duration Opration timeout (default 10m0s)
Global Flags:
-f, --file string Specify a configuration file of cluster
--log-file string Specify a log output file
--log-format string Specify the log message format. Options: [text, json] (default "text")
--log-level string Specify the log level. Options: [debug, info, warning, error, panic] (default "info")
-v, --version Print version of ptk
Check the cluster status
[root@node1 ~]# ptk cluster -n M30 status
[ Cluster State ]
database_version : MogDB-MogDB
cluster_name : M30
cluster_state : Normal
current_az : AZ_ALL
[ Datanode State ]
id | ip | port | user | instance | db_role | state
-------+-----------------+-------+------+----------+---------+---------
6001 | 192.168.122.221 | 25000 | omm | dn_6001 | primary | Normal
6002 | 192.168.122.157 | 25000 | omm | dn_6002 | standby | Normal
Generate the scale-out configuration file
[root@node1 .ptk]# ptk cluster -n M30 scale-out --gen-template > add.yaml
[root@node1 .ptk]# cat add.yaml
- host: 192.168.122.68
db_port: 25000
role: standby
ssh_option:
host: 192.168.122.68
port: 22
user: root
password: "pTk6MDQ2Y2U0ZDE8QzxCPEU/RE8ycy1UZFpEZ0xSMU9PQzRZMkpoY2JuT0x2Z05FbG9pZDlBMm5hZlFEVzQ="
Cluster scale-out
[root@node1 .ptk]# ptk cluster -n M30 scale-out -c add.yaml
scale [stage=preCheck]
INFO[2022-08-05T14:19:52.162] start check operating system
INFO[2022-08-05T14:19:52.633] prechecking dependent tools...
INFO[2022-08-05T14:19:52.932] platform: centos_7_64bit host=192.168.122.68
.
.
.
INFO[2022-08-05T14:20:25.432] reload 192.168.122.157 database by gs_ctl host=192.168.122.157
INFO[2022-08-05T14:20:25.504] set 192.168.122.68 postgresql.conf host=192.168.122.68
INFO[2022-08-05T14:20:25.582] generate static config to /opt/mogdb/app/bin/cluster_static_config host=192.168.122.68
INFO[2022-08-05T14:20:25.612] change /opt/mogdb/app/bin/cluster_static_config owner to omm host=192.168.122.68
INFO[2022-08-05T14:20:25.625] set 192.168.122.68 hba config host=192.168.122.68
INFO[2022-08-05T14:20:25.709] build 192.168.122.68 database by gs_ctl host=192.168.122.68
Scale success.
[root@node1 .ptk]# ptk cluster -n M30 status
[ Cluster State ]
database_version : MogDB-MogDB
cluster_name : M30
cluster_state : Normal
current_az : AZ_ALL
[ Datanode State ]
id | ip | port | user | instance | db_role | state
-------+-----------------+-------+------+----------+---------+---------
6001 | 192.168.122.221 | 25000 | omm | dn_6001 | primary | Normal
6002 | 192.168.122.157 | 25000 | omm | dn_6002 | standby | Normal
6003 | 192.168.122.68 | 25000 | omm | dn_6003 | standby | Normal
Scale-in for a PTK-installed cluster
[root@node1 .ptk]# ptk cluster scale-in -h
Scale in a MogDB cluster
Usage:
ptk cluster scale-in [flags]
Examples:
ptk cluster -n CLUSTER_NAME scale-in -H 10.0.0.1 [--stop-db] [--clear-user] [--clear-dir] [--clear-env] [-t 120]
Flags:
--clear-dir Clear relevant dir
--clear-env Clear env value
--clear-user Clear user in delete hosts
-h, --help help for scale-in
-H, --host stringArray Scale delete hosts
-n, --name string Cluster name
--stop-db Stop the database
-t, --timeout duration Opration timeout (default 5m0s)
Global Flags:
-f, --file string Specify a configuration file of cluster
--log-file string Specify a log output file
--log-format string Specify the log message format. Options: [text, json] (default "text")
--log-level string Specify the log level. Options: [debug, info, warning, error, panic] (default "info")
-v, --version Print version of ptk
Cluster scale-in
[root@node1 .ptk]# ptk cluster -n M30 scale-in -H 192.168.122.68 --stop-db
scale [stage=preCheck]
scale [stage=exec]
modify the instance[192.168.122.68]:/data/mogdb/postgres.conf replconninfo value
INFO[2022-08-05T14:41:46.280] reload 192.168.122.68 database by gs_ctl host=192.168.122.68
modify the instance[192.168.122.157]:/data/mogdb/postgres.conf replconninfo value
INFO[2022-08-05T14:41:46.385] reload 192.168.122.157 database by gs_ctl host=192.168.122.157
modify the instance[192.168.122.221]:/data/mogdb/postgres.conf replconninfo value
INFO[2022-08-05T14:41:46.458] reload 192.168.122.221 database by gs_ctl host=192.168.122.221
scale [stage=postExec]
Would you want delete directory(AppDir,DataDir,ToolDir,LogDir)?[Y|Yes](default=N) Y
Would you want delete the user?[Y|Yes](default=N) Y
Would you want clear the env?[Y|Yes](default=N) Y
INFO[2022-08-05T14:42:06.251] stop 192.168.122.68 database by gs_ctl host=192.168.122.68
INFO[2022-08-05T14:42:06.321] remove files /opt/mogdb/app,/data/mogdb,/opt/mogdb/tool,/opt/mogdb/log host=192.168.122.68
INFO[2022-08-05T14:42:06.587] remove user profiles host=192.168.122.68
INFO[2022-08-05T14:42:06.607] delete os user omm host=192.168.122.68
Scale success.
[root@node1 .ptk]# ptk cluster -n M30 status
[ Cluster State ]
database_version : MogDB-MogDB
cluster_name : M30
cluster_state : Normal
current_az : AZ_ALL
[ Datanode State ]
id | ip | port | user | instance | db_role | state
-------+-----------------+-------+------+----------+---------+---------
6001 | 192.168.122.221 | 25000 | omm | dn_6001 | primary | Normal
6002 | 192.168.122.157 | 25000 | omm | dn_6002 | standby | Normal
[root@node1 .ptk]#
[Copyright notice] This article is original content by a Modb (墨天輪) user. Reproduction must credit the source (Modb), the article link, and the author; otherwise the author and Modb reserve the right to pursue liability. To report suspected plagiarism or infringement on Modb, email contact@modb.pro with supporting evidence; verified content will be removed promptly.