Preface
I've recently been running TiDB recovery drills and needed to deploy a minimal but complete TiDB topology on a single Linux server. This post documents the installation process.
Environment Preparation
Before deploying the TiDB cluster, prepare a deployment host and make sure it meets the software requirements:
- CentOS 7.3 or later is recommended
- The environment has internet access, for downloading TiDB and the related installation packages
Note: Starting from v8.5.1, TiDB is built against glibc 2.17 again, restoring compatibility with CentOS Linux 7.
Environment Information
The minimum-scale TiDB cluster topology consists of the following instances:
| Component | Count | IP | Ports |
|---|---|---|---|
| PD | 1 | 192.168.31.79 | 2379/2380 |
| TiDB | 1 | 192.168.31.79 | 4000/10080 |
| TiKV | 3 | 192.168.31.79 | 20160-20162/20180-20182 |
| TiFlash | 1 | 192.168.31.79 | 9000/3930/20170/20292/8234/8123 |
| Prometheus | 1 | 192.168.31.79 | 9090/12020 |
| Grafana | 1 | 192.168.31.79 | 3000 |
Install Dependencies
Dependencies required to compile and build TiDB (a quick check of what the host currently provides follows this list):
- Golang 1.23 or later
- Rust nightly-2023-12-28 or later
- LLVM 17.0 or later
- sshpass 1.06 or later
- GCC 7.x (not satisfied on CentOS 7)
- glibc 2.28-151.el8 (not satisfied on CentOS 7)
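To see why the last two items fail on this host, a minimal sketch is to check the versions currently installed; a stock CentOS 7 box typically ships GCC 4.8.5 and glibc 2.17:
[root@test ~]# gcc --version | head -1   ## CentOS 7 default: gcc 4.8.5
[root@test ~]# ldd --version | head -1   ## CentOS 7 default: glibc 2.17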
Download the required packages:
- Rust: https://forge.rust-lang.org/infra/other-installation-methods.html
- Golang: https://go.dev/dl/
- sshpass: https://sourceforge.net/projects/sshpass/files/latest/download
Install Golang:
[root@test soft]# tar -C /usr/local -xf go1.25.0.linux-amd64.tar.gz
[root@test ~]# cat<<-\EOF>>/root/.bash_profile
export PATH=$PATH:/usr/local/go/bin
EOF
[root@test ~]# source /root/.bash_profile
[root@test ~]# go version
go version go1.25.0 linux/amd64
Install Rust:
[root@test soft]# tar -xf rust-1.89.0-x86_64-unknown-linux-gnu.tar.gz
[root@test soft]# cd rust-1.89.0-x86_64-unknown-linux-gnu/
[root@test rust-1.89.0-x86_64-unknown-linux-gnu]# ./install.sh
[root@test ~]# rustc --version
rustc 1.89.0 (29483883e 2025-08-04)
Install sshpass:
[root@test soft]# tar -xf sshpass-1.10.tar.gz
[root@test soft]# cd sshpass-1.10/
[root@test sshpass-1.10]# ./configure && make && make install
[root@test ~]# sshpass -V
sshpass 1.10
Disable the Firewall
[root@test ~]# systemctl stop firewalld.service
[root@test ~]# systemctl disable firewalld.service
[root@test ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
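If you would rather keep firewalld running, an alternative sketch is to open only the ports used by this topology instead of disabling the service (the port list below is taken from the table above plus the exporter ports; adjust it to your own layout):
[root@test ~]# for p in 2379 2380 4000 10080 20160-20162 20180-20182 9000 3930 20170 20292 8234 8123 9090 12020 3000 9100 9115; do firewall-cmd --permanent --add-port=${p}/tcp; done
[root@test ~]# firewall-cmd --reload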
Check and Disable Swap
[root@test ~]# echo "vm.swappiness = 0">> /etc/sysctl.conf
[root@test ~]# swapoff -a
[root@test ~]# sysctl -p
vm.swappiness = 0
Remember to edit /etc/fstab and comment out the swap partition:
#/dev/mapper/centos-swap swap swap defaults 0 0
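If you prefer to script the change, a minimal sketch (back up /etc/fstab first; the pattern assumes standard whitespace-separated fstab fields):
[root@test ~]# cp /etc/fstab /etc/fstab.bak
[root@test ~]# sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab
[root@test ~]# free -h   ## the Swap line should read 0B after swapoff -a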
Check and Configure OS Optimization Parameters
[root@test ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@test ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@test ~]# cat<<EOF>>/etc/sysctl.conf
fs.file-max = 1000000
net.core.somaxconn = 32768
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 0
vm.overcommit_memory = 1
EOF
[root@test ~]# sysctl -p
[root@test ~]# cat<<EOF>>/etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF
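Note that the two transparent_hugepage writes above only last until the next reboot. One way to persist them, assuming a systemd host (the unit name disable-thp.service is my own choice), is a oneshot service:
[root@test ~]# cat<<EOF>/etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target
EOF
[root@test ~]# systemctl daemon-reload && systemctl enable disable-thp.service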
Adjust MaxSessions
Because this simulates a multi-machine deployment on one host, increase the sshd session limit as root:
[root@test ~]# vim /etc/ssh/sshd_config
## Set MaxSessions to 20
[root@test ~]# systemctl restart sshd.service
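To confirm the new limit took effect (sshd -T prints the effective runtime configuration):
[root@test ~]# sshd -T | grep -i maxsessions   ## expect: maxsessions 20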
Create the TiDB User
[root@test ~]# useradd tidb
[root@test ~]# echo "Tidb@123" |passwd tidb --stdin
Changing password for user tidb.
passwd: all authentication tokens updated successfully.
[root@test ~]# cat<<-EOF>>/etc/sudoers
tidb ALL=(ALL) NOPASSWD: ALL
EOF
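A quick sanity check that passwordless sudo works for the tidb user (sudo -n errors out instead of prompting if a password would still be required):
[root@test ~]# su - tidb -c 'sudo -n whoami'   ## expect: root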
Performing the Deployment
This is an offline environment, so instead of the official online repository we deploy from a local mirror. For setting up the local mirror, see: TiDB Offline Deployment of TiUP Components.
tiup is already deployed:
[root@test ~]# tiup mirror show
/root/tidb-community-server-v8.5.3-linux-amd64
[root@test ~]# tiup --version
1.16.2 tiup
Go Version: go1.21.13
Git Ref: v1.16.2
GitHash: 678c52de0c0ef30634b8ba7302a8376caa95d50d
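Before deploying, it is worth confirming that the offline mirror actually contains the target version (a sketch; the exact listing depends on the mirror package you built):
[root@test ~]# tiup list tidb | grep v8.5.3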
Create and start the cluster. Note the per-instance server.labels together with PD's replication.location-labels: they make PD treat the three TiKV processes as separate logical hosts, so Region replicas can still be spread across all of them:
[root@test ~]# cat<<-\EOF>topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
user: "tidb"
ssh_port: 11122
deploy_dir: "/data/tidb-deploy"
data_dir: "/data/tidb-data"
# # Monitored variables are applied to all the machines.
monitored:
node_exporter_port: 9100
blackbox_exporter_port: 9115
server_configs:
tidb:
instance.tidb_slow_log_threshold: 300
tikv:
readpool.storage.use-unified-pool: false
readpool.coprocessor.use-unified-pool: true
pd:
replication.enable-placement-rules: true
replication.location-labels: ["host"]
tiflash:
logger.level: "info"
pd_servers:
- host: 192.168.31.79
tidb_servers:
- host: 192.168.31.79
tikv_servers:
- host: 192.168.31.79
port: 20160
status_port: 20180
config:
server.labels: { host: "logic-host-1" }
- host: 192.168.31.79
port: 20161
status_port: 20181
config:
server.labels: { host: "logic-host-2" }
- host: 192.168.31.79
port: 20162
status_port: 20182
config:
server.labels: { host: "logic-host-3" }
tiflash_servers:
- host: 192.168.31.79
monitoring_servers:
- host: 192.168.31.79
grafana_servers:
- host: 192.168.31.79
EOF
Run the pre-deployment check:
[root@test ~]# tiup cluster check topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
- Detecting node 192.168.31.79 Arch info ... Done
+ Detect CPU OS Name
- Detecting node 192.168.31.79 OS info ... Done
+ Download necessary tools
- Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
- Getting system info of 192.168.31.79:11122 ... Done
+ Check time zone
- Checking node 192.168.31.79 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
- Checking node 192.168.31.79 ... Done
- Checking node 192.168.31.79 ... Done
- Checking node 192.168.31.79 ... Done
- Checking node 192.168.31.79 ... Done
- Checking node 192.168.31.79 ... Done
- Checking node 192.168.31.79 ... Done
- Checking node 192.168.31.79 ... Done
- Checking node 192.168.31.79 ... Done
- Checking node 192.168.31.79 ... Done
+ Cleanup check files
- Cleanup check files on 192.168.31.79:11122 ... Done
Node Check Result Message
---- ----- ------ -------
192.168.31.79 os-version Fail CentOS Linux 7 (Core) 7.9.2009 not supported, use version 9 or higher
192.168.31.79 cpu-cores Pass number of CPU cores / threads: 4
192.168.31.79 ntp Warn The NTPd daemon may be not start
192.168.31.79 disk Warn mount point /data does not have 'noatime' option set
192.168.31.79 selinux Pass SELinux is disabled
192.168.31.79 thp Pass THP is disabled
192.168.31.79 command Pass numactl: policy: default
192.168.31.79 cpu-governor Warn Unable to determine current CPU frequency governor policy
192.168.31.79 memory Pass memory size is 8192MB
192.168.31.79 network Pass network speed of ens192 is 10000MB
192.168.31.79 disk Fail multiple components tikv:/data/tidb-data/tikv-20160,tikv:/data/tidb-data/tikv-20161,tikv:/data/tidb-data/tikv-20162,tiflash:/data/tidb-data/tiflash-9000 are using the same partition 192.168.31.79:/data as data dir
192.168.31.79 disk Fail mount point /data does not have 'nodelalloc' option set
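The os-version Fail can be ignored here, since v8.5.1 restored CentOS 7 compatibility (see the note at the top), and the disk Fails are expected when all instances share one partition on a single test host. For the remaining items, tiup can also attempt automatic repairs with the --apply flag:
[root@test ~]# tiup cluster check topo.yaml --apply --user root -p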
Deploy the cluster:
[root@test ~]# tiup cluster deploy lucifer v8.5.3 topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
- Detecting node 192.168.31.79 Arch info ... Done
+ Detect CPU OS Name
- Detecting node 192.168.31.79 OS info ... Done
Please confirm your topology:
Cluster type: tidb
Cluster name: lucifer
Cluster version: v8.5.3
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 192.168.31.79 2379/2380 linux/x86_64 /data/tidb-deploy/pd-2379,/data/tidb-data/pd-2379
tikv 192.168.31.79 20160/20180 linux/x86_64 /data/tidb-deploy/tikv-20160,/data/tidb-data/tikv-20160
tikv 192.168.31.79 20161/20181 linux/x86_64 /data/tidb-deploy/tikv-20161,/data/tidb-data/tikv-20161
tikv 192.168.31.79 20162/20182 linux/x86_64 /data/tidb-deploy/tikv-20162,/data/tidb-data/tikv-20162
tidb 192.168.31.79 4000/10080 linux/x86_64 /data/tidb-deploy/tidb-4000
tiflash 192.168.31.79 9000/3930/20170/20292/8234/8123 linux/x86_64 /data/tidb-deploy/tiflash-9000,/data/tidb-data/tiflash-9000
prometheus 192.168.31.79 9090/12020 linux/x86_64 /data/tidb-deploy/prometheus-9090,/data/tidb-data/prometheus-9090
grafana 192.168.31.79 3000 linux/x86_64 /data/tidb-deploy/grafana-3000
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
- Download pd:v8.5.3 (linux/amd64) ... Done
- Download tikv:v8.5.3 (linux/amd64) ... Done
- Download tidb:v8.5.3 (linux/amd64) ... Done
- Download tiflash:v8.5.3 (linux/amd64) ... Done
- Download prometheus:v8.5.3 (linux/amd64) ... Done
- Download grafana:v8.5.3 (linux/amd64) ... Done
- Download node_exporter: (linux/amd64) ... Done
- Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
- Prepare 192.168.31.79:11122 ... Done
+ Deploy TiDB instance
- Copy pd -> 192.168.31.79 ... Done
- Copy tikv -> 192.168.31.79 ... Done
- Copy tikv -> 192.168.31.79 ... Done
- Copy tikv -> 192.168.31.79 ... Done
- Copy tidb -> 192.168.31.79 ... Done
- Copy tiflash -> 192.168.31.79 ... Done
- Copy prometheus -> 192.168.31.79 ... Done
- Copy grafana -> 192.168.31.79 ... Done
- Deploy node_exporter -> 192.168.31.79 ... Done
- Deploy blackbox_exporter -> 192.168.31.79 ... Done
+ Copy certificate to remote host
+ Init instance configs
- Generate config pd -> 192.168.31.79:2379 ... Done
- Generate config tikv -> 192.168.31.79:20160 ... Done
- Generate config tikv -> 192.168.31.79:20161 ... Done
- Generate config tikv -> 192.168.31.79:20162 ... Done
- Generate config tidb -> 192.168.31.79:4000 ... Done
- Generate config tiflash -> 192.168.31.79:9000 ... Done
- Generate config prometheus -> 192.168.31.79:9090 ... Done
- Generate config grafana -> 192.168.31.79:3000 ... Done
+ Init monitor configs
- Generate config node_exporter -> 192.168.31.79 ... Done
- Generate config blackbox_exporter -> 192.168.31.79 ... Done
Enabling component pd
Enabling instance 192.168.31.79:2379
Enable instance 192.168.31.79:2379 success
Enabling component tikv
Enabling instance 192.168.31.79:20162
Enabling instance 192.168.31.79:20160
Enabling instance 192.168.31.79:20161
Enable instance 192.168.31.79:20162 success
Enable instance 192.168.31.79:20161 success
Enable instance 192.168.31.79:20160 success
Enabling component tidb
Enabling instance 192.168.31.79:4000
Enable instance 192.168.31.79:4000 success
Enabling component tiflash
Enabling instance 192.168.31.79:9000
Enable instance 192.168.31.79:9000 success
Enabling component prometheus
Enabling instance 192.168.31.79:9090
Enable instance 192.168.31.79:9090 success
Enabling component grafana
Enabling instance 192.168.31.79:3000
Enable instance 192.168.31.79:3000 success
Enabling component node_exporter
Enabling instance 192.168.31.79
Enable 192.168.31.79 success
Enabling component blackbox_exporter
Enabling instance 192.168.31.79
Enable 192.168.31.79 success
Cluster `lucifer` deployed successfully, you can start it with command: `tiup cluster start lucifer --init`
Start the cluster:
[root@test ~]# tiup cluster start lucifer --init
Starting cluster lucifer...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [ Serial ] - StartCluster
Starting component pd
Starting instance 192.168.31.79:2379
Start instance 192.168.31.79:2379 success
Starting component tikv
Starting instance 192.168.31.79:20162
Starting instance 192.168.31.79:20160
Starting instance 192.168.31.79:20161
Start instance 192.168.31.79:20162 success
Start instance 192.168.31.79:20161 success
Start instance 192.168.31.79:20160 success
Starting component tidb
Starting instance 192.168.31.79:4000
Start instance 192.168.31.79:4000 success
Starting component tiflash
Starting instance 192.168.31.79:9000
Start instance 192.168.31.79:9000 success
Starting component prometheus
Starting instance 192.168.31.79:9090
Start instance 192.168.31.79:9090 success
Starting component grafana
Starting instance 192.168.31.79:3000
Start instance 192.168.31.79:3000 success
Starting component node_exporter
Starting instance 192.168.31.79
Start 192.168.31.79 success
Starting component blackbox_exporter
Starting instance 192.168.31.79
Start 192.168.31.79 success
+ [ Serial ] - UpdateTopology: cluster=lucifer
Started cluster `lucifer` successfully
The root password of TiDB database has been changed.
The new password is: 'm+92G0Q3eNR4^6cq*@'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
List the clusters:
[root@test ~]# tiup cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
lucifer tidb v8.5.3 /root/.tiup/storage/cluster/clusters/lucifer /root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa
Check the cluster status:
[root@test ~]# tiup cluster display lucifer
Cluster type: tidb
Cluster name: lucifer
Cluster version: v8.5.3
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.31.79:2379/dashboard
Dashboard URLs: http://192.168.31.79:2379/dashboard
Grafana URL: http://192.168.31.79:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.31.79:3000 grafana 192.168.31.79 3000 linux/x86_64 Up - /data/tidb-deploy/grafana-3000
192.168.31.79:2379 pd 192.168.31.79 2379/2380 linux/x86_64 Up|L|UI /data/tidb-data/pd-2379 /data/tidb-deploy/pd-2379
192.168.31.79:9090 prometheus 192.168.31.79 9090/12020 linux/x86_64 Up /data/tidb-data/prometheus-9090 /data/tidb-deploy/prometheus-9090
192.168.31.79:4000 tidb 192.168.31.79 4000/10080 linux/x86_64 Up - /data/tidb-deploy/tidb-4000
192.168.31.79:9000 tiflash 192.168.31.79 9000/3930/20170/20292/8234/8123 linux/x86_64 Up /data/tidb-data/tiflash-9000 /data/tidb-deploy/tiflash-9000
192.168.31.79:20160 tikv 192.168.31.79 20160/20180 linux/x86_64 Up /data/tidb-data/tikv-20160 /data/tidb-deploy/tikv-20160
192.168.31.79:20161 tikv 192.168.31.79 20161/20181 linux/x86_64 Up /data/tidb-data/tikv-20161 /data/tidb-deploy/tikv-20161
192.168.31.79:20162 tikv 192.168.31.79 20162/20182 linux/x86_64 Up /data/tidb-data/tikv-20162 /data/tidb-deploy/tikv-20162
Total nodes: 8
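To verify that PD sees the three logical host labels defined in topo.yaml, you can query it through the bundled pd-ctl (a sketch; the -u endpoint is this deployment's PD, and the grep just narrows the JSON output to the label entries):
[root@test ~]# tiup ctl:v8.5.3 pd -u http://192.168.31.79:2379 store | grep -A 3 '"labels"'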
Install the MySQL Client
TiDB is compatible with the MySQL protocol, so a MySQL client is needed to connect. CentOS 7 ships with MariaDB packages preinstalled, which must be removed first:
[root@test ~]# rpm -e --nodeps $(rpm -qa | grep mariadb)
Download the packages on a machine with internet access:
[root@lucifer ~]# wget https://repo.mysql.com/RPM-GPG-KEY-mysql-2023
[root@lucifer ~]# wget http://dev.mysql.com/get/mysql80-community-release-el7-10.noarch.rpm
Install the MySQL client:
[root@test ~]# yum -y install mysql80-community-release-el7-10.noarch.rpm
[root@test ~]# rpm --import RPM-GPG-KEY-mysql-2023
[root@test ~]# yum -y install mysql
Connect to the database:
## The initial root password is the one printed in the cluster init log: m+92G0Q3eNR4^6cq*@
[root@test ~]# mysql -h 192.168.31.79 -P 4000 -uroot -p
mysql> show databases;
Change the initial root password:
mysql> use mysql
mysql> alter user 'root'@'%' identified by 'tidb';
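Since the topology includes a TiFlash node, a quick smoke test is to add a TiFlash replica to a table and watch it become available (test.t is just an example table name):
mysql> create table test.t (id int primary key, v varchar(20));
mysql> alter table test.t set tiflash replica 1;
mysql> select table_name, replica_count, available from information_schema.tiflash_replica where table_schema = 'test';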
Cluster monitoring:
- Dashboard: http://192.168.31.79:2379/dashboard (log in as root/tidb)
- Grafana: http://192.168.31.79:3000 (default credentials: admin/admin)
Closing Notes
With that, the single-host TiDB cluster is deployed and ready for development, testing, and study. For production, follow the officially recommended multi-machine deployment topology.