Installing Ceph Luminous

Based on CentOS 7.5 (1804)
ceph-deploy version: 2.0.1
Ceph version: 12.2.13 luminous (stable)

Set up the yum repositories

Configure the EPEL and Ceph yum repositories on all control and compute nodes (the base yum repository has already been updated); the ceph205 node is used as the example throughout.

EPEL: http://mirrors.aliyun.com/repo/

[root@ceph205 ~]# wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo

Ceph: http://mirrors.aliyun.com/ceph/

# Edit the ceph.repo file and point it at the luminous release
[root@ceph205 ~]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-source]
name=ceph-source
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
[root@ceph205 ~]# yum clean all
[root@ceph205 ~]# yum makecache
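To confirm the new repository is actually serving the expected luminous builds, you can optionally list the available packages; the exact version strings depend on the mirror at the time:

[root@ceph205 ~]# yum list ceph ceph-deploy --showduplicates | tail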

Set the hostname

Set the appropriate hostname on each node (ceph205 shown as the example), and make sure all nodes are listed in /etc/hosts:
[root@localhost ~]# hostnamectl set-hostname ceph205

[root@ceph205 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.201 controller
192.168.1.202 compute202
192.168.1.205 ceph205

Set up NTP

Keep the clocks on all nodes synchronized; here ceph205 is configured as the time server for the Ceph nodes. Install chrony on every node:
[root@ceph205 ~]# yum -y install chrony

On ceph205, edit /etc/chrony.conf so that it syncs from the controller node and allows clients in the 192.168.1.0/24 subnet:

server controller iburst
allow 192.168.1.0/24

Restart the service on all nodes and check the synchronization status:

systemctl restart chronyd.service
chronyc sources -v

Disable the firewall and SELinux

[root@ceph205 ~]# systemctl stop firewalld
[root@ceph205 ~]# systemctl disable firewalld
[root@ceph205 ~]# setenforce 0
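Note that setenforce 0 only disables SELinux until the next reboot. To make the change persistent you would typically also edit /etc/selinux/config, for example (not part of the original steps):

[root@ceph205 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config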

Create the deployment user

Run on all nodes:

[root@ceph205 ~]# useradd -d /home/ceph -m cephde
[root@ceph205 ~]# passwd cephde
New password: 123456
Retype new password: 123456
# Edit the sudoers file with visudo so that the cephde user is on the sudo list;
# below the line "root ALL=(ALL) ALL" (around line 92) add: "cephde ALL=(ALL) ALL"
[root@ceph205 ~]# visudo
cephde ALL=(ALL) ALL

Grant sudo privileges

Give the cephde user passwordless sudo (root) privileges;
switch to the cephde user for the following:

[root@ceph205 ~]# su - cephde
[cephde@ceph205 ~]$ echo "cephde ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephde
[sudo] password for cephde:123456
[cephde@ceph205 ~]$ sudo chmod 0440 /etc/sudoers.d/cephde

Set up passwordless SSH

ceph-deploy cannot prompt for passwords, so generate an SSH key pair on the admin/management node and push the public key to every Ceph node;
generate the key as the cephde user, not with sudo or as root;
the key pair is created under ~/.ssh in the user's home directory by default;
press Enter at the "Enter passphrase" prompt to leave the passphrase empty.

[root@ceph205 ~]# su - cephde
[cephde@ceph205 ~]$ ssh-keygen -t rsa
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:

Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3KermFNE4L0YONDp2Duj0XsPxtAVLcLsTWXYbdtT+HI cephde@ceph205
The key's randomart image is:
+---[RSA 2048]----+
| .. +...=o. . |
| .oo+.=oo o . .|
| +o.o=o. . o o |
| . oo.=oo . + E|
| ...o.S . . + |
| . =o . o |
| o ++. . |
| . ..o+ . |
| .ooo.. |
+----[SHA256]-----+

Distribute the public key

The target control and storage nodes must already have the cephde user;
the first connection to another node asks for host-key confirmation;
the first key distribution asks for the remote user's password;
after a successful push, a known_hosts file is created under ~/.ssh/ recording the host keys;
passwordless login from ceph205 to ceph206 is shown as the example:

[cephde@ceph205 ~]$ ssh-copy-id cephde@ceph206
Are you sure you want to continue connecting (yes/no)? yes
cephde@ceph206's password:
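Optionally (as recommended by the ceph-deploy documentation) you can add a ~/.ssh/config on the admin node so ceph-deploy connects as cephde without passing --username every time; a minimal sketch for the hostnames used here:

[cephde@ceph205 ~]$ cat ~/.ssh/config
Host ceph205
    User cephde
Host ceph206
    User cephde
[cephde@ceph205 ~]$ chmod 600 ~/.ssh/config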

Install ceph-deploy

Install the ceph-deploy tool on the planned control/management node; in this setup only ceph205 is used as the management node.

[root@ceph205 ~]# yum install ceph-deploy -y

Create the Ceph cluster

Operate as the cephde user; do not run these steps with sudo;
create a directory on the management node to hold the cluster configuration files:

[root@ceph205 ~]# su - cephde
[cephde@ceph205 ~]$ mkdir cephcluster

All subsequent ceph-deploy operations are run from this directory;
bring the planned MON (monitor) node into the cluster, i.e. create the cluster:

[cephde@ceph205 ~]$ cd ~/cephcluster/
[cephde@ceph205 cephcluster]$ ceph-deploy new ceph205
This failed with the following error:
[cephde@ceph205 cephcluster]$ ceph-deploy new ceph205
Traceback (most recent call last):
File "/bin/ceph-deploy", line 18, in <module>
from ceph_deploy.cli import main
File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
import pkg_resources
ImportError: No module named pkg_resources

Cause: the python-setuptools package is missing.
[cephde@ceph205 ~]$ sudo yum install python-setuptools
Install it on all nodes, otherwise the same error will appear elsewhere. After installing it, re-running ceph-deploy new ceph205 succeeds:

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy new ceph205
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f9eeac24d70>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9eea39e3b0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph205']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph205][DEBUG ] connection detected need for sudo
[ceph205][DEBUG ] connected to host: ceph205
[ceph205][DEBUG ] detect platform information from remote host
[ceph205][DEBUG ] detect machine type
[ceph205][DEBUG ] find the location of an executable
[ceph205][INFO ] Running command: sudo /usr/sbin/ip link show
[ceph205][INFO ] Running command: sudo /usr/sbin/ip addr show
[ceph205][DEBUG ] IP addresses found: [u'192.168.1.205']
[ceph_deploy.new][DEBUG ] Resolving host ceph205
[ceph_deploy.new][DEBUG ] Monitor ceph205 at 192.168.1.205
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph205']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.205']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.ke

Adjust the cluster configuration file (optional)

Creating the cluster generates three files in the cluster directory; ceph.conf is the configuration file;
the defaults can be left as they are, but a few settings are adjusted here so the services start as planned;
the commented entries below (public/cluster network, osd pool default size, mon_allow_pool_delete) were added on top of the generated defaults:

[cephde@ceph205 cephcluster]$ cat ceph.conf 
[global]
fsid = 1f277463-7f9b-46cd-8ed7-c44e1493b131
mon_initial_members = ceph205
mon_host = 192.168.1.205
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# public network: front-side network for the mons and client access; make sure it is in the same subnet as mon_host, otherwise initialization may fail
# cluster network: back-side network for OSD heartbeats, replication and recovery traffic
public network = 192.168.1.0/24
cluster network = 192.168.1.0/24

# the default replica count is 3; reduced to 2 for this lab environment
osd pool default size = 2

# by default the protection mechanism does not allow deleting pools; enable it here as needed
mon_allow_pool_delete = true

Install Ceph

Install Ceph on all control/management and storage nodes;
in theory this can be driven from the cluster directory on the admin node with ceph-deploy, e.g. ceph-deploy install ceph205 ceph206 …;
however that tends to fail because of slow downloads, so instead install ceph and ceph-radosgw directly on each node; ceph205 is shown as the example:

[root@ceph205 ~]# yum install -y ceph ceph-radosgw
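If you do want ceph-deploy to drive the installation against the aliyun mirror configured earlier, it accepts repository overrides; a sketch only (not used in this walkthrough, and ceph206 is merely illustrative):

ceph-deploy install --release luminous \
    --repo-url http://mirrors.aliyun.com/ceph/rpm-luminous/el7/ \
    --gpg-url http://mirrors.aliyun.com/ceph/keys/release.asc \
    ceph205 ceph206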

Initialize ceph-mon

Initialize the monitor from the management node:

[cephde@ceph205 cephcluster]$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5de121ed40>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7f5de1275398>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph205
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph205 ...
[ceph205][DEBUG ] connection detected need for sudo
[ceph205][DEBUG ] connected to host: ceph205
[ceph205][DEBUG ] detect platform information from remote host
[ceph205][DEBUG ] detect machine type
[ceph205][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[ceph205][DEBUG ] determining if provided host has same hostname in remote
[ceph205][DEBUG ] get remote short hostname
[ceph205][DEBUG ] deploying mon to ceph205
[ceph205][DEBUG ] get remote short hostname
[ceph205][DEBUG ] remote hostname: ceph205
[ceph205][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph205][DEBUG ] create the mon path if it does not exist
[ceph205][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph205/done
[ceph205][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph205/done
[ceph205][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph205.mon.keyring
[ceph205][DEBUG ] create the monitor keyring file
[ceph205][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph205 --keyring /var/lib/ceph/tmp/ceph-ceph205.mon.keyring --setuser 167 --setgroup 167
[ceph205][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph205.mon.keyring
[ceph205][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph205][DEBUG ] create the init path if it does not exist
[ceph205][INFO ] Running command: sudo systemctl enable ceph.target
[ceph205][INFO ] Running command: sudo systemctl enable ceph-mon@ceph205
[ceph205][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph205.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph205][INFO ] Running command: sudo systemctl start ceph-mon@ceph205
[ceph205][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph205.asok mon_status
[ceph205][DEBUG ] ********************************************************************************
[ceph205][DEBUG ] status for monitor: mon.ceph205
[ceph205][DEBUG ] {
[ceph205][DEBUG ] "election_epoch": 3,
[ceph205][DEBUG ] "extra_probe_peers": [],
[ceph205][DEBUG ] "feature_map": {
[ceph205][DEBUG ] "mon": {
[ceph205][DEBUG ] "group": {
[ceph205][DEBUG ] "features": "0x3ffddff8eeacfffb",
[ceph205][DEBUG ] "num": 1,
[ceph205][DEBUG ] "release": "luminous"
[ceph205][DEBUG ] }
[ceph205][DEBUG ] }
[ceph205][DEBUG ] },
[ceph205][DEBUG ] "features": {
[ceph205][DEBUG ] "quorum_con": "4611087853746454523",
[ceph205][DEBUG ] "quorum_mon": [
[ceph205][DEBUG ] "kraken",
[ceph205][DEBUG ] "luminous"
[ceph205][DEBUG ] ],
[ceph205][DEBUG ] "required_con": "153140804152475648",
[ceph205][DEBUG ] "required_mon": [
[ceph205][DEBUG ] "kraken",
[ceph205][DEBUG ] "luminous"
[ceph205][DEBUG ] ]
[ceph205][DEBUG ] },
[ceph205][DEBUG ] "monmap": {
[ceph205][DEBUG ] "created": "2022-03-13 21:57:15.211494",
[ceph205][DEBUG ] "epoch": 1,
[ceph205][DEBUG ] "features": {
[ceph205][DEBUG ] "optional": [],
[ceph205][DEBUG ] "persistent": [
[ceph205][DEBUG ] "kraken",
[ceph205][DEBUG ] "luminous"
[ceph205][DEBUG ] ]
[ceph205][DEBUG ] },
[ceph205][DEBUG ] "fsid": "b6904020-ab52-497b-9078-c130310853bb",
[ceph205][DEBUG ] "modified": "2022-03-13 21:57:15.211494",
[ceph205][DEBUG ] "mons": [
[ceph205][DEBUG ] {
[ceph205][DEBUG ] "addr": "192.168.1.205:6789/0",
[ceph205][DEBUG ] "name": "ceph205",
[ceph205][DEBUG ] "public_addr": "192.168.1.205:6789/0",
[ceph205][DEBUG ] "rank": 0
[ceph205][DEBUG ] }
[ceph205][DEBUG ] ]
[ceph205][DEBUG ] },
[ceph205][DEBUG ] "name": "ceph205",
[ceph205][DEBUG ] "outside_quorum": [],
[ceph205][DEBUG ] "quorum": [
[ceph205][DEBUG ] 0
[ceph205][DEBUG ] ],
[ceph205][DEBUG ] "rank": 0,
[ceph205][DEBUG ] "state": "leader",
[ceph205][DEBUG ] "sync_provider": []
[ceph205][DEBUG ] }
[ceph205][DEBUG ] ********************************************************************************
[ceph205][INFO ] monitor: mon.ceph205 is running
[ceph205][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph205.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph205
[ceph205][DEBUG ] connection detected need for sudo
[ceph205][DEBUG ] connected to host: ceph205
[ceph205][DEBUG ] detect platform information from remote host
[ceph205][DEBUG ] detect machine type
[ceph205][DEBUG ] find the location of an executable
[ceph205][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph205.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph205 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpJsXF9o
[ceph205][DEBUG ] connection detected need for sudo
[ceph205][DEBUG ] connected to host: ceph205
[ceph205][DEBUG ] detect platform information from remote host
[ceph205][DEBUG ] detect machine type
[ceph205][DEBUG ] get remote short hostname
[ceph205][DEBUG ] fetch remote file
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph205.asok mon_status
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get client.admin
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get-or-create client.admin osd allow * mds allow * mon allow * mgr allow *
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get client.bootstrap-mds
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get client.bootstrap-mgr
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get client.bootstrap-osd
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get client.bootstrap-rgw
[ceph205][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph205/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpJsXF9o

After initialization, several keyring files have been added to the cluster directory:

[cephde@ceph205 cephcluster]$ ls -l
total 44
-rw-------. 1 cephde cephde 71 Mar 13 21:57 ceph.bootstrap-mds.keyring
-rw-------. 1 cephde cephde 71 Mar 13 21:57 ceph.bootstrap-mgr.keyring
-rw-------. 1 cephde cephde 71 Mar 13 21:57 ceph.bootstrap-osd.keyring
-rw-------. 1 cephde cephde 71 Mar 13 21:57 ceph.bootstrap-rgw.keyring
-rw-------. 1 cephde cephde 63 Mar 13 21:57 ceph.client.admin.keyring
-rw-rw-r--. 1 cephde cephde 319 Mar 13 21:51 ceph.conf
-rw-rw-r--. 1 cephde cephde 15527 Mar 13 21:57 ceph-deploy-ceph.log
-rw-------. 1 cephde cephde 73 Mar 13 21:45 ceph.mon.keyring

Check the status

[cephde@ceph205 cephcluster]$ sudo systemctl status ceph-mon@ceph205
● ceph-mon@ceph205.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2022-03-13 21:57:15 EDT; 2min 1s ago
Main PID: 12389 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph205.service
└─12389 /usr/bin/ceph-mon -f --cluster ceph --id ceph205 --setuser ceph --setgroup ceph...

Mar 13 21:57:15 ceph205 systemd[1]: Started Ceph cluster monitor daemon.
Mar 13 21:57:15 ceph205 systemd[1]: Starting Ceph cluster monitor daemon...

Distribute ceph.conf and the keyrings

Push the Ceph configuration file and admin keyring to the other management and storage nodes;
note that the distributing node itself must also be included, since it has no keyring under /etc/ceph by default;
if a target node already has a configuration file (e.g. when pushing out a changed config), use:

ceph-deploy  --overwrite-conf admin xxx

The distributed configuration file and keyring land in /etc/ceph/ on each node:

[cephde@ceph205 cephcluster]$ ceph-deploy admin ceph205
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy admin ceph205
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f58e80d52d8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['ceph205']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f58e8be51b8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph205
[ceph205][DEBUG ] connection detected need for sudo
[ceph205][DEBUG ] connected to host: ceph205
[ceph205][DEBUG ] detect platform information from remote host
[ceph205][DEBUG ] detect machine type
[ceph205][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
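The admin keyring pushed to /etc/ceph/ is readable only by root, which is why the ceph commands later in this walkthrough are run with sudo. If you wanted to run them as cephde directly, you could relax the keyring permissions; an optional sketch, not part of the original steps:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring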

Install ceph-mgr

The luminous release requires a mgr daemon (which also provides the dashboard):

[cephde@ceph205 cephcluster]$  ceph-deploy mgr create ceph205:ceph205_mgr
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy mgr create ceph205:ceph205_mgr
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [('ceph205', 'ceph205_mgr')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f670a3505f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mgr at 0x7f670ac320c8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph205:ceph205_mgr
[ceph205][DEBUG ] connection detected need for sudo
[ceph205][DEBUG ] connected to host: ceph205
[ceph205][DEBUG ] detect platform information from remote host
[ceph205][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph205
[ceph205][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph205][WARNIN] mgr keyring does not exist yet, creating one
[ceph205][DEBUG ] create a keyring file
[ceph205][DEBUG ] create path recursively if it doesn't exist
[ceph205][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph205_mgr mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph205_mgr/keyring
[ceph205][INFO ] Running command: sudo systemctl enable ceph-mgr@ceph205_mgr
[ceph205][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph205_mgr.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph205][INFO ] Running command: sudo systemctl start ceph-mgr@ceph205_mgr
[ceph205][INFO ] Running command: sudo systemctl enable ceph.target

Check the status

[cephde@ceph205 cephcluster]$ systemctl status ceph-mgr@ceph205_mgr
● ceph-mgr@ceph205_mgr.service - Ceph cluster manager daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2022-03-13 22:01:43 EDT; 36s ago
Main PID: 13048 (ceph-mgr)
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@ceph205_mgr.service
└─13048 /usr/bin/ceph-mgr -f --cluster ceph --id ceph205_mgr --setuser ceph --setgroup ...

Enable the mgr dashboard

The modules mgr has enabled can be listed with (sudo) ceph mgr module ls;
the dashboard module is in the available list but is not enabled by default, so enable it manually:

[cephde@ceph205 cephcluster]$ sudo ceph mgr module enable dashboard

The dashboard is now enabled and by default listens on TCP port 7000 on all addresses;
to change the dashboard listen address and port:
set the listen address: (sudo) ceph config-key put mgr/dashboard/server_addr x.x.x.x
set the listen port: (sudo) ceph config-key put mgr/dashboard/server_port x
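For example, to pin the dashboard to this node's address while keeping port 7000, something like the following should work (the address is this lab's; the module typically has to be disabled and re-enabled, or the mgr restarted, to pick up the change):

sudo ceph config-key put mgr/dashboard/server_addr 192.168.1.205
sudo ceph config-key put mgr/dashboard/server_port 7000
sudo ceph mgr module disable dashboard
sudo ceph mgr module enable dashboard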

[cephde@ceph205 cephcluster]$ sudo netstat -tunlp | grep mgr

If netstat is not available, install net-tools first:

[cephde@ceph205 cephcluster]$ sudo netstat -tunlp | grep mgr
tcp 0 0 192.168.1.205:6800 0.0.0.0:* LISTEN 13048/ceph-mgr
tcp6 0 0 :::7000 :::* LISTEN 13048/ceph-mgr

Web login: http://192.168.1.205:7000/

(screenshot: Ceph mgr dashboard)

Check the cluster status

Check the monitor status:

[cephde@ceph205 cephcluster]$ sudo ceph mon stat
e1: 1 mons at {ceph205=192.168.1.205:6789/0}, election epoch 3, leader 0 ceph205, quorum 0 ceph205

Check the overall Ceph status with ceph health (detail), ceph -s, ceph -w, and so on;
the output below shows the mgr daemon as active (additional mgrs, if deployed, would appear as standbys):

[cephde@ceph205 cephcluster]$ sudo ceph -s
cluster:
id: b6904020-ab52-497b-9078-c130310853bb
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 2

services:
mon: 1 daemons, quorum ceph205
mgr: ceph205_mgr(active)
osd: 0 osds: 0 up, 0 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 0B used, 0B / 0B avail
pgs:

The authentication entries can be inspected on any node:

[cephde@ceph205 cephcluster]$ sudo ceph auth list
installed auth entries:

client.admin
key: AQD/oC5i6dI/CxAAIR4LRDD5lp4zRYi7tAsMAA==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQD/oC5iUfrENhAA+7kkfqcDNWsiIBrEmtRtnQ==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQAAoS5iwaskJxAA6/AHsOW5EcWhwRSCVmGCkw==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQABoS5iXEqJGRAAnHW4Li928UaUAreGLvdyWw==
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
key: AQACoS5i0NZ9ERAA71OfRtQSyv9ar1ROTFEYWA==
caps: [mon] allow profile bootstrap-rgw
mgr.ceph205_mgr
key: AQAGoi5iEjbiMhAACS7iZgSN35Rzu1O7ojvneg==
caps: [mds] allow *
caps: [mon] allow profile mgr
caps: [osd] allow *

Create OSDs (storage)

OSDs live on the storage nodes; check the disks on each storage node first, ceph205 shown as the example.
A new 100 GB disk (sdb) has been added:

[cephde@ceph205 cephcluster]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 100G 0 disk
sr0 11:0 1 1024M 0 rom

OSDs are created from the management node with ceph-deploy;
this example has a single OSD node. Each ceph-osd daemon listens on several local ports (four here) in the 6800~7300 range, as shown by the netstat output later on.
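If the disk had previously been used (old partition table, LVM metadata, filesystem), you would normally wipe it first with ceph-deploy before creating the OSD; a sketch for the disk used here:

ceph-deploy disk zap ceph205 /dev/sdb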

[cephde@ceph205 cephcluster]$ ceph-deploy osd create ceph205 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy osd create ceph205 --data /dev/sdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fec16c56320>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph205
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fec16c8a848>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/sdb
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph205][DEBUG ] connection detected need for sudo
[ceph205][DEBUG ] connected to host: ceph205
[ceph205][DEBUG ] detect platform information from remote host
[ceph205][DEBUG ] detect machine type
[ceph205][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph205
[ceph205][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph205][WARNIN] osd keyring does not exist yet, creating one
[ceph205][DEBUG ] create a keyring file
[ceph205][DEBUG ] find the location of an executable
[ceph205][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph205][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph205][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 07ec31da-a1ba-48ee-a833-37a6eba0615f
[ceph205][WARNIN] Running command: vgcreate --force --yes ceph-5605d7c5-8a23-4e96-ba0d-6633fcdfde09 /dev/sdb
[ceph205][WARNIN] stdout: Physical volume "/dev/sdb" successfully created.
[ceph205][WARNIN] stdout: Volume group "ceph-5605d7c5-8a23-4e96-ba0d-6633fcdfde09" successfully created
[ceph205][WARNIN] Running command: lvcreate --yes -l 100%FREE -n osd-block-07ec31da-a1ba-48ee-a833-37a6eba0615f ceph-5605d7c5-8a23-4e96-ba0d-6633fcdfde09
[ceph205][WARNIN] stdout: Logical volume "osd-block-07ec31da-a1ba-48ee-a833-37a6eba0615f" created.
[ceph205][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph205][WARNIN] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph205][WARNIN] Running command: restorecon /var/lib/ceph/osd/ceph-0
[ceph205][WARNIN] Running command: chown -h ceph:ceph /dev/ceph-5605d7c5-8a23-4e96-ba0d-6633fcdfde09/osd-block-07ec31da-a1ba-48ee-a833-37a6eba0615f
[ceph205][WARNIN] Running command: chown -R ceph:ceph /dev/dm-2
[ceph205][WARNIN] Running command: ln -s /dev/ceph-5605d7c5-8a23-4e96-ba0d-6633fcdfde09/osd-block-07ec31da-a1ba-48ee-a833-37a6eba0615f /var/lib/ceph/osd/ceph-0/block
[ceph205][WARNIN] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph205][WARNIN] stderr: got monmap epoch 1
[ceph205][WARNIN] Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQBipC5i8TYVNBAAmu4r3BrEFQF+gdSB5VgM9w==
[ceph205][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph205][WARNIN] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQBipC5i8TYVNBAAmu4r3BrEFQF+gdSB5VgM9w== with 0 caps)
[ceph205][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph205][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph205][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 07ec31da-a1ba-48ee-a833-37a6eba0615f --setuser ceph --setgroup ceph
[ceph205][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph205][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph205][WARNIN] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-5605d7c5-8a23-4e96-ba0d-6633fcdfde09/osd-block-07ec31da-a1ba-48ee-a833-37a6eba0615f --path /var/lib/ceph/osd/ceph-0
[ceph205][WARNIN] Running command: ln -snf /dev/ceph-5605d7c5-8a23-4e96-ba0d-6633fcdfde09/osd-block-07ec31da-a1ba-48ee-a833-37a6eba0615f /var/lib/ceph/osd/ceph-0/block
[ceph205][WARNIN] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph205][WARNIN] Running command: chown -R ceph:ceph /dev/dm-2
[ceph205][WARNIN] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph205][WARNIN] Running command: systemctl enable ceph-volume@lvm-0-07ec31da-a1ba-48ee-a833-37a6eba0615f
[ceph205][WARNIN] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-07ec31da-a1ba-48ee-a833-37a6eba0615f.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph205][WARNIN] Running command: systemctl enable --runtime ceph-osd@0
[ceph205][WARNIN] stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph205][WARNIN] Running command: systemctl start ceph-osd@0
[ceph205][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph205][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph205][INFO ] checking OSD status...
[ceph205][DEBUG ] find the location of an executable
[ceph205][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph205 is now ready for osd use.

Check the OSD status

From the management node:

[cephde@ceph205 cephcluster]$ ceph-deploy osd list ceph205
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy osd list ceph205
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa82a791320>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ['ceph205']
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fa82a7c5848>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph205][DEBUG ] connection detected need for sudo
[ceph205][DEBUG ] connected to host: ceph205
[ceph205][DEBUG ] detect platform information from remote host
[ceph205][DEBUG ] detect machine type
[ceph205][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph205...
[ceph205][DEBUG ] find the location of an executable
[ceph205][INFO ] Running command: sudo /usr/sbin/ceph-volume lvm list
[ceph205][DEBUG ]
[ceph205][DEBUG ]
[ceph205][DEBUG ] ====== osd.0 =======
[ceph205][DEBUG ]
[ceph205][DEBUG ] [block] /dev/ceph-5605d7c5-8a23-4e96-ba0d-6633fcdfde09/osd-block-07ec31da-a1ba-48ee-a833-37a6eba0615f
[ceph205][DEBUG ]
[ceph205][DEBUG ] type block
[ceph205][DEBUG ] osd id 0
[ceph205][DEBUG ] cluster fsid b6904020-ab52-497b-9078-c130310853bb
[ceph205][DEBUG ] cluster name ceph
[ceph205][DEBUG ] osd fsid 07ec31da-a1ba-48ee-a833-37a6eba0615f
[ceph205][DEBUG ] encrypted 0
[ceph205][DEBUG ] cephx lockbox secret
[ceph205][DEBUG ] block uuid rP8tcN-oDS7-ALLN-S2iA-wW8F-nIcz-m73oWh
[ceph205][DEBUG ] block device /dev/ceph-5605d7c5-8a23-4e96-ba0d-6633fcdfde09/osd-block-07ec31da-a1ba-48ee-a833-37a6eba0615f
[ceph205][DEBUG ] vdo 0
[ceph205][DEBUG ] crush device class None
[ceph205][DEBUG ] devices /dev/sdb

Also from the management node, check the OSD stat and tree:

[cephde@ceph205 cephcluster]$ sudo ceph osd stat
1 osds: 1 up, 1 in
[cephde@ceph205 cephcluster]$ sudo ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.09769 root default
-3 0.09769 host ceph205
0 hdd 0.09769 osd.0 up 1.00000 1.00000

Check capacity and usage from the management node:

[cephde@ceph205 cephcluster]$ sudo ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
100GiB 99.0GiB 1.00GiB 1.00
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS

On the OSD node:

[cephde@ceph205 cephcluster]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 100G 0 disk
└─ceph--5605d7c5--8a23--4e96--ba0d--6633fcdfde09-osd--block--07ec31da--a1ba--48ee--a833--37a6eba0615f
253:2 0 100G 0 lvm
sr0 11:0 1 1024M 0 rom

Each ceph-osd process has its own numeric ID, assigned according to the order in which the OSDs were started:

[cephde@ceph205 cephcluster]$ systemctl status ceph-osd@0
● ceph-osd@0.service - Ceph object storage daemon osd.0
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
Active: active (running) since Sun 2022-03-13 22:11:51 EDT; 3min 30s ago
Process: 23971 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 23976 (ceph-osd)
CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
└─23976 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

OSD process ports; alternatively: ps aux | grep osd | grep -v grep

[cephde@ceph205 cephcluster]$ sudo netstat -tunlp | grep osd
tcp 0 0 192.168.1.205:6801 0.0.0.0:* LISTEN 23976/ceph-osd
tcp 0 0 192.168.1.205:6802 0.0.0.0:* LISTEN 23976/ceph-osd
tcp 0 0 192.168.1.205:6803 0.0.0.0:* LISTEN 23976/ceph-osd
tcp 0 0 192.168.1.205:6804 0.0.0.0:* LISTEN 23976/ceph-osd

Or log in to the mgr dashboard: http://192.168.1.205:7000
(screenshot: Ceph mgr dashboard)


Ceph-Luminous版本安装
http://maitianxin.github.io/2022/03/14/other/ceph/
作者
Matianxin
发布于
2022年3月14日
许可协议