Deployment methods:
- Cephadm installs and manages a Ceph cluster that uses containers and systemd and is tightly integrated with the CLI and the dashboard GUI.
- cephadm only supports Octopus and newer releases.
- cephadm is fully integrated with the orchestration API and fully supports the CLI and dashboard features used to manage cluster deployment.
- cephadm requires container support (in the form of Podman or Docker) and Python 3.
- Rook deploys and manages Ceph clusters running in Kubernetes, while also enabling management and provisioning of storage resources through the Kubernetes API. Rook is the recommended way to run Ceph in Kubernetes or to connect an existing Ceph storage cluster to Kubernetes.
- Rook only supports Nautilus and newer releases of Ceph.
- Rook is the preferred method for running Ceph on Kubernetes, or for connecting a Kubernetes cluster to an existing (external) Ceph cluster.
- Rook supports the orchestrator API. Management features in the CLI and dashboard are fully supported.
- ceph-ansible deploys and manages Ceph clusters using Ansible.
- ceph-deploy is a tool that can be used to quickly deploy a cluster. (Not recommended.)
- ceph-deploy is not actively maintained. It is not tested on releases newer than Nautilus, and it does not support RHEL 8, CentOS 8, or newer operating systems.
- DeepSea installs Ceph with Salt.
- jaas.ai/ceph-mon installs Ceph with Juju.
- github.com/openstack/puppet-ceph installs Ceph via Puppet.
- Ceph can also be installed manually.
This cluster build uses CentOS 8 Stream.
ISO image: https://mirrors.163.com/centos/8-stream/isos/x86_64
yum repositories:
https://mirrors.163.com/centos/8-stream/BaseOS
https://mirrors.163.com/centos/8-stream/AppStream
Ceph repositories:
https://mirrors.163.com/ceph/rpm-pacific/el8/x86_64
https://mirrors.163.com/ceph/rpm-pacific/el8/noarch
EPEL repository:
https://mirrors.aliyun.com/epel/8/Everything/x86_64
Environment preparation
Host        | Public network IP | Cluster network IP
ceph-node1  | 192.168.123.105   | 192.168.10.101
ceph-node2  | 192.168.123.106   | 192.168.10.102
ceph-node3  | 192.168.123.107   | 192.168.10.103
ceph-client | 192.168.123.108   | 192.168.10.104
1. System initialization (perform on all three nodes and the client)
1.1 Check the Linux system version
cat /etc/redhat-release
Output: CentOS Stream release 8
1.2 Configure the hostname and edit the hosts file for internal name resolution.
node1: hostnamectl set-hostname ceph-node1
node2: hostnamectl set-hostname ceph-node2
node3: hostnamectl set-hostname ceph-node3
client: hostnamectl set-hostname ceph-client
1.3 After the change, verify it with:
cat /etc/hostname
1.4 With the hostnames set, configure the hosts file for name resolution; write this file on all three nodes and the client.
#vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.123.105 ceph-node1
192.168.123.106 ceph-node2
192.168.123.107 ceph-node3
192.168.123.108 ceph-client
192.168.10.101 ceph-node1
192.168.10.102 ceph-node2
192.168.10.103 ceph-node3
192.168.10.104 ceph-client
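With the hosts file in place on every machine, name resolution can be sanity-checked from any node; a minimal sketch using only the hostnames defined above:
for h in ceph-node1 ceph-node2 ceph-node3 ceph-client; do ping -c 1 $h; done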
1.5 Configure yum repositories
cd /etc/yum.repos.d/
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
#vim ceph.repo
[ceph-noarch]
name=ceph-noarch
baseurl=https://mirrors.163.com/ceph/rpm-pacific/el8/noarch/
enabled=1
gpgcheck=0
[ceph-x86-64]
name=ceph-x86-64
baseurl=https://mirrors.163.com/ceph/rpm-pacific/el8/x86_64/
enabled=1
gpgcheck=0
[epel-x86-64]
name=epel-x86-64
baseurl=https://mirrors.aliyun.com/epel/8/Everything/x86_64/
enabled=1
gpgcheck=0
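After writing the repo files, it can help to rebuild the yum metadata cache so the new repositories take effect; a quick sketch:
yum clean all
yum makecache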
1.6 Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
1.7 Disable SELinux
vim /etc/selinux/config ## change the SELINUX= line to disabled
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Temporarily disable SELinux for the running system:
setenforce 0
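If a non-interactive edit is preferred over vim, the same change can be made with sed (a sketch that assumes the stock SELINUX=enforcing line is present):
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0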
1.8 Configure time synchronization
yum -y install chrony
systemctl enable chronyd
systemctl start chronyd
vim /etc/chrony.conf ## replace the default pool line with the server below
server time1.aliyun.com iburst
systemctl restart chronyd
chronyc sources -v
1.9 Configure passwordless SSH login (perform this step only on the cephadm node, ceph-node1)
ssh-keygen
ssh-copy-id root@192.168.123.106
ssh-copy-id root@192.168.123.107
ssh-copy-id root@192.168.123.108
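To confirm that passwordless login works, each node can be reached without a password prompt; a quick check using the same IPs as above:
for ip in 192.168.123.106 192.168.123.107 192.168.123.108; do ssh root@$ip hostname; done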
1.10 Install Podman and configure a registry mirror
yum -y install podman
podman --version
vim /etc/docker/daemon.json
{
"registry-mirrors": ["http://docker.zhu123.fun"]
}
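Note that Podman itself does not read Docker's /etc/docker/daemon.json. If only Podman is installed, a registry mirror is normally configured in /etc/containers/registries.conf instead; a minimal sketch in the v2 syntax, reusing the same mirror address as above (docker.zhu123.fun is a private example, not a public service):
# vim /etc/containers/registries.conf
[[registry]]
prefix = "docker.io"
location = "docker.io"
[[registry.mirror]]
location = "docker.zhu123.fun"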
2. Deploy the cluster
2.1 Install cephadm
yum -y install cephadm
yum -y install epel-release
yum -y install python3
2.2 Bootstrap the cluster (ceph-node1 serves as the bootstrap node)
cephadm bootstrap --mon-ip 192.168.123.105 --cluster-network 192.168.10.0/24 --initial-dashboard-password redhat --dashboard-password-noupdate
--mon-ip: IP address of the first MON daemon (the bootstrap host)
--cluster-network: subnet to use as the cluster (private) network
--initial-dashboard-password: initial login password for the dashboard
--dashboard-password-noupdate: do not force a password change on first dashboard login
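The bootstrap output prints the dashboard URL (by default https://192.168.123.105:8443) together with the credentials set above. Before ceph-common is installed, the cluster status can already be checked through the containerized shell; a quick sanity check:
cephadm shell -- ceph -s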
2.3 Install the ceph-common package (provides the ceph CLI with tab completion)
yum -y install ceph-common
ceph -v # verify that the ceph CLI works
Note: at this point ceph -s reports the health status HEALTH_WARN. This is normal: the cluster expects three OSDs by default and we have not created any yet.
2.4 Manually add hosts to the cluster
ceph orch host ls ## list the hosts currently in the Ceph cluster
1. Copy the cluster's SSH public key to each node:
ssh-copy-id -f -i /etc/ceph/ceph.pub <user>@<hostname>
2. Add the hosts to the cluster (concrete commands for this cluster are sketched below):
ceph orch host add <hostname> <IP>
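For this cluster, the two generic commands above expand to the following (run from ceph-node1, using the hostnames and IPs from the environment-preparation table):
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node3
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-client
ceph orch host add ceph-node2 192.168.123.106
ceph orch host add ceph-node3 192.168.123.107
ceph orch host add ceph-client 192.168.123.108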
[root@ceph-node1 ~]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph-client 192.168.123.108
ceph-node1 192.168.123.105 _admin
ceph-node2 192.168.123.106
ceph-node3 192.168.123.107
2.5 Label ceph-client with _admin (so this host can also manage the cluster)
ceph orch host label add ceph-client _admin
[root@ceph-node1 ~]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph-client 192.168.123.108 _admin
ceph-node1 192.168.123.105 _admin
ceph-node2 192.168.123.106
ceph-node3 192.168.123.107
scp /etc/ceph/ceph.conf ceph-client:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring ceph-client:/etc/ceph/
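Assuming ceph-common is also installed on ceph-client, the copied config and keyring can be verified from ceph-node1 with a quick remote status check:
ssh ceph-client ceph -s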
2.6 Add OSDs (three methods)
ceph orch device ls # list the devices available for OSDs
HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
ceph-node1 /dev/sda hdd VMware_Virtual_S_00000000000000000001 20.0G Yes 8s ago
ceph-node2 /dev/sda hdd VMware_Virtual_S_00000000000000000001 20.0G Yes 6s ago
ceph-node3 /dev/sda hdd VMware_Virtual_S_00000000000000000001 20.0G Yes 6s ago
If the AVAILABLE column shows Yes, the device can be used to create an OSD.
1. Automatically add all available devices:
ceph orch apply osd --all-available-devices # every available device becomes an OSD
2. Manual expansion:
(1) Turn off the automatic deployment switch:
ceph orch apply osd --all-available-devices --unmanaged=true
(2) Add a device manually (example below):
ceph orch daemon add osd <host>:<device-path>
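For example, using one of the devices reported by ceph orch device ls above:
ceph orch daemon add osd ceph-node1:/dev/sda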
3. Via a YAML service specification (OSD spec), as sketched below.
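A minimal OSD spec might look like the following sketch (the file name osd_spec.yaml and the service_id are arbitrary choices; data_devices: all: true consumes every available device on the matched hosts, equivalent to method 1 but limited to hosts matching the pattern):
# vim osd_spec.yaml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: 'ceph-node*'
spec:
  data_devices:
    all: true
Apply the spec with:
ceph orch apply -i osd_spec.yaml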