ceph mimic manual

  • Number of servers: 3
  • Number of OSDs: 2*3=6, Intel S3510 480G
  • hostname: ceph1, ceph2, ceph3
  • ceph-deploy node: ceph1

@ ALL NODES:

# configure time synchronization
yum install ntpdate -y
echo "* * * * * root ntpdate stdtime.gov.hk > /dev/null 2>&1" >> /etc/crontab
systemctl restart crond
setenforce 0 && iptables -F && service iptables save
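# note: setenforce 0 only lasts until the next reboot; persisting it is an extra step
# not in the original notes (assumes the stock /etc/selinux/config layout)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config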
# install dependencies
yum install xfsprogs python2-pip yum-plugin-priorities -y
# configure the ceph mimic repo
curl -ko /etc/yum.repos.d/ceph.repo https://wiki2.xbits.net:4430/_export/code/storage:ceph:ceph.repo?codeblock=0
cat /etc/yum.repos.d/ceph.repo
# configure /etc/hosts
cat >> /etc/hosts <<'EOF'
10.28.200.28  ceph1
10.28.200.29  ceph2
10.28.200.30  ceph3
EOF

This deployment is done with the ceph-deploy tool.

@ceph-deploy NODE:

# install ceph-deploy
yum install ceph-deploy -y

Configure passwordless SSH from the ceph-deploy node to all other nodes:

# passwordless SSH login to the other Ceph nodes
ssh-keygen
ssh-copy-id ceph1
ssh-copy-id ceph2
ssh-copy-id ceph3
# verify
ssh ceph1 hostname

Create the ceph-deploy configuration directory:1)

mkdir my-cluster
cd my-cluster
# install ceph software
ceph-deploy install --no-adjust-repos ceph1 ceph2 ceph3
# define the new cluster with its initial monitors (writes ceph.conf into this directory)
ceph-deploy new ceph1 ceph2 ceph3

Edit my-cluster/ceph.conf and add:

mon_clock_drift_allowed = 5
mon_clock_drift_warn_backoff = 10
rbd_default_features = 3
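
For reference, after appending these options the [global] section of my-cluster/ceph.conf looks roughly like the sketch below; the fsid is generated by ceph-deploy new and the mon addresses come from /etc/hosts above, so treat the exact values as illustrative:

[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 10.28.200.28,10.28.200.29,10.28.200.30
mon_clock_drift_allowed = 5
mon_clock_drift_warn_backoff = 10
rbd_default_features = 3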

Initialize the monitors:

# create-initial
ceph-deploy mon create-initial
# push the config and admin keyring to all nodes
ceph-deploy admin ceph1 ceph2 ceph3
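
Optionally verify that the three monitors have formed a quorum before continuing:

ceph mon stat
ceph quorum_status --format json-pretty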

ceph1, ceph2 and ceph3 are all OSD nodes; each node has two Intel S3510 480G SSDs.

@OSD NODES:

# wipe all disks that will be used as OSDs (on each OSD node)
wipefs -af /dev/sda /dev/sdb
# create the OSDs (run from the ceph-deploy node)
ceph-deploy osd create --data /dev/sda ceph1
ceph-deploy osd create --data /dev/sdb ceph1
ceph-deploy osd create --data /dev/sda ceph2
ceph-deploy osd create --data /dev/sdb ceph2
ceph-deploy osd create --data /dev/sda ceph3
ceph-deploy osd create --data /dev/sdb ceph3
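
Optionally confirm that all 6 OSDs are up and in before moving on:

ceph osd tree
ceph -s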

@MGR NODES:

ceph-deploy mgr create ceph1
ceph -s

Add the following to my-cluster/ceph.conf:

[mon]
mgr initial modules = dashboard
# push to all nodes
ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3
# the dashboard needs an SSL certificate
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
# configure the login credentials
ceph dashboard set-login-credentials admin admin

Once this is done, log in to the new Mimic dashboard at: https://mgr_node:8443/
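
If the exact node or port is in doubt, the active mgr reports the dashboard endpoint:

ceph mgr services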

@ALL NODES:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Create a pool and an RBD image:

# create a pool named images with 256 placement groups
ceph osd pool create images 256
ceph osd pool application enable images rbd
ceph osd lspools
# manually create a disk image named disk01 from the command line
rbd -p images create disk01 --size 40G
rbd -p images ls -l
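
To consume disk01 from a client, the image can be mapped through the kernel RBD driver. A minimal sketch, assuming the client has /etc/ceph/ceph.conf and the admin keyring; the /dev/rbd0 device name and mount point are illustrative:

rbd map images/disk01
mkfs.xfs /dev/rbd0
mkdir -p /mnt/disk01
mount /dev/rbd0 /mnt/disk01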

Benchmark:

With 6 OSDs and 2 replicas, 4K I/O reaches 10k IOPS.
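
The benchmark command itself is not recorded here; one common way to measure 4K IOPS against an RBD image is fio with its rbd engine (requires fio built with RBD support; the pool and image names reuse the ones created above, the remaining parameters are illustrative):

fio --name=rbd-4k --ioengine=rbd --clientname=admin --pool=images --rbdname=disk01 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based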

@rgw NODES:

http://docs.ceph.com/docs/mimic/mgr/dashboard/#accessing-the-dashboard

Install the RGW:

ceph-deploy install --rgw --no-adjust-repos ceph1
ceph-deploy rgw create ceph1
netstat -antlp|grep 7480

Verify the service status via http://rgw_node:7480
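
As a quick sanity check, an anonymous request against the gateway should return the default S3 ListAllMyBucketsResult XML document (ceph1 is the RGW node created above):

curl http://ceph1:7480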

7.1. Managing RGW from the dashboard

# create an API user named apiuser from the command line
radosgw-admin user create --uid=apiuser --display-name=apiuser --system
# record the "access_key" and "secret_key" from the output
ceph dashboard set-rgw-api-access-key <access_key>
ceph dashboard set-rgw-api-secret-key <secret_key>

At this point the RGW service can be managed through the dashboard.
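
If the keys were not noted down, they can be printed again at any time:

radosgw-admin user info --uid=apiuser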


1)
The new cluster configuration will be written into this directory