Luminous New Feature: DEVICE CLASS

DEVICE CLASS is a new feature for managing device classes: a specific group of devices can be designated as a class, and that class can then be referenced directly when creating a rule. The same result was achievable before, but only by editing the crushmap and adding the class by hand; with this feature the command line alone is enough.
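For reference, the manual route mentioned above looks roughly like this (a sketch of the old workflow; crushmap.bin and crushmap.txt are placeholder file names):

<code bash>
# Old manual workflow (sketch): export, decompile, edit, recompile and re-inject the crushmap.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt by hand, e.g. tag the device entries with a class:
#   device 0 osd.0 class ssd
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
</code>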

The crush class mechanism gives Ceph a sensible default for its different device types (HDD, SSD, NVMe), so users no longer have to edit and assign them by hand. In effect each group of disks gets a uniform class label; a rule is created from the class, and a pool is created from the rule, and none of these steps requires modifying the crushmap manually.
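In other words the whole flow reduces to three commands, each of which is covered step by step in the sections below (a condensed preview; the OSD IDs, rule name and pool parameters are just examples):

<code bash>
ceph osd crush set-device-class ssd osd.0 osd.9              # label the devices
ceph osd crush rule create-replicated fast default host ssd  # build a rule from the class
ceph osd pool create testpool 64 64 fast                     # build a pool from the rule
</code>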

1.1. Creating a DEVICE CLASS

With the new feature, OSDs are classified automatically when they are created, so there is no need to set the CLASS manually. If the Linux driver misdetects the disk type, the CLASS can be corrected by hand:

ceph osd crush rm-device-class osd.2 osd.3
ceph osd crush set-device-class ssd osd.2 osd.3
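OSDs also report their detected class when they start (controlled by the osd_class_update_on_start option, enabled by default); if a manual override like the one above does not survive restarts, disabling that option is the usual fix. A sketch:

<code bash>
# Keep manually assigned classes across OSD restarts (assumed option:
# osd_class_update_on_start, default true). Restart the OSDs afterwards.
cat >> /etc/ceph/ceph.conf <<'EOF'
[osd]
osd class update on start = false
EOF
</code>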

View the osd tree:

ceph osd tree

The class can also be specified when deploying an OSD, i.e. the OSD created for a given disk is placed directly into the desired class:

<code bash>
ceph-disk prepare --crush-device-class <class> /dev/XXX
</code>

For example, to create 3 HDD OSDs and 2 SSD OSDs:

<code bash>
for i in b c d; do ceph-disk prepare --crush-device-class hdd /dev/sd${i}; done
for i in e f; do ceph-disk prepare --crush-device-class ssd /dev/sd${i}; done
</code>
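ceph-disk is being phased out in releases after Luminous; its replacement, ceph-volume, accepts the same flag. A sketch assuming the lvm backend and a recent ceph-volume version:

<code bash>
# Equivalent deployment with ceph-volume; --crush-device-class assigns the
# class at OSD creation time, just like the ceph-disk examples above.
ceph-volume lvm create --data /dev/sdb --crush-device-class hdd
ceph-volume lvm create --data /dev/sde --crush-device-class ssd
</code>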

1.2. Viewing DEVICE CLASSes

# List the classes
ceph osd crush class ls
# List the OSDs that belong to the ssd class
ceph osd crush class ls-osd ssd
# Rename a class
ceph osd crush class rename ..
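For the cluster shown in the tree below, the listing commands produce output along these lines (illustrative; exact formatting may differ between minor versions):

<code bash>
ceph osd crush class ls
# [
#     "hdd",
#     "nvme",
#     "ssd"
# ]
ceph osd crush class ls-osd ssd
# 0
# 9
# 11
# ...
</code>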

Note that the second column, CLASS, shows the new feature:

$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME      STATUS REWEIGHT PRI-AFF 
-1       83.17899 root default                           
-4       23.86200     host cpach                         
 2   hdd  1.81898         osd.2      up  1.00000 1.00000 
 3   hdd  1.81898         osd.3      up  1.00000 1.00000 
 4   hdd  1.81898         osd.4      up  1.00000 1.00000 
 5   hdd  1.81898         osd.5      up  1.00000 1.00000 
 6   hdd  1.81898         osd.6      up  1.00000 1.00000 
 7   hdd  1.81898         osd.7      up  1.00000 1.00000 
 8   hdd  1.81898         osd.8      up  1.00000 1.00000 
15   hdd  1.81898         osd.15     up  1.00000 1.00000 
10  nvme  0.93100         osd.10     up  1.00000 1.00000 
 0   ssd  0.93100         osd.0      up  1.00000 1.00000 
 9   ssd  0.93100         osd.9      up  1.00000 1.00000 
11   ssd  0.93100         osd.11     up  1.00000 1.00000 
12   ssd  0.93100         osd.12     up  1.00000 1.00000 
13   ssd  0.93100         osd.13     up  1.00000 1.00000 
14   ssd  0.93100         osd.14     up  1.00000 1.00000 
16   ssd  0.93100         osd.16     up  1.00000 1.00000 
17   ssd  0.93100         osd.17     up  1.00000 1.00000 
18   ssd  0.93100         osd.18     up  1.00000 1.00000 
...

CRUSH rules can restrict placement to a specific device class.
For example, we can trivially create a “fast” pool that distributes data only over SSDs (with a failure domain of host) with the command:

# ceph osd crush rule create-replicated <rule-name> <root> <failure-domain-type> <device-class>
ceph osd crush rule create-replicated fast default host ssd
# List the CRUSH rules
ceph osd crush rule ls
# Create a pool that uses the fast rule
ceph osd pool create testpool 64 64 fast
# List the pools
ceph osd pool ls
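An existing pool can also be moved onto the new rule instead of creating a fresh one; Ceph then rebalances its data onto the matching (here: ssd) OSDs. A short sketch using the testpool created above:

<code bash>
# Switch an existing pool to the SSD-only rule and verify the change.
ceph osd pool set testpool crush_rule fast
ceph osd pool get testpool crush_rule
</code>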

The process for creating erasure code rules 1) is slightly different. First, you create an erasure code profile that includes a property for your desired device class.

# Create an erasure code profile that uses the device CLASS
ceph osd erasure-code-profile set myprofile k=4 m=2 crush-device-class=ssd crush-failure-domain=host
# Then use that profile when creating the erasure coded pool.
ceph osd pool create ecpool 64 erasure myprofile
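The profile and the CRUSH rule generated by the pool creation can be checked afterwards; a short verification sketch (the generated rule is typically named after the pool):

<code bash>
# Confirm the device class and failure domain stored in the profile.
ceph osd erasure-code-profile get myprofile
# Creating the erasure-coded pool also adds a matching CRUSH rule; list them.
ceph osd crush rule ls
</code>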


1)
erasure code: erasure coding (纠错码) in Ceph