Ceph benchmarking

Clear the system caches before testing:

sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

Write performance of a single OSD disk:

# drop the page cache, dentries and inodes
echo 3 > /proc/sys/vm/drop_caches
# run the write test
dd if=/dev/zero of=/var/lib/ceph/osd/lrr01 bs=1G count=1 oflag=direct

Concurrent write performance of two OSDs:

for i in `mount | grep osd | awk '{print $3}'`; do (dd if=/dev/zero of=$i/lrr01 bs=1G count=1 oflag=direct &); done

Read performance of a single OSD disk:

dd if=/var/lib/ceph/osd/lrr01 of=/dev/null bs=2G count=1 iflag=direct

Concurrent read performance of two OSDs:

for i in `mount | grep osd | awk '{print $3}'`; do (dd if=$i/lrr01 of=/dev/null bs=1G count=1 iflag=direct &); done

3.1 rados bench

rados bench is Ceph's built-in, pool-based benchmark. It measures performance at the storage-pool level, so the numbers look better than what you would measure from a client, because the replication factor is taken out of the picture.

Note that only three modes are supported: sequential write, sequential read and random read.

Command syntax:

# defaults: block/object size (-b) = 4 MB, concurrent operations (-t) = 16
rados bench -p <pool_name> <seconds> <write|seq|rand> -b <block_size> -t <concurrent_ops> --no-cleanup
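
If you would rather not benchmark against an existing pool, you can create a throwaway pool and point the benchmark at it. A minimal sketch; the pool name, PG count and the 4 KB / 32-thread values are only examples:

# create a dedicated test pool (name and PG count are examples)
ceph osd pool create testbench 128 128
# 60 s of 4 KB writes with 32 concurrent ops, keeping the objects for the later read tests
rados bench -p testbench 60 write -b 4096 -t 32 --no-cleanup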

3.1.1 Sequential write

rados bench -p rbd 60 write --no-cleanup

Output:

Total time run:         60.162678
Total writes made:      8574
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     570.054
Stddev Bandwidth:       305.605
Max bandwidth (MB/sec): 864
Min bandwidth (MB/sec): 0
Average IOPS:           142
Stddev IOPS:            76
Max IOPS:               216
Min IOPS:               0
Average Latency(s):     0.11226
Stddev Latency(s):      0.271863
Max latency(s):         3.78996
Min latency(s):         0.0239517

3.1.2 Sequential read

rados bench -p rbd 60 seq

Output:

Total time run:       23.059467
Total reads made:     8574
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   1487.28
Average IOPS:         371
Stddev IOPS:          17
Max IOPS:             399
Min IOPS:             338
Average Latency(s):   0.0422848
Max latency(s):       0.485136
Min latency(s):       0.00517209

3.1.3 Random read

rados bench -p rbd 60 rand

Output:

Total time run:       60.047550
Total reads made:     22582
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   1504.27
Average IOPS:         376
Stddev IOPS:          14
Max IOPS:             424
Min IOPS:             342
Average Latency(s):   0.0417499
Max latency(s):       0.486202
Min latency(s):       0.00513183

After the test, delete the benchmark objects from the pool:

rados -p rbd cleanup
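
If a dedicated pool was created only for benchmarking (such as the testbench pool sketched above), it can instead be dropped entirely; the monitors must allow pool deletion (mon_allow_pool_delete = true):

ceph osd pool delete testbench testbench --yes-i-really-really-mean-it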

3.2 rados load-gen

Parameters:

--num-objects           number of objects to create for the test; default 200
--min-object-size       minimum object size; default 1 KB; unit: bytes
--max-object-size       maximum object size; default 5 GB; unit: bytes
--min-op-len            minimum I/O size of the generated load; default 1 KB; unit: bytes
--max-op-len            maximum I/O size of the generated load; default 2 MB; unit: bytes
--max-ops               maximum number of outstanding I/Os, i.e. the iodepth
--target-throughput     cap on the cumulative throughput of submitted I/O; default 5 MB/s; unit: B/s
--max-backlog           cap on the throughput of a single submission of I/O; default 10 MB/s; unit: B/s
--read-percent          percentage of reads in the mixed workload; default 80; range [0, 100]
--run-length            test duration; default 60 s; unit: seconds

A typical invocation. The overall flow is much like rados bench, but richer test semantics are supported:

# 4 KB writes, iodepth=64, throughput effectively unlimited (caps raised to 100 MB/s)
rados -p rbd load-gen --num-objects 128 --min-object-size 8192 --max-object-size 8192 --run-length 20 --read-percent 0 --min-op-len 4096 --max-op-len 4096 --target-throughput 104857600 --max-backlog 104857600 --max-ops 64
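
A mixed-workload variant can be built from the same options, e.g. 70% reads / 30% writes with 64 KB I/Os against 4 MB objects (all values here are illustrative, not recommendations):

rados -p rbd load-gen --num-objects 200 --min-object-size 4194304 --max-object-size 4194304 --min-op-len 65536 --max-op-len 65536 --read-percent 70 --run-length 60 --max-ops 16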

4.1 Testing with rbd bench-write

yum install -y librbd1-devel
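# the disk01 image must already exist; if it does not, create one first (the 10 GiB size is only an example)
rbd create disk01 --pool=rbd --size 10240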
rbd bench-write disk01 --pool=rbd --io-size=4K --io-threads=16 --io-pattern=<seq|rand>   # seq = sequential write, rand = random write

Output:

bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     68432  68457.01  280399929.10
    2    129537  64780.59  265341301.90
    3    195110  65044.78  266423436.34
elapsed:     4  ops:   262144  ops/sec: 60655.53  bytes/sec: 248445034.43
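
On newer Ceph releases bench-write is superseded by rbd bench; assuming the same image and options as above, the equivalent write test would be:

rbd bench --io-type write disk01 --pool=rbd --io-size=4K --io-threads=16 --io-pattern=rand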

4.2 Testing with fio

  • fio can test both CephFS and RBD;

4.2.1 Job file method

  • fio's rbd engine tests an RBD image directly, with no need to map or mount the device;
  • specify the Ceph client name (keyring), the pool and the RBD image name in the job file.

Save the following job parameters as rbd.fio:

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=disk01
direct=1
time_based
runtime=60
size=2g
rw=randwrite
bs=4k
[rbd_iodepth32]
iodepth=32

Run fio with the job file above:

fio rbd.fio
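
The same job can also be launched without a job file by passing the rbd-engine options on the command line; this is a sketch using the option names from the job file above:

fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=disk01 --direct=1 --rw=randwrite --bs=4k --size=2g --time_based --runtime=60 --iodepth=32 --name=rbd_iodepth32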

4.2.2 Command-line method

fio --filename=/mnt/test-file --direct=1 --iodepth=32 --thread --rw=randwrite --ioengine=libaio --bs=4k --size=2G --runtime=60 --group_reporting --name=mytest
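
Here /mnt/test-file is assumed to live on a mounted CephFS (or on a filesystem created on a mapped RBD device). With the CephFS kernel client the mount looks roughly like this; the monitor address, mount point and secret file are placeholders:

sudo mount -t ceph 192.168.1.10:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret
fio --filename=/mnt/test-file --direct=1 --iodepth=32 --thread --rw=randread --ioengine=libaio --bs=4k --size=2G --runtime=60 --group_reporting --name=cephfs-test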

4.2.3 Script method
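
A minimal wrapper-script sketch that sweeps several block sizes using the command-line form above; the test file path, block sizes and runtime are assumptions:

#!/bin/bash
# sweep a few block sizes over the same random-write test
TESTFILE=/mnt/test-file          # assumption: sits on a mounted CephFS or mapped RBD
for BS in 4k 64k 1m; do
    echo "=== bs=${BS} ==="
    fio --filename=${TESTFILE} --direct=1 --iodepth=32 --thread \
        --rw=randwrite --ioengine=libaio --bs=${BS} --size=2G \
        --runtime=60 --group_reporting --name=ceph-bs-${BS}
done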
