
ceph map failed: (5) Input/output error

This is a different error from the rbd: map failed: (6) No such device or address case covered in the previous post, so do not confuse the two fixes!

OS: Ubuntu 16.04 64bit
Ceph version: Jewel

Error output

root@CIFS2:/# rbd map backup/backup
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (5) Input/output error

ceph status

Every 2.0s: ceph -s Wed Aug 17 10:35:51 2016

cluster e4b08efd-7b6c-43f8-8bd3-588f6a9cb51f
health HEALTH_OK
monmap e2: 3 mons at {mon-0=111.111.111.111:6789/0,mon-1=111.111.111.112:6789/0,mon-2=111.111.111.113:6789/0}
election epoch 6, quorum 0,1,2 mon-0,mon-1,mon-2
fsmap e5: 1/1/1 up {0=mds=up:active}
osdmap e48: 7 osds: 7 up, 7 in
flags sortbitwise
pgmap v209: 576 pgs, 4 pools, 1253 kB data, 24 objects
274 MB used, 78216 GB / 78217 GB avail
576 active+clean

Info for the created image

root@CIFS2:/# rbd info backup/backup
rbd image 'backup':
size 20000 GB in 5120000 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.2b03238e1f29
format: 2
features: layering
flags
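
For context: this image only has the layering feature enabled, so the failure is not the common Jewel situation where an image is created with the newer default features (exclusive-lock, object-map, fast-diff, deep-flatten) that the kernel rbd client cannot handle. If it were, the usual workaround would look roughly like this (for reference only; not needed for this image):

root@CIFS2:/# rbd feature disable backup/backup deep-flatten fast-diff object-map exclusive-lock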

root@CIFS2:/# rbd map backup/backup
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (5) Input/output error

The dmesg output is shown below.

...(snip)
[84895.358921] Key type ceph registered
[84895.359291] libceph: loaded (mon/osd proto 15/24)
[84895.375775] rbd: loaded rbd (rados block device)
[84895.377830] libceph: mon0 111.111.111.23:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[84895.377911] libceph: mon0 111.111.111.23:6789 socket error on read
[84905.412494] libceph: mon2 111.111.111.25:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[84905.412569] libceph: mon2 111.111.111.25:6789 socket error on read
[84915.435733] libceph: mon1 111.111.111.24:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[84915.435807] libceph: mon1 111.111.111.24:6789 socket error on read
[84925.459834] libceph: mon1 111.111.111.24:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[84925.459908] libceph: mon1 111.111.111.24:6789 socket error on read
[84935.484988] libceph: mon2 111.111.111.25:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[84935.485062] libceph: mon2 111.111.111.25:6789 socket error on read
[84945.509158] libceph: mon2 111.111.111.25:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[84945.509232] libceph: mon2 111.111.111.25:6789 socket error on read
[85039.013245] libceph: mon1 111.111.111.24:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[85039.013324] libceph: mon1 111.111.111.24:6789 socket error on read
[85049.048972] libceph: mon2 111.111.111.25:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[85049.049047] libceph: mon2 111.111.111.25:6789 socket error on read
[85059.072941] libceph: mon0 111.111.111.23:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[85059.073016] libceph: mon0 111.111.111.23:6789 socket error on read
[85069.097820] libceph: mon2 111.111.111.25:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[85069.097894] libceph: mon2 111.111.111.25:6789 socket error on read
[85079.121986] libceph: mon2 111.111.111.25:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
[85079.122060] libceph: mon2 111.111.111.25:6789 socket error on read
[85089.146169] libceph: mon2 111.111.111.25:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
...(snip)

For reference, the MDS log shows the MDS itself recovered and went active:

2016-08-17 10:08:32.318573 7f5c6300b700 1 mds.0.4 recovery_done -- successful recovery!
2016-08-17 10:08:32.318628 7f5c6300b700 1 mds.0.4 active_start
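
The important line is "feature set mismatch ... missing 20000000000". That value is a feature-bit mask: 0x20000000000 is bit 41, which in the Ceph feature table corresponds to CEPH_FEATURE_CRUSH_TUNABLES3, i.e. the firefly-era chooseleaf_vary_r CRUSH tunable that this client kernel does not support. That is exactly what the CRUSH map change below addresses. To turn the hex mask into a bit number yourself, a throwaway one-liner works (python3 assumed to be installed on the client):

root@CIFS2:/# python3 -c 'print((0x20000000000).bit_length() - 1)'
41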

----------------------------

Proceed with troubleshooting.

[Applying the fix]
root@client:/# ceph osd getcrushmap -o /tmp/crush
got crush map from osdmap epoch 102
root@client:/# crushtool -i /tmp/crush --set-chooseleaf-vary-r 0 -o /tmp/crush.new
root@client:/# ceph osd setcrushmap -i /tmp/crush.new
set crush map
root@client:/# ceph osd getcrushmap -o /tmp/crush
got crush map from osdmap epoch 105
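
To confirm the downgraded tunable is now active on the cluster, the live tunables can be queried (output trimmed; exact JSON formatting may differ by version):

root@client:/# ceph osd crush show-tunables | grep chooseleaf_vary_r
    "chooseleaf_vary_r": 0,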

[Status check]

Every 2.0s: ceph -s Tue Aug 16 09:47:47 2016

cluster 23a9b68c-4caf-4bef-991a-79b3afd7bf53
health HEALTH_WARN
crush map has legacy tunables (require bobtail, min is firefly)
monmap e2: 3 mons at {mon-0=111.111.111.23:6789/0,mon-1=111.111.111.24:6789/0,mon-2=111.111.111.25:6789/0}
election epoch 32, quorum 0,1,2 mon-0,mon-1,mon-2
fsmap e28: 1/1/1 up {0=mds=up:active}
osdmap e105: 7 osds: 7 up, 7 in
flags sortbitwise
pgmap v983: 1664 pgs, 8 pools, 504 kB data, 195 objects
326 MB used, 78216 GB / 78217 GB avail
1664 active+clean

-- A second issue appears.

The cluster now warns crush map has legacy tunables (require bobtail, min is firefly); this was resolved as follows.

-- Fixing the second issue
indra@sc-test-nfs-01:~$ ceph osd crush tunables optimal
adjusted tunables profile to optimal
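
To double-check which tunables profile is now active, the same show-tunables query can be used (output trimmed and illustrative for a Jewel cluster set to optimal):

indra@sc-test-nfs-01:~$ ceph osd crush show-tunables | grep profile
    "profile": "jewel",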

[Status check]

Every 2.0s: ceph -s Tue Aug 16 09:59:21 2016

cluster 23a9b68c-4caf-4bef-991a-79b3afd7bf53
health HEALTH_OK
monmap e2: 3 mons at {mon-0=111.111.111.23:6789/0,mon-1=111.111.111.24:6789/0,mon-2=111.111.111.25:6789/0}
election epoch 32, quorum 0,1,2 mon-0,mon-1,mon-2
fsmap e28: 1/1/1 up {0=mds=up:active}
osdmap e113: 7 osds: 7 up, 7 in
flags sortbitwise
pgmap v1034: 1664 pgs, 8 pools, 504 kB data, 195 objects
360 MB used, 78216 GB / 78217 GB avail
1664 active+clean

Run rbd map again

root@CIFS2:/# rbd map backup/backup
/dev/rbd0
The image now maps normally and returns the device path as expected.
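
As a final check, the mapped device can be listed (output illustrative):

root@CIFS2:/# rbd showmapped
id pool   image  snap device
0  backup backup -    /dev/rbd0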
