
metadata server (ceph mds) reinstall

mds remove

root@mgmt:~/cephcluster# ceph mds fail 0

root@mgmt:~/cephcluster# ceph mds rm 0
mds gid 0 dne

root@mgmt:~/cephcluster# ceph mds stat
e51:

root@mds:~# sudo apt-get purge ceph-fuse ceph-mds libcephfs1 -y

root@mgmt:~/cephcluster# ceph-deploy install mds
root@mgmt:~/cephcluster# ceph-deploy admin mds
root@mgmt:~/cephcluster# ceph-deploy mds create mds

Every 2.0s: ceph -s Mon Nov 28 11:36:08 2016

cluster b69c3ee5-3bbb-4dd4-a885-54e43399e3da
health HEALTH_OK
monmap e1: 3 mons at {mon0-217-20=172.16.217.20:6789/0,mon1-217-21=172.16.217.21:6789/0,mon2-217-22=172.16.217.22:6789/0}
election epoch 402, quorum 0,1,2 mon0-217-20,mon1-217-21,mon2-217-22
osdmap e937: 15 osds: 15 up, 15 in
flags sortbitwise
pgmap v74980: 0 pgs, 0 pools, 0 bytes data, 0 objects
793 MB used, 86432 GB / 86432 GB avail

Here is the problem: up to this point `ceph -s` printed normal output, but after creating the metadata and data pools…
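The pool creation that triggered this likely looked something like the following sketch. The pool names and PG counts here are assumptions for illustration, not taken from the original post:

```shell
# Create the data and metadata pools for CephFS
# (pool names and PG counts are example values)
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

# Tie them together into a filesystem (available on Jewel-era releases)
ceph fs new cephfs cephfs_metadata cephfs_data
```

With too few OSD hosts for the CRUSH rule, or a stale CRUSH map, the new pools' PGs can never peer, which is what surfaces as `HEALTH_ERR` below.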

Every 2.0s: ceph -s

cluster asdf343r-23rf-4ffw-df4f-124adsf3a
health HEALTH_ERR

???
was the status message it printed.

`ceph mds stat` stayed stuck at "creating……."

`ceph health detail` showed "stuck inactive" for the PGs of those pools.
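The diagnosis step above can be reproduced with the standard health commands; these are real ceph CLI subcommands, shown here as a sketch of how to narrow down which PGs are affected:

```shell
# Expanded health report, including per-PG warnings
ceph health detail

# List only the PGs that are stuck in the inactive state
ceph pg dump_stuck inactive
```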

The fix is very simple.

Run `mon create-initial` again, and then

clean up the crushmap and proceed.
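The crushmap cleanup mentioned above is usually done by exporting, decompiling, editing, and re-injecting the map. This is a sketch of that standard round-trip (file names are arbitrary); what exactly to remove, e.g. stale buckets or rules left over from the old MDS host, depends on your cluster:

```shell
# Export the current (binary) crushmap from the cluster
ceph osd getcrushmap -o crushmap.bin

# Decompile it into an editable text form
crushtool -d crushmap.bin -o crushmap.txt

# ... edit crushmap.txt: remove stale buckets/rules ...

# Recompile and inject the cleaned-up map back into the cluster
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```

Once the PGs can peer under the corrected map, the cluster should return to `HEALTH_OK`.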
Then:
Every 2.0s: ceph -s Mon Nov 28 11:36:08 2016

cluster b69c3ee5-3bbb-4dd4-a885-54e43399e3da
health HEALTH_OK
will be displayed.
