
Troubleshooting right after installing Ceph


Topic: the HEALTH_WARN health status in Ceph

[Status]

root@mgmt:/var/run# ceph -w
cluster sda23r4ds-123d-3fd0-c342-2dasd124sd
health HEALTH_WARN
clock skew detected on mon.mon-2
21 pgs degraded
21 pgs stuck degraded
21 pgs stuck unclean
21 pgs stuck undersized
21 pgs undersized
Monitor clock skew detected
monmap e1: 2 mons at {mon-1=xxx.xxx.xxx.1:6789/0,mon-2=xxx.xxx.xxx.2:6789/0}
election epoch 4, quorum 0,1 mon-1,mon-2
osdmap e27: 4 osds: 3 up, 3 in; 9 remapped pgs
pgmap v647: 64 pgs, 1 pools, 0 bytes data, 0 objects
110 MB used, 20014 GB / 20014 GB avail
43 active+clean
21 active+undersized+degraded

2015-10-16 10:12:19.455840 mon.0 [WRN] mon.1 xxx.xxx.xxx.2:6789/0 clock skew 0.18373s > max 0.05s

===========================

15:14:24.493650 mon.0 [INF] HEALTH_WARN; clock skew detected on mon.mon-2; 30 pgs degraded; 19 pgs stuck degraded; 30 pgs stuck unclean; 19 pgs stuck undersized; 30 pgs undersized; Monitor clock skew detected

11:00:00.000269 mon.0 [INF] HEALTH_WARN; 30 pgs degraded; 30 pgs stuck degraded; 43 pgs stuck unclean; 30 pgs stuck undersized; 30 pgs undersized

Monitor clock skew detected
(resolved by synchronizing the clocks)
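The "max 0.05s" in the warning above comes from the monitor option mon_clock_drift_allowed, which defaults to 0.05 seconds. Syncing the clocks is the proper fix, but as a stopgap the threshold itself can be raised. A minimal sketch, assuming the usual /etc/ceph/ceph.conf layout (the 0.2 value is just an example):

[mon]
mon clock drift allowed = 0.2    ; default 0.05; restart the mons after changing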

Fix:

root@mon-1:~# sudo ntpdate ntp.postech.ac.kr
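Note that ntpdate is a one-shot sync, so the skew can creep back after a reboot. A sketch for keeping the clocks synced permanently, assuming the Ubuntu ntp package and reusing the same server as above:

root@mon-1:~# sudo apt-get install -y ntp
root@mon-1:~# echo "server ntp.postech.ac.kr iburst" >> /etc/ntp.conf
root@mon-1:~# sudo service ntp restart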

After synchronizing the time, restart the mon daemon on each node.

root@mon-1:~# sudo stop ceph-mon-all
ceph-mon-all stop/waiting
root@mon-1:~# sudo start ceph-mon-all
ceph-mon-all start/running
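To double-check that both mons rejoined quorum after the restart, the quorum state can be queried directly. A quick sketch using standard ceph commands (the daemon command must run on the node hosting that mon):

root@mon-1:~# ceph quorum_status --format json-pretty | grep quorum_names
root@mon-1:~# ceph daemon mon.mon-1 mon_status | grep state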

Then verify:

root@mgmt:/var/run# ceph -w
cluster sda23r4ds-123d-3fd0-c342-2dasd124sd
health HEALTH_WARN
21 pgs degraded
21 pgs stuck degraded
21 pgs stuck unclean
21 pgs stuck undersized
21 pgs undersized
monmap e1: 2 mons at {mon-1=xxx.xxx.xxx.1:6789/0,mon-2=xxx.xxx.xxx.2:6789/0}
election epoch 6, quorum 0,1 mon-1,mon-2
osdmap e27: 4 osds: 3 up, 3 in; 9 remapped pgs
pgmap v647: 64 pgs, 1 pools, 0 bytes data, 0 objects
110 MB used, 20014 GB / 20014 GB avail
43 active+clean
21 active+undersized+degraded

2015-10-16 10:18:23.399155 mon.0 [INF] osdmap e27: 4 osds: 3 up, 3 in

======================================================

[INF] HEALTH_WARN; clock skew detected on mon.mon-2; 21 pgs degraded; 21 pgs stuck degraded; 21 pgs stuck unclean; 21 pgs stuck undersized; 21 pgs undersized;
Resolving this error.

Check the current values:

root@mgmt:~# ceph osd dump | grep ^pool
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
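These values explain the warnings above: the pool wants 3 replicas (size 3), but one of the 4 OSDs is down, so PGs that had a copy on it are short one replica and report undersized+degraded; min_size 2 additionally means a PG needs at least two live copies to serve I/O. To list the affected PGs, the standard commands work; a quick sketch:

root@mgmt:~# ceph health detail | head
root@mgmt:~# ceph pg dump_stuck unclean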

Some of the default settings are not suited to actual use and have to be adjusted manually. Change the following values.

All pools currently have these properties set (see the Ceph documentation for more details):

size = 3
min_size = 2
pg_num = 64
pgp_num = 64
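For sizing pg_num, a common rule of thumb from the Ceph docs is (number of OSDs × 100) / replica size, rounded to a nearby power of two. A tiny sketch with this cluster's numbers (4 OSDs, size 3), which lands on the 128 used below:

# Rule-of-thumb PG count: OSDs * 100 / replica size,
# then pick the nearest power of two.
osds=4; size=3
echo $(( osds * 100 / size ))   # prints 133 -> nearest power of two is 128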

For each pool, decrease min_size to 1 and increase pg_num and pgp_num to 128:

1. ceph osd pool set rbd min_size 1
2. ceph osd pool set rbd pg_num 128
3. ceph osd pool set rbd pgp_num 128

Wait a few seconds after running step 2. If you run step 3 right away, you will see an error like the one below. Don't panic; it just takes some time for the change to apply.

root@mgmt:~# ceph osd pool set rbd pgp_num 128
Error EBUSY: currently creating pgs, wait
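Rather than retrying by hand, a small loop also works, since the command exits non-zero while the EBUSY condition lasts. A minimal bash sketch (the 5-second interval is arbitrary):

# Retry pgp_num until the monitor finishes creating the new PGs.
until ceph osd pool set rbd pgp_num 128; do
    echo "PGs still creating, retrying in 5s..."
    sleep 5
done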

After applying:

Watching ceph -w, you can see messages like the following being printed:

2015-10-19 15:11:43.348202 osd.3 [INF] 0.34 scrub starts
2015-10-19 15:11:43.349341 osd.3 [INF] 0.34 scrub ok
2015-10-19 15:11:44.348448 osd.3 [INF] 0.35 scrub starts
2015-10-19 15:11:44.349705 osd.3 [INF] 0.35 scrub ok
2015-10-19 15:11:46.348916 osd.3 [INF] 0.36 scrub starts
2015-10-19 15:11:46.350185 osd.3 [INF] 0.36 scrub ok
2015-10-19 15:11:51.653279 mon.0 [INF] pgmap v823: 128 pgs: 30 active+undersized+degraded, 13 active+remapped, 85 active+clean; 0 bytes data, 112 MB used, 20014 GB / 20014 GB avail

* You can confirm that the change has been applied on every node.
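The degraded/undersized counts should keep shrinking as recovery proceeds. The new pool values can also be confirmed directly; the pg_num line below is the expected output format:

root@mgmt:~# ceph osd pool get rbd pg_num
pg_num: 128
root@mgmt:~# ceph health detail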
==================================================================
