Restoring an OpenStack Ceph volume-backed server from a snapshot

December 23, 2019

When you run volume-backed instances on OpenStack with Ceph as the backend, rebuild (reinstall) is not currently supported for them, so reinstalling a server means the inconvenience of recreating the volume and creating a new instance.

Using Ceph snapshots, however, you can implement an equivalent of the reinstall feature by rolling a volume-backed server back to its initial snapshot.

This restore test was performed with Ceph Nautilus and OpenStack Queens.
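The whole procedure comes down to three steps, shown in detail below: detach the volume, roll the backing RBD image back, and re-attach. As a minimal sketch, using the pool, volume, and snapshot names from this test:

    # Detach first; rollback fails while the volume is attached (see below).
    openstack server remove volume testvm1_0001 volume_0001

    # Roll the backing RBD image back to the chosen snapshot.
    rbd snap rollback queens_pool/volume-21c04de7-8c41-4744-bc95-4fd58c017eaf@jyh-snapshot-29e56686-4203-4a74-9596-21bb97ba73cc

    # Re-attach the restored volume to the instance.
    openstack server add volume testvm1_0001 volume_0001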

  • Check instance and volume status

    root@controller:~# nova list
    +--------------------------------------+--------------+--------+------------+-------------+----------------------------------+
    | ID                                   | Name         | Status | Task State | Power State | Networks                         |
    +--------------------------------------+--------------+--------+------------+-------------+----------------------------------+
    | 37f31369-1e00-4bed-ba33-134f5a8ba742 | testvm1_0001 | ACTIVE | -          | Running     | testvm1=10.2.0.19, xxx.xxx.xx.93 |
    +--------------------------------------+--------------+--------+------------+-------------+----------------------------------+

    root@controller:~# cinder list
    +--------------------------------------+--------+-------------+------+-------------+----------+--------------------------------------+
    | ID                                   | Status | Name        | Size | Volume Type | Bootable | Attached to                          |
    +--------------------------------------+--------+-------------+------+-------------+----------+--------------------------------------+
    | 21c04de7-8c41-4744-bc95-4fd58c017eaf | in-use | volume_0001 | 10   | ssd_a3      | false    | 37f31369-1e00-4bed-ba33-134f5a8ba742 |
    +--------------------------------------+--------+-------------+------+-------------+----------+--------------------------------------+
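    Cinder's RBD backend stores each volume as an RBD image named volume-<volume UUID> in the backend pool, as the rbd listing further down confirms, so the volume above maps to the image queens_pool/volume-21c04de7-8c41-4744-bc95-4fd58c017eaf. A quick way to confirm the mapping (rbd info is a standard command; the pool name is the one used in this test):

    root@controller:~# rbd info queens_pool/volume-21c04de7-8c41-4744-bc95-4fd58c017eaf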

  • Create multiple snapshots

    root@testvm1_0001:~# mount /dev/vdb /mnt

    root@testvm1_0001:~# cd /mnt

    root@testvm1_0001:/mnt# dd if=/dev/zero of=./snap1 bs=1M count=100
    100+0 records in
    100+0 records out
    104857600 bytes (105 MB, 100 MiB) copied, 0.093598 s, 1.1 GB/s

    root@testvm1_0001:/mnt# ls
    lost+found snap1
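    The file just written by dd may still be sitting in the guest's page cache when the snapshot is taken. Flushing (and optionally freezing) the filesystem first makes the snapshot crash-consistent; this step is not part of the original test, and sync/fsfreeze are standard util-linux tools:

    root@testvm1_0001:~# sync              # flush dirty pages down to the volume
    root@testvm1_0001:~# fsfreeze -f /mnt  # optional: quiesce the filesystem during the snapshot
    root@testvm1_0001:~# fsfreeze -u /mnt  # thaw once the snapshot completes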

    root@controller:~# cinder snapshot-create --name snap1 --force True 21c04de7-8c41-4744-bc95-4fd58c017eaf
    +-------------+--------------------------------------+
    | Property    | Value                                |
    +-------------+--------------------------------------+
    | created_at  | 2019-12-16T04:44:01.170400           |
    | description | None                                 |
    | id          | 48b82001-26a7-4f75-bea3-08ff30c0580f |
    | metadata    | {}                                   |
    | name        | snap1                                |
    | size        | 10                                   |
    | status      | creating                             |
    | updated_at  | None                                 |
    | volume_id   | 21c04de7-8c41-4744-bc95-4fd58c017eaf |
    +-------------+--------------------------------------+

    root@controller:~# rbd diff queens_pool/volume-21c04de7-8c41-4744-bc95-4fd58c017eaf | awk '{ SUM += $2 } END { print SUM/1024/1024/1024 " GB" }'
    0.440193 GB
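    Only snap1 is shown being created above, but the list that follows contains four snapshots, so snap2 through snap4 were presumably created the same way after writing a matching marker file in the guest. A sketch of the remaining controller-side commands, assuming the same volume UUID:

    root@controller:~# for name in snap2 snap3 snap4; do
    >   cinder snapshot-create --name "$name" --force True 21c04de7-8c41-4744-bc95-4fd58c017eaf
    > done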

  • Check the snapshots

    root@controller:~# cinder snapshot-list
    +--------------------------------------+--------------------------------------+-----------+-------+------+
    | ID                                   | Volume ID                            | Status    | Name  | Size |
    +--------------------------------------+--------------------------------------+-----------+-------+------+
    | 1b873b91-9c13-4727-839a-e13754aa4924 | 21c04de7-8c41-4744-bc95-4fd58c017eaf | available | snap3 | 10   |
    | 29e56686-4203-4a74-9596-21bb97ba73cc | 21c04de7-8c41-4744-bc95-4fd58c017eaf | available | snap2 | 10   |
    | 48b82001-26a7-4f75-bea3-08ff30c0580f | 21c04de7-8c41-4744-bc95-4fd58c017eaf | available | snap1 | 10   |
    | 9b166bf4-01fb-4af5-9e65-785ce910bbb3 | 21c04de7-8c41-4744-bc95-4fd58c017eaf | available | snap4 | 10   |
    +--------------------------------------+--------------------------------------+-----------+-------+------+

    root@controller:~# rbd list -l -p queens_pool
    NAME                                                                                          SIZE   PARENT FMT PROT LOCK
    volume-21c04de7-8c41-4744-bc95-4fd58c017eaf                                                   10 GiB        2       excl
    volume-21c04de7-8c41-4744-bc95-4fd58c017eaf@jyh-snapshot-48b82001-26a7-4f75-bea3-08ff30c0580f 10 GiB        2 yes
    volume-21c04de7-8c41-4744-bc95-4fd58c017eaf@jyh-snapshot-29e56686-4203-4a74-9596-21bb97ba73cc 10 GiB        2 yes
    volume-21c04de7-8c41-4744-bc95-4fd58c017eaf@jyh-snapshot-1b873b91-9c13-4727-839a-e13754aa4924 10 GiB        2 yes
    volume-21c04de7-8c41-4744-bc95-4fd58c017eaf@jyh-snapshot-9b166bf4-01fb-4af5-9e65-785ce910bbb3 10 GiB        2 yes

    root@controller:~# ceph df |grep queens_pool
    queens_pool 6 780 MiB 224 2.2 GiB 0.02 4.4 TiB
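    Each Cinder snapshot maps to an RBD snapshot whose name embeds the Cinder snapshot UUID (the stock Cinder RBD driver typically names them snapshot-<uuid>; the jyh- prefix here looks site-specific). The snapshots can also be listed directly on the image:

    root@controller:~# rbd snap ls queens_pool/volume-21c04de7-8c41-4744-bc95-4fd58c017eaf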

  • Restore the snapshot with rbd

    root@controller:~# rbd snap rollback -p queens_pool volume-21c04de7-8c41-4744-bc95-4fd58c017eaf@jyh-snapshot-29e56686-4203-4a74-9596-21bb97ba73cc
    Rolling back to snapshot: 0% complete…failed.
    rbd: rollback failed: (30) Read-only file system
    # Rollback is not possible while the volume is attached.
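    The error is most likely because the attached instance still holds a watch and the exclusive lock on the image (note the excl entry in the LOCK column of the rbd listing above). Before retrying the rollback, you can verify that nothing is watching the image anymore; rbd status is a standard command:

    root@controller:~# rbd status queens_pool/volume-21c04de7-8c41-4744-bc95-4fd58c017eaf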

    root@controller:~# openstack server remove volume testvm1_0001 volume_0001

    root@controller:~# openstack volume list
    +--------------------------------------+-------------+-----------+------+-------------+
    | ID                                   | Name        | Status    | Size | Attached to |
    +--------------------------------------+-------------+-----------+------+-------------+
    | 21c04de7-8c41-4744-bc95-4fd58c017eaf | volume_0001 | available | 10   |             |
    +--------------------------------------+-------------+-----------+------+-------------+

    root@controller:~# rbd snap rollback -p queens_pool volume-21c04de7-8c41-4744-bc95-4fd58c017eaf@jyh-snapshot-29e56686-4203-4a74-9596-21bb97ba73cc
    Rolling back to snapshot: 100% complete…done.

    root@controller:~# openstack server add volume testvm1_0001 volume_0001

    root@testvm1_0001:~# fdisk -l
    Disk /dev/vda: 25 GiB, 26843545600 bytes, 52428800 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 5D3036B0-DA35-4E51-9DB7-95BCBB8EC576

    Device      Start      End  Sectors  Size Type
    /dev/vda1  227328 52428766 52201439 24.9G Linux filesystem
    /dev/vda14   2048    10239     8192    4M BIOS boot
    /dev/vda15  10240   227327   217088  106M EFI System

    Partition table entries are not in disk order.

    Disk /dev/vdc: 10 GiB, 10737418240 bytes, 20971520 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    root@testvm1_0001:~# mount /dev/vdc /mnt

    root@testvm1_0001:~# cd /mnt

    root@testvm1_0001:/mnt# ls
    lost+found snap1 snap2

    ※ The volume was rolled back to snap2, and the directory listing (lost+found, snap1, snap2) confirms it was restored correctly.
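    Note that the data volume came back as /dev/vdc, although it was /dev/vdb before the detach: virtio disk names are assigned in attach order and are not stable across re-attaches. Mounting by filesystem UUID avoids depending on the device name (a sketch; the actual UUID comes from blkid on your system):

    root@testvm1_0001:~# blkid /dev/vdc          # look up the filesystem UUID
    root@testvm1_0001:~# mount -U <uuid-from-blkid> /mnt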

Category: Virtualization/Cloud

장영호
