30 September 2016

Ceph Block Device and OpenStack Integration on openSUSE Leap 42

Reference:
http://docs.ceph.com

Requirements:
  1. Single-node OpenStack with DevStack on openSUSE Leap 42
  2. Ceph cluster on openSUSE Leap 42

Topology:

Steps:

I. Creating the RBD Pools (run on the ceph-mon node)

1. Log in as the stack user
su - stack

2. Create the pools
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
ceph osd pool ls
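
Optionally, sanity-check the cluster and the new pools before moving on; the commands below are just one way to do that and assume the cluster is otherwise healthy.

# Optional checks: cluster status, capacity, and pool details
ceph -s
ceph df
ceph osd pool ls detail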

3. Install the Ceph client packages on the openstack node
ssh openstack sudo zypper -n install python-rbd ceph-common

4. Copy the ceph.conf file to the openstack node
cat /etc/ceph/ceph.conf | ssh openstack sudo tee /etc/ceph/ceph.conf

5. Create the cinder and glance users
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

6. Add the client.glance and client.cinder keyrings to the openstack node
ceph auth get-or-create client.glance | ssh openstack sudo tee /etc/ceph/ceph.client.glance.keyring
ssh openstack sudo chown stack:users /etc/ceph/ceph.client.glance.keyring

ceph auth get-or-create client.cinder | ssh openstack sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh openstack sudo chown stack:users /etc/ceph/ceph.client.cinder.keyring

ceph auth get-key client.cinder | ssh openstack tee client.cinder.key
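
As a quick optional sanity check, you can verify from the openstack node that the cinder key actually works against the cluster (this assumes the client packages from step 3 are installed and the keyring was copied as above):

# Optional: confirm client.cinder can reach the cluster from the openstack node
ssh openstack "ceph -s --name client.cinder --keyring /etc/ceph/ceph.client.cinder.keyring"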


II. OpenStack Glance, Cinder, and Nova (run on the openstack node)

1. Log in as the stack user
su - stack


2. Delete all existing images, volumes, and instances, for example as shown below.
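
One way to clean everything up with the OpenStack CLI, assuming the devstack admin credentials from openrc:

# Clean-up sketch: delete every server, volume, and image in the deployment
cd ~/devstack
source openrc admin admin
for i in $(openstack server list -f value -c ID); do openstack server delete $i; done
for i in $(openstack volume list -f value -c ID); do openstack volume delete $i; done
for i in $(openstack image list -f value -c ID); do openstack image delete $i; done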


3. Generate a UUID and add the Ceph secret key to libvirt.

##### Generate a UUID (optional - you can skip this and reuse the UUID below) #####
#uuidgen
#457eb676-33da-42ec-9a8c-9293d545c337
###################################################################################

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF


sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat /home/stack/client.cinder.key)
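
To confirm the secret is registered, and optionally clean up the temporary files as the Ceph documentation suggests:

# Verify the secret is known to libvirt
sudo virsh secret-list

# Optional: remove the temporary key file and XML definition
rm client.cinder.key secret.xml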


4. Configure Glance

[ ! -f /opt/stack/glance/etc/glance-api.conf.orig ] && cp -v /opt/stack/glance/etc/glance-api.conf /opt/stack/glance/etc/glance-api.conf.orig
vim /opt/stack/glance/etc/glance-api.conf

[DEFAULT]
...
show_image_direct_url = True

[glance_store]

#filesystem_store_datadir = /opt/stack/data/glance/images/
stores = rbd,http
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

Restart the Glance API service in the devstack screen session (g-api); one way to do this is sketched below.
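
If you have not restarted a devstack service before: the services run inside a GNU screen session, so the restart is done by hand. A rough sketch, assuming the default devstack session name "stack":

# Restart a devstack service in screen (here: g-api)
screen -x stack           # attach to the devstack screen session
# Ctrl+A then "  -> pick the g-api window from the list
# Ctrl+C         -> stop the running service
# Up arrow, Enter -> re-run the previous command to start it again
# Ctrl+A then D  -> detach from screen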


5. Configure Cinder
[ ! -f /etc/cinder/cinder.conf.orig ] && cp -v /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig
vi /etc/cinder/cinder.conf

[DEFAULT]

#default_volume_type = lvmdriver-1
#enabled_backends = lvmdriver-1
default_volume_type = ceph
enabled_backends = ceph
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

#[lvmdriver-1]
#lvm_type = default
#iscsi_helper = tgtadm
#volume_group = stack-volumes-lvmdriver-1
#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#volume_backend_name = lvmdriver-1

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Restart the Cinder API service (c-api)
Restart the Cinder Scheduler service (c-sch)
Restart the Cinder Volume service (c-vol)
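
Note that the [DEFAULT] section above points the backup driver at a cinder-backup user and a backups pool that were not created in part I. If you actually intend to use Cinder backups, something like the following on the ceph-mon node sets them up (a sketch based on the Ceph documentation; skip it if you do not use backups):

# Run on ceph-mon: create the backups pool and the cinder-backup user
ceph osd pool create backups 128
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
ceph auth get-or-create client.cinder-backup | ssh openstack sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh openstack sudo chown stack:users /etc/ceph/ceph.client.cinder-backup.keyring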


6. Verify
cd ~/devstack
source openrc admin admin
cinder service-list
cinder service-disable openstack@lvmdriver-1 cinder-volume
cinder service-list

mysql -u root -prahasia -e "update services set deleted = 1 where host like 'openstack@lvm%' and disabled = 1" cinder
cinder service-list


7. Test creating a volume
openstack volume list
openstack volume create --size 1 vol0
openstack volume list
lvs
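
The lvs output should be unchanged, since the new volume is expected to land in Ceph rather than LVM. You can also inspect the volume itself:

# vol0 should reach status 'available'; as admin, os-vol-host-attr:host should point at the ceph backend
openstack volume show vol0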


8. Verify the volumes RBD pool on the ceph-mon node
ssh -l stack ceph-mon "rbd -p volumes ls"



9. Configure Nova
sudo vim /etc/ceph/ceph.conf 

[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20


sudo mkdir -p /var/run/ceph/guests/ /var/log/qemu/
sudo chown qemu:libvirt /var/run/ceph/guests /var/log/qemu/
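
Keep in mind that /var/run (i.e. /run) is a tmpfs, so the guests directory disappears after a reboot. If you want it recreated automatically, one option is a systemd-tmpfiles rule; the file name below is just an example:

# /etc/tmpfiles.d/ceph-guests.conf (example file name)
# recreate the admin socket directory for qemu guests at boot (/run/ceph/guests == /var/run/ceph/guests)
d /run/ceph/guests 0770 qemu libvirt -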

sudo vim /etc/nova/nova.conf

[libvirt]
...
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap

Restart the Nova Compute service (n-cpu)

10. Create an image
openstack image list
sudo zypper -n install wget
wget -c http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
openstack image create --disk-format qcow2 --file cirros-0.3.4-x86_64-disk.img --protected --public cirros-0.3.4-x86_64-disk
openstack image list
ls -lh /opt/stack/data/glance/images/
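
With show_image_direct_url = True, Glance exposes where the image really lives; the local images directory above should stay empty. A quick check (image name as created above; whether the direct_url field is printed depends on the client version):

# direct_url should point at rbd://<fsid>/images/<image-id>/snap once the image is stored in Ceph
openstack image show cirros-0.3.4-x86_64-disk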


11. Verify the images RBD pool on the ceph-mon node

ssh -l stack ceph-mon "rbd -p images ls"


12. Launch an instance
openstack server list
openstack flavor list
openstack image list
neutron net-list
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-disk --nic net-id=[NET-EXT-ID] instance0
openstack server list
ls -lh /opt/stack/data/nova/instances/YYYYYYYYYYYYYYYYYYYY
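
Besides checking the vms pool (next step), you can confirm from the hypervisor side that the instance disk is RBD-backed; the domain name below is only a placeholder, take the real one from virsh list:

# List the libvirt domains, then check that the instance disk uses the rbd protocol
sudo virsh list --all
sudo virsh dumpxml instance-00000001 | grep -i rbd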

13. Verify the vms RBD pool on the ceph-mon node
ssh -l stack ceph-mon "rbd -p vms ls"
