30 September 2016

Integrating Ceph Block Device with OpenStack on openSUSE Leap 42

Reference:
http://docs.ceph.com

Requirements:
  1. Single-node OpenStack with DevStack on openSUSE Leap 42
  2. Ceph cluster on openSUSE Leap 42
Topology:

Steps:

I. Creating the RBD Pools (run on the ceph-mon node)

1. Log in as the stack user
su - stack

2. Create the pools
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
ceph osd pool ls
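The PG count of 128 follows the common heuristic of (number of OSDs × 100) / replica count, rounded up to the next power of two; with the 2 OSDs and 2 replicas used by the cluster built later in these notes, that gives 100, rounded up to 128. A quick sketch of the arithmetic:

```shell
# Heuristic: pg_count = next power of two >= (osds * 100 / replicas)
osds=2
replicas=2
target=$(( osds * 100 / replicas ))   # 100
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # 128
```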

3. Install the Ceph client packages on the openstack node
ssh openstack sudo zypper -n install python-rbd ceph-common

4. Copy the ceph.conf file
cat /etc/ceph/ceph.conf | ssh openstack sudo tee /etc/ceph/ceph.conf

5. Create the cinder and glance users
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

6. Add the client.glance and client.cinder keyrings to the openstack node
ceph auth get-or-create client.glance | ssh openstack sudo tee /etc/ceph/ceph.client.glance.keyring
ssh openstack sudo chown stack:users /etc/ceph/ceph.client.glance.keyring

ceph auth get-or-create client.cinder | ssh openstack sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh openstack sudo chown stack:users /etc/ceph/ceph.client.cinder.keyring

ceph auth get-key client.cinder | ssh openstack tee client.cinder.key


II. OpenStack Glance, Cinder, Nova (run on the openstack node)

1. Log in as the stack user
su - stack


2. Delete all images, volumes, and instances.
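No commands were recorded for this step; a minimal sketch (assuming the devstack openrc has been sourced as admin) could look like the following. The helper name is illustrative, not part of any tool:

```shell
# Remove every instance, volume, and image (run after `source openrc admin admin`).
purge_all() {
  local id
  for id in $(openstack server list -f value -c ID); do openstack server delete "$id"; done
  for id in $(openstack volume list -f value -c ID); do openstack volume delete "$id"; done
  for id in $(openstack image list -f value -c ID); do openstack image delete "$id"; done
}
```

Instances go first, because a volume still attached to a running instance cannot be deleted.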


3. Generate a secret key and add it to libvirt.

##### Generate a UUID; skip this if you reuse the one below #####
#uuidgen
#457eb676-33da-42ec-9a8c-9293d545c337
#################################################

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF


sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat /home/stack/client.cinder.key)
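One detail worth noting: the key saved to client.cinder.key by `ceph auth get-key` is already base64-encoded, which is exactly the form `virsh secret-set-value --base64` expects, so it is passed through unmodified. A quick sanity check with a made-up key of the same shape (illustrative only, not a real key):

```shell
# Illustrative key; a real one comes from `ceph auth get-key client.cinder`.
KEY='AQBSx+5XAAAAABAAn2a1DRYsbvLvgpqsGbB6Eg=='
# A CephX secret decodes to 28 raw bytes (2-byte type + 8-byte timestamp +
# 2-byte length + 16-byte AES key):
echo "$KEY" | base64 -d | wc -c
```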


4. Configure Glance

[ ! -f /opt/stack/glance/etc/glance-api.conf.orig ] && cp -v /opt/stack/glance/etc/glance-api.conf /opt/stack/glance/etc/glance-api.conf.orig
vim /opt/stack/glance/etc/glance-api.conf

[DEFAULT]
...
show_image_direct_url = True

[glance_store]

#filesystem_store_datadir = /opt/stack/data/glance/images/
stores = rbd,http
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

Restart the Glance API service in the stack screen session (g-api)


5. Configure Cinder
[ ! -f /etc/cinder/cinder.conf.orig ] && cp -v /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig
vi /etc/cinder/cinder.conf

[DEFAULT]

#default_volume_type = lvmdriver-1
#enabled_backends = lvmdriver-1
default_volume_type = ceph
enabled_backends = ceph
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

#[lvmdriver-1]
#lvm_type = default
#iscsi_helper = tgtadm
#volume_group = stack-volumes-lvmdriver-1
#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#volume_backend_name = lvmdriver-1

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Restart the Cinder API service (c-api)
Restart the Cinder Scheduler service (c-sch)
Restart the Cinder Volume service (c-vol)


6. Verify
cd ~/devstack
source openrc admin admin
cinder service-list
cinder service-disable openstack@lvmdriver-1 cinder-volume
cinder service-list

mysql -u root -prahasia -e "update services set deleted = 1 where host like 'openstack@lvm%' and disabled = 1" cinder
cinder service-list


7. Test creating a volume
openstack volume list
openstack volume create --size 1 vol0
openstack volume list
lvs


8. Verify the volumes RBD pool on the ceph-mon node
ssh -l stack ceph-mon "rbd -p volumes ls"



9. Configure Nova
sudo vim /etc/ceph/ceph.conf 

[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20


sudo mkdir -p /var/run/ceph/guests/ /var/log/qemu/
sudo chown qemu:libvirt /var/run/ceph/guests /var/log/qemu/

sudo vim /etc/nova/nova.conf

[libvirt]
...
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap

Restart the Nova Compute service (n-cpu)

10. Create images
openstack image list
zypper -n in wget
wget -c http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
openstack image create --disk-format qcow2 --file cirros-0.3.4-x86_64-disk.img --protected --public cirros-0.3.4-x86_64-disk
openstack image list
ls -lh /opt/stack/data/glance/images/


11. Verify the images RBD pool on the ceph-mon node

ssh -l stack ceph-mon "rbd -p images ls"


12. Launch an instance
openstack server list
openstack flavor list
openstack image list
neutron net-list
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-disk --nic net-id=[NET-EXT-ID] instance0
openstack server list
ls -lh /opt/stack/data/nova/instances/YYYYYYYYYYYYYYYYYYYY

13. Verify the vms RBD pool on the ceph-mon node
ssh -l stack ceph-mon "rbd -p vms ls"

11 September 2016

Creating a Ceph Cluster on openSUSE Leap 42

Reference:
http://docs.ceph.com

Requirements:
  1. VirtualBox or QEMU+KVM+VirtManager
  2. Three openSUSE Leap 42 VMs with a minimal install: 512 MB RAM, 12 GB / partition, 6 GB swap partition. On nodes ceph-osd0 and ceph-osd1, add a second 10 GB hard disk (sdb/vdb).
  3. Internet connection with an unlimited data quota
Topology:


Steps:

I. Run on all nodes

1. Configure the network:
# vim /etc/sysconfig/network/ifcfg-eth0 
BOOTPROTO='static'
IPADDR='10.10.10.XX/24'
NAME='eth0'
STARTMODE='auto'

# vim /etc/sysconfig/network/routes
default 10.10.10.1 - - 

# vim /etc/resolv.conf
nameserver 10.10.10.1

# wicked ifup all
# ip link
# ip add
# ip route
# ping yahoo.com


2. Repositories
# curl -o /etc/zypp/repos.d/ceph.repo http://download.opensuse.org/repositories/filesystems:/ceph:/jewel/openSUSE_Leap_42.1/filesystems:ceph:jewel.repo
# zypper --gpg-auto-import-keys ref && zypper -n up --skip-interactive


3. Node name resolution
# vim /etc/hosts
10.10.10.10 openstack
10.10.10.30 ceph-mon
10.10.10.31 ceph-osd0
10.10.10.32 ceph-osd1


4. Allow sudo without a TTY
# visudo
Defaults:stack !requiretty


5. Create the stack sudoer user
# useradd -d /home/stack -m stack
# passwd stack

# echo "stack ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/stack
# chmod 0440 /etc/sudoers.d/stack


II. Run on the ceph-mon node

1. Install the ceph-deploy package
# zypper -n in ceph-deploy


2. Generate an SSH key and copy it to the other nodes
$ su - stack
$ ssh-keygen
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.10
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.30
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.31
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.32
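The four ssh-copy-id invocations above can equally be written as a loop; a small sketch with the topology's addresses hard-coded (the helper name is illustrative):

```shell
# Copy the stack user's public key to every node in the topology.
copy_keys() {
  local host
  for host in 10.10.10.10 10.10.10.30 10.10.10.31 10.10.10.32; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
  done
}
```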


3. Edit the SSH client configuration for the ceph-deploy user
$ vi ~/.ssh/config
Host openstack
   Hostname openstack
   User stack
Host ceph-mon
   Hostname ceph-mon
   User stack
Host ceph-osd0
   Hostname ceph-osd0
   User stack
Host ceph-osd1
   Hostname ceph-osd1
   User stack

$ chmod 644 ~/.ssh/config


4. Create the configuration directory
$ mkdir ceph-cluster
$ cd ceph-cluster


5. Create the cluster
$ ceph-deploy new ceph-mon
$ ls -lh


6. Set the replica count to 2
$ echo "osd pool default size = 2" >> ceph.conf
$ echo "rbd default features = 1" >> ceph.conf
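`rbd default features = 1` restricts new images to the layering feature only, which matters because the kernel RBD client of this era only understood layering; the value is a bitmask (1 = layering, 2 = striping v2, 4 = exclusive-lock, 8 = object-map, 16 = fast-diff, 32 = deep-flatten, 64 = journaling). A small decoder sketch (the function name is made up for illustration):

```shell
# Decode an RBD feature bitmask into feature names (bit order as in Ceph Jewel).
rbd_features() {
  local value=$1 out="" i
  local names=(layering striping exclusive-lock object-map fast-diff deep-flatten journaling)
  for i in "${!names[@]}"; do
    if [ $(( value & (1 << i) )) -ne 0 ]; then out="$out ${names[$i]}"; fi
  done
  echo "${out# }"
}
rbd_features 1    # layering
```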


7. Install Ceph
$ ceph-deploy install ceph-mon ceph-osd0 ceph-osd1


8. Create the initial monitor
$ ceph-deploy mon create-initial
$ ls -lh


9. Add the OSDs
$ ceph-deploy osd prepare ceph-osd0:/dev/vdb:/dev/ssd ceph-osd1:/dev/vdb:/dev/ssd
$ ceph-deploy osd activate ceph-osd0:/dev/vdb1:/dev/ssd1 ceph-osd1:/dev/vdb1:/dev/ssd1


10. Copy the configuration and admin key to all nodes
$ ceph-deploy admin ceph-mon ceph-osd0 ceph-osd1


11. Set read permission on the admin keyring
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring


12. Verify
$ ceph health (make sure the status is HEALTH_OK)
$ ceph -w
$ ceph df
$ ceph status
$ ceph -s
$ ceph osd stat
$ ceph osd dump
$ ceph osd tree
$ ceph mon stat
$ ceph mon dump
$ ceph quorum_status
$ ceph auth list
$ ceph auth get client.admin
$ ceph auth export client.admin

13. Test object data operations
$ ceph osd pool create pool-test1 128
$ echo test > filetest1.txt
$ rados put object-test1 filetest1.txt --pool=pool-test1
$ rados ls --pool=pool-test1

$ ssh ceph-osd0
$ sudo find /var/lib/ceph/osd/ceph-0 -name '*object-test1*'
$ cat [FILENAME]
$ sudo vim /etc/fstab
/dev/vdb1 /var/lib/ceph/osd/ceph-0 xfs defaults 0 0
$ exit

$ ssh ceph-osd1
$ sudo find /var/lib/ceph/osd/ceph-1 -name '*object-test1*'
$ cat [FILENAME]
$ sudo vim /etc/fstab
/dev/vdb1 /var/lib/ceph/osd/ceph-1 xfs defaults 0 0
$ exit

$ rados ls --pool=pool-test1
$ rados rm object-test1 --pool=pool-test1
$ rados ls --pool=pool-test1
$ ceph osd pool delete pool-test1 pool-test1  --yes-i-really-really-mean-it

10 September 2016

Exercise: Installing a Single-Node OpenStack with DevStack on openSUSE Leap 42

Reference:
http://docs.openstack.org/developer/devstack/index.html


Requirements:
  1. VirtualBox or QEMU+KVM+VirtManager
  2. An openSUSE Leap 42 VM with a minimal install (6 GB RAM, 12 GB / partition, 6 GB swap partition)
  3. Internet connection with an unlimited data quota

Topology:



Steps:

1. Configure the network:
# vim /etc/sysconfig/network/ifcfg-eth0 
BOOTPROTO='static'
IPADDR='10.10.10.10/24'
NAME='eth0'
STARTMODE='auto'

# vim /etc/sysconfig/network/routes
default 10.10.10.1 - - 

# vim /etc/resolv.conf
nameserver 10.10.10.1

# wicked ifup all
# ip link
# ip add
# ip route
# ping yahoo.com

# zypper -n in openvswitch-switch
# systemctl enable openvswitch
# systemctl start openvswitch
# systemctl status openvswitch

# vim /etc/sysconfig/network/ifcfg-eth1
BOOTPROTO='none'
NAME='eth1'
STARTMODE='auto'

# vim /etc/sysconfig/network/ifcfg-br-ex
STARTMODE='auto'
BOOTPROTO=static
IPADDR='172.16.10.10/24'
OVS_BRIDGE='yes'
OVS_BRIDGE_PORT_DEVICE='eth1'

# wicked ifup all
# ip link
# ip add
# hostnamectl set-hostname openstack
# echo "10.10.10.10 openstack" >> /etc/hosts


2. Manually install the rabbitmq-server package
# zypper -n in --no-recommends rabbitmq-server
# systemctl enable epmd.service
# systemctl restart epmd.service
# systemctl status epmd.service
# systemctl enable rabbitmq-server.service
# systemctl restart rabbitmq-server.service
# systemctl status rabbitmq-server.service


3. Manually install MySQL
# zypper -n in mysql-community-server mysql-community-server-client
# systemctl enable mysql.service
# systemctl restart mysql.service
# systemctl status mysql.service
# /usr/bin/mysqladmin -u root password 'rahasia'
# mysql -prahasia -e "SET PASSWORD FOR 'root'@'openstack' = PASSWORD('rahasia');"


4. Install git and Python tooling
# zypper -n in git python-virtualenv python-pip


5. Create the stack sudoer user
# useradd -d /home/stack -m stack
# passwd stack

# echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers


6. Configure and run DevStack
# su - stack
$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
$ git checkout stable/mitaka
$ git branch
$ vim local.conf
[[local|localrc]]
FORCE=yes
HOST_IP=10.10.10.10
SERVICE_HOST=10.10.10.10
MYSQL_HOST=10.10.10.10
RABBIT_HOST=10.10.10.10
GLANCE_HOSTPORT=10.10.10.10:9292
PUBLIC_INTERFACE=eth1
ADMIN_PASSWORD=rahasia
MYSQL_PASSWORD=rahasia
RABBIT_PASSWORD=rahasia
SERVICE_PASSWORD=rahasia

## Neutron options
Q_USE_SECGROUP=True
PHYSICAL_NETWORK=provider
OVS_PHYSICAL_BRIDGE=br-ex
Q_USE_PROVIDER_NETWORKING=True

## Do not use Nova-Network
disable_service n-net

## Neutron
ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,n-cpu,rabbit

## Neutron Networking options used to create Neutron Subnets
FIXED_RANGE="172.16.10.0/24"
NETWORK_GATEWAY=172.16.10.1
PROVIDER_SUBNET_NAME="subnet-provider"
PROVIDER_NETWORK_TYPE="flat"

$ ./stack.sh


7. Test listing the OpenStack services
$ vim openrc
export OS_PASSWORD=${ADMIN_PASSWORD:-rahasia}

$ source openrc admin admin
$ openstack service list

8. Access the dashboard with a web browser at http://10.10.10.10/dashboard/