30 September 2016

Integrating Ceph Block Device with OpenStack on openSUSE Leap 42

References:
http://docs.ceph.com

Requirements:
  1. Single node OpenStack with DevStack on openSUSE Leap 42
  2. Ceph cluster on openSUSE Leap 42
Topology:

Steps:

I. Creating RBD Pools (run on the ceph-mon node)

1. Log in as the stack user
su - stack

2. Create the pools
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
ceph osd pool ls
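
As an optional sanity check, the pools and their settings can be inspected before moving on. 128 placement groups per pool is a reasonable starting value for a small lab cluster like this one; production clusters should size PGs per the Ceph pgcalc guidance.

ceph osd pool ls detail
ceph osd dump | grep 'replicated size'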

3. Install the Ceph client packages on the openstack node
ssh openstack sudo zypper -n install python-rbd ceph-common

4. Copy the ceph.conf file
cat /etc/ceph/ceph.conf | ssh openstack sudo tee /etc/ceph/ceph.conf

5. Create the cinder and glance users
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
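
To confirm the capabilities were applied as intended, the new users can be inspected (optional):

ceph auth get client.cinder
ceph auth get client.glance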

6. Add the client.glance and client.cinder keyrings to the openstack node
ceph auth get-or-create client.glance | ssh openstack sudo tee /etc/ceph/ceph.client.glance.keyring
ssh openstack sudo chown stack:users /etc/ceph/ceph.client.glance.keyring

ceph auth get-or-create client.cinder | ssh openstack sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh openstack sudo chown stack:users /etc/ceph/ceph.client.cinder.keyring

ceph auth get-key client.cinder | ssh openstack tee client.cinder.key


II. OpenStack Glance, Cinder, Nova (run on the openstack node)

1. Log in as the stack user
su - stack


2. Delete all images, volumes, and instances.


3. Generate a secret key and add it to libvirt.

##### Generate a UUID (skip this step if you reuse the example UUID) #####
#uuidgen
#457eb676-33da-42ec-9a8c-9293d545c337
##########################################################################

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF


sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat /home/stack/client.cinder.key)
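
Optionally, confirm that libvirt stored the secret and its value correctly (the UUID must match the one in secret.xml):

sudo virsh secret-list
sudo virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337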


4. Configure Glance

[ ! -f /opt/stack/glance/etc/glance-api.conf.orig ] && cp -v /opt/stack/glance/etc/glance-api.conf /opt/stack/glance/etc/glance-api.conf.orig
vim /opt/stack/glance/etc/glance-api.conf

[DEFAULT]
...
show_image_direct_url = True

[glance_store]

#filesystem_store_datadir = /opt/stack/data/glance/images/
stores = rbd,http
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

Restart the Glance API service (g-api) in the stack screen session.
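
DevStack runs each service in a window of a shared screen session. One way to restart g-api, assuming the default session name stack; the same procedure applies to the Cinder and Nova restarts below:

screen -x stack      # attach to the DevStack screen session
# Press Ctrl-a " to list windows and select g-api,
# Ctrl-c to stop the service, then Up-arrow + Enter to start it again.
# Detach from the session with Ctrl-a d.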


5. Configure Cinder
[ ! -f /etc/cinder/cinder.conf.orig ] && cp -v /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig
vi /etc/cinder/cinder.conf

[DEFAULT]

#default_volume_type = lvmdriver-1
#enabled_backends = lvmdriver-1
default_volume_type = ceph
enabled_backends = ceph
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

#[lvmdriver-1]
#lvm_type = default
#iscsi_helper = tgtadm
#volume_group = stack-volumes-lvmdriver-1
#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#volume_backend_name = lvmdriver-1

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Restart the Cinder API service (c-api).
Restart the Cinder Scheduler service (c-sch).
Restart the Cinder Volume service (c-vol).


6. Verification
cd ~/devstack
source openrc admin admin
cinder service-list
cinder service-disable openstack@lvmdriver-1 cinder-volume
cinder service-list

mysql -u root -prahasia -e "update services set deleted = 1 where host like 'openstack@lvm%' and disabled = 1" cinder
cinder service-list


7. Test creating a volume
openstack volume list
openstack volume create --size 1 vol0
openstack volume list
lvs    # no new LVM volume should appear; vol0 now lives in the Ceph volumes pool


8. Verify the RBD volumes pool on the ceph-mon node
ssh -l stack ceph-mon "rbd -p volumes ls"



9. Configure Nova
sudo vim /etc/ceph/ceph.conf 

[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20


sudo mkdir -p /var/run/ceph/guests/ /var/log/qemu/
sudo chown qemu:libvirt /var/run/ceph/guests /var/log/qemu/
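
Once an instance is running, the RBD cache settings can be confirmed through the guest's admin socket (a sketch; the actual socket file name follows the $cluster-$type.$id.$pid.$cctid pattern above and will differ):

sudo ls /var/run/ceph/guests/
sudo ceph --admin-daemon /var/run/ceph/guests/[SOCKET].asok config show | grep rbd_cache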

sudo vim /etc/nova/nova.conf

[libvirt]
...
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
hw_disk_discard = unmap

Restart the Nova Compute service (n-cpu).

10. Create an image
openstack image list
sudo zypper -n in wget
wget -c http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
openstack image create --disk-format qcow2 --file cirros-0.3.4-x86_64-disk.img --protected --public cirros-0.3.4-x86_64-disk
openstack image list
ls -lh /opt/stack/data/glance/images/    # should be empty; the image is stored in the RBD images pool


11. Verify the RBD images pool on the ceph-mon node

ssh -l stack ceph-mon "rbd -p images ls"


12. Launch an instance
openstack server list
openstack flavor list
openstack image list
neutron net-list
openstack server create --flavor m1.tiny --image cirros-0.3.4-x86_64-disk --nic net-id=[NET-EXT-ID] instance0
openstack server list
ls -lh /opt/stack/data/nova/instances/[INSTANCE-UUID]

13. Verify the RBD vms pool on the ceph-mon node
ssh -l stack ceph-mon "rbd -p vms ls"

11 September 2016

Building a Ceph Cluster on openSUSE Leap 42

References:
http://docs.ceph.com

Requirements:
  1. VirtualBox or QEMU+KVM+VirtManager
  2. Three openSUSE Leap 42 VMs, minimal install: 512 MB RAM, 12 GB / partition, 6 GB swap partition. On nodes ceph-osd0 and ceph-osd1, add a second 10 GB hard disk (sdb/vdb).
  3. Internet connection with unlimited quota
Topology:


Steps:

I. Run on all nodes

1. Network configuration:
# vim /etc/sysconfig/network/ifcfg-eth0 
BOOTPROTO='static'
IPADDR='10.10.10.XX/24'
NAME='eth0'
STARTMODE='auto'

# vim /etc/sysconfig/network/routes
default 10.10.10.1 - - 

# vim /etc/resolv.conf
nameserver 10.10.10.1

# wicked ifup all
# ip link
# ip add
# ip route
# ping yahoo.com


2. Repositories
# curl -o /etc/zypp/repos.d/ceph.repo http://download.opensuse.org/repositories/filesystems:/ceph:/jewel/openSUSE_Leap_42.1/filesystems:ceph:jewel.repo
# zypper --gpg-auto-import-keys ref && zypper -n up --skip-interactive


3. Node name resolution
# vim /etc/hosts
10.10.10.10 openstack
10.10.10.30 ceph-mon
10.10.10.31 ceph-osd0
10.10.10.32 ceph-osd1


4. Allow sudo without a TTY
# visudo
Defaults:stack !requiretty


5. Create the stack sudoer user
# useradd -d /home/stack -m stack
# passwd stack

# echo "stack ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/stack
# chmod 0440 /etc/sudoers.d/stack


II. Run on the ceph-mon node

1. Install the ceph-deploy package
# zypper -n in ceph-deploy


2. Generate an SSH key and copy it to the other nodes
$ su - stack
$ ssh-keygen
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.10
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.30
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.31
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.32


3. Edit the SSH client configuration for the ceph-deploy user
$ vi ~/.ssh/config
Host openstack
   Hostname openstack
   User stack
Host ceph-mon
   Hostname ceph-mon
   User stack
Host ceph-osd0
   Hostname ceph-osd0
   User stack
Host ceph-osd1
   Hostname ceph-osd1
   User stack

$ chmod 644 ~/.ssh/config


4. Create the configuration directory
$ mkdir ceph-cluster
$ cd ceph-cluster


5. Create the cluster
$ ceph-deploy new ceph-mon
$ ls -lh


6. Set the replica count to 2
$ echo "osd pool default size = 2" >> ceph.conf
$ echo "rbd default features = 1" >> ceph.conf


7. Install Ceph
$ ceph-deploy install ceph-mon ceph-osd0 ceph-osd1


8. Create the initial monitor
$ ceph-deploy mon create-initial
$ ls -lh


9. Add the OSDs (the journal is co-located on the data disk, since the topology has only one extra disk per OSD node)
$ ceph-deploy osd prepare ceph-osd0:/dev/vdb ceph-osd1:/dev/vdb
$ ceph-deploy osd activate ceph-osd0:/dev/vdb1 ceph-osd1:/dev/vdb1
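
Optionally, confirm the resulting partitions and OSD mounts from the admin node:

$ ceph-deploy disk list ceph-osd0 ceph-osd1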


10. Copy the configuration and admin key to all nodes
$ ceph-deploy admin ceph-mon ceph-osd0 ceph-osd1


11. Set permissions on the admin key
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring


12. Verification
$ ceph health (make sure the status is HEALTH_OK)
$ ceph -w
$ ceph df
$ ceph status
$ ceph -s
$ ceph osd stat
$ ceph osd dump
$ ceph osd tree
$ ceph mon stat
$ ceph mon dump
$ ceph quorum_status
$ ceph auth list
$ ceph auth get client.admin
$ ceph auth export client.admin

13. Test object data operations
$ ceph osd pool create pool-test1 128
$ echo test > filetest1.txt
$ rados put object-test1 filetest1.txt --pool=pool-test1
$ rados ls --pool=pool-test1
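
Before hunting for the object on disk in the next step, it can help to ask Ceph which placement group and OSDs actually hold it (optional):

$ ceph osd map pool-test1 object-test1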

$ ssh ceph-osd0
$ sudo find /var/lib/ceph/osd/ceph-0 -name '*object-test1*'
$ cat [FILENAME]
$ sudo vim /etc/fstab
/dev/vdb1 /var/lib/ceph/osd/ceph-0 xfs defaults 0 0
$ exit

$ ssh ceph-osd1
$ sudo find /var/lib/ceph/osd/ceph-1 -name '*object-test1*'
$ cat [FILENAME]
$ sudo vim /etc/fstab
/dev/vdb1 /var/lib/ceph/osd/ceph-1 xfs defaults 0 0
$ exit

$ rados ls --pool=pool-test1
$ rados rm object-test1 --pool=pool-test1
$ rados ls --pool=pool-test1
$ ceph osd pool delete pool-test1 pool-test1  --yes-i-really-really-mean-it

10 September 2016

Exercise: Installing Single-Node OpenStack with DevStack on openSUSE Leap 42

References:
http://docs.openstack.org/developer/devstack/index.html


Requirements:
  1. VirtualBox or QEMU+KVM+VirtManager
  2. An openSUSE Leap 42 VM, minimal install (6 GB RAM, 12 GB / partition, 6 GB swap partition)
  3. Internet connection with unlimited quota

Topology:



Steps:

1. Network configuration:
# vim /etc/sysconfig/network/ifcfg-eth0 
BOOTPROTO='static'
IPADDR='10.10.10.10/24'
NAME='eth0'
STARTMODE='auto'

# vim /etc/sysconfig/network/routes
default 10.10.10.1 - - 

# vim /etc/resolv.conf
nameserver 10.10.10.1

# wicked ifup all
# ip link
# ip add
# ip route
# ping yahoo.com

# zypper -n in openvswitch-switch
# systemctl enable openvswitch
# systemctl start openvswitch
# systemctl status openvswitch

# vim /etc/sysconfig/network/ifcfg-eth1
BOOTPROTO='none'
NAME='eth1'
STARTMODE='auto'

# vim /etc/sysconfig/network/ifcfg-br-ex
STARTMODE='auto'
BOOTPROTO=static
IPADDR='172.16.10.10/24'
OVS_BRIDGE='yes'
OVS_BRIDGE_PORT_DEVICE='eth1'

# wicked ifup all
# ip link
# ip add
# hostnamectl set-hostname openstack
# echo "10.10.10.10 openstack" >> /etc/hosts


2. Manually install the rabbitmq-server package
# zypper -n in --no-recommends rabbitmq-server
# systemctl enable epmd.service
# systemctl restart epmd.service
# systemctl status epmd.service
# systemctl enable rabbitmq-server.service
# systemctl restart rabbitmq-server.service
# systemctl status rabbitmq-server.service


3. Manually install MySQL
# zypper -n in mysql-community-server mysql-community-server-client
# systemctl enable mysql.service
# systemctl restart mysql.service
# systemctl status mysql.service
# /usr/bin/mysqladmin -u root password 'rahasia'
# mysql -prahasia -e "SET PASSWORD FOR 'root'@'openstack' = PASSWORD('rahasia');"


4. Install git and Python tooling
# zypper -n in git python-virtualenv python-pip


5. Create the stack sudoer user
# useradd -d /home/stack -m stack
# passwd stack

# echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers


6. Configure and run DevStack
# su - stack
$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
$ git checkout stable/mitaka
$ git branch
$ vim local.conf
[[local|localrc]]
FORCE=yes
HOST_IP=10.10.10.10
SERVICE_HOST=10.10.10.10
MYSQL_HOST=10.10.10.10
RABBIT_HOST=10.10.10.10
GLANCE_HOSTPORT=10.10.10.10:9292
PUBLIC_INTERFACE=eth1
ADMIN_PASSWORD=rahasia
MYSQL_PASSWORD=rahasia
RABBIT_PASSWORD=rahasia
SERVICE_PASSWORD=rahasia

## Neutron options
Q_USE_SECGROUP=True
PHYSICAL_NETWORK=provider
OVS_PHYSICAL_BRIDGE=br-ex
Q_USE_PROVIDER_NETWORKING=True

## Do not use Nova-Network
disable_service n-net

## Neutron
ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,n-cpu,rabbit

## Neutron Networking options used to create Neutron Subnets
FIXED_RANGE="172.16.10.0/24"
NETWORK_GATEWAY=172.16.10.1
PROVIDER_SUBNET_NAME="subnet-provider"
PROVIDER_NETWORK_TYPE="flat"

$ ./stack.sh
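
If stack.sh fails partway through, a common recovery pattern is to clean up and re-run it (unstack.sh and clean.sh ship with DevStack):

$ ./unstack.sh
$ ./clean.sh
$ ./stack.sh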


7. Test listing the OpenStack services
$ vim openrc
export OS_PASSWORD=${ADMIN_PASSWORD:-rahasia}

$ source openrc admin admin
$ openstack service list

8. Access the dashboard with a web browser at http://10.10.10.10/dashboard/

05 June 2016

Exercise: Building a Docker Cluster with Docker Swarm on openSUSE Leap 42

References:


Topology:

Steps:
1. On every node, install the Docker engine. Adjust the IP address in the --cluster-advertise option to each node's own IP address.
# zypper in -y docker
# vim /etc/sysconfig/docker
...
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://10.10.10.10:8500 --cluster-advertise=10.10.10.XXX:2375"

# systemctl start docker.service
# systemctl enable docker.service
# systemctl status docker.service
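
Optionally, confirm from any machine that the daemon is answering on the TCP socket (substitute each node's IP address):

# docker -H tcp://10.10.10.XXX:2375 version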

2. On node0, run Consul as the discovery backend.
# docker run -d --restart=always --name=consul -h consul -p 8500:8500 progrium/consul -server -bootstrap
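
Before wiring Swarm to it, Consul's standard HTTP API can be used to verify that the discovery backend is up (optional):

# curl http://10.10.10.10:8500/v1/status/leader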

3. On node0, run the Swarm manager (primary manager).
# docker run -d --restart=always --name swarm-manager -h swarm-manager -p 4000:4000 swarm manage -H :4000 --replication --advertise 10.10.10.10:4000 consul://10.10.10.10:8500

4. On node1 and node2, run swarm join to join the cluster. Adjust the node name and IP address in the --name, -h, and --advertise options to each node's own name and IP address.
# docker run -d --restart=always --name=swarm-nodeX -h swarm-nodeX swarm join --advertise=10.10.10.XXX:2375 consul://10.10.10.10:8500

5. On node0, verify the cluster.
# docker -H :4000 info
Containers: 2
Images: 2
Server Version: swarm/1.2.3
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
 node1: 10.10.10.11:2375
  └ ID: PK2P:YTKT:Z5KW:OFBL:FTFI:X7GV:SY2L:T7CZ:EJ72:6ZDO:SZQV:AWR5
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.053 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.21-14-default, operatingsystem=openSUSE Leap 42.1 (x86_64), storagedriver=devicemapper
  └ UpdatedAt: 2016-06-05T05:05:16Z
  └ ServerVersion: 1.9.1
 node2: 10.10.10.12:2375
  └ ID: LUBR:4MRH:FJHK:6USE:B7GP:JOYU:HMAO:25ZF:IMVZ:65VA:QAN5:RJ47
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.053 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.21-14-default, operatingsystem=openSUSE Leap 42.1 (x86_64), storagedriver=devicemapper
  └ UpdatedAt: 2016-06-05T05:04:47Z
  └ ServerVersion: 1.9.1
Kernel Version: 4.1.21-14-default
Operating System: linux
CPUs: 8
Total Memory: 8.105 GiB
Name: swarm-manager

6. On node0, create an overlay network so containers can communicate even when they run on different nodes.
# docker -H :4000 network create --driver overlay --subnet=172.31.0.0/16 overlay
# docker -H :4000 network ls
# docker -H :4000 network inspect overlay

7. On node0, run two busybox containers and test connectivity between them over the overlay network.
# docker -H :4000 run -d --name busybox1 -h busybox1 --net=overlay busybox init
# docker -H :4000 run -d --name busybox2 -h busybox2 --net=overlay busybox init
# docker -H :4000 ps -a
# docker -H :4000 inspect busybox2 | grep "172.31."
# docker -H :4000 exec -ti busybox1 ping -c 5 172.31.XXX.XXX

8. On node0, run Shipyard to manage the cluster through a web interface.
# docker run -ti -d --restart=always -h shipyard-rethinkdb --name shipyard-rethinkdb rethinkdb
# docker run -ti -d --restart=always --name shipyard-controller -h shipyard-controller --link shipyard-rethinkdb:rethinkdb --link swarm-manager:swarm -p 8080:8080 shipyard/shipyard:latest server -d tcp://swarm:4000

9. Open a web browser at http://10.10.10.10:8080. Log in with username admin and password shipyard.

16 December 2015

openSUSE Packed Me to Taipei

It began when I read the news about the openSUSE Leap 42.1 RC1 release and started to find out more about Leap. I thought Leap was a big change in the history of openSUSE distribution releases, and I was enthusiastic to learn about it. As a system administration consultant, I had previously only used and recommended Debian, Ubuntu LTS, and CentOS to corporate customers. Today, however, I always advise them to use openSUSE Leap as their main operating system, because in my opinion openSUSE Leap's distribution packaging is the best fit for a company. With that in mind, I immediately installed it on my laptop. Along the way I discovered the openSUSE Indonesian Community Facebook group, joined it, and began socializing with other openSUSE users.




The next morning I received an email from the openSUSE.Asia Summit 2015 committee, informing me that the proposal I wrote, Building IaaS Cloud with openSUSE and OpenStack, had been approved. I also had the opportunity to apply to openSUSE for financial support for the trip.

I rushed to apply for a visa to visit the country where openSUSE.Asia Summit 2015 would take place. Alhamdulillah, the visa issuing process was convenient and straightforward. To attend the event, I hunted online for airplane tickets and the lowest-priced cozy inn, and yes, I got them!
I went to Taipei together with two other presenters from Indonesia, Estu and Edwin. The first thing we did in Taipei was make time to visit the SUSE Labs / Novell Taiwan office. It was a great time getting to know, and being welcomed by, friends from SUSE Labs.









The mandatory Friday prayer for Muslim men took place at Friday noon.





Friday night, openSUSE Leap 42.1 Release Party!! Time for a pizza party is NOW!






On Saturday morning, the conference was warmly opened.
Saturday afternoon was my show time :D


The second day. The more the merrier. It was getting crowded with people.




The trip to openSUSE.Asia Summit 2015 was truly a pleasant experience. It not only enlightened me but also encouraged me to keep using and contributing to openSUSE. I hope an upcoming openSUSE.Asia Summit will be held in Indonesia. Thank you, openSUSE.Asia Summit 2015. Thank you, Taipei, Taiwan. Thank you, openSUSE.


Photo source: