11 September 2016

Building a Ceph Cluster on openSUSE Leap 42

References:
http://docs.ceph.com

Requirements:
  1. VirtualBox or QEMU+KVM+VirtManager
  2. Three openSUSE Leap 42 VMs, minimal install: 512 MB RAM, 12 GB / partition, 6 GB swap partition. On nodes ceph-osd0 and ceph-osd1, add a second 10 GB hard disk (sdb/vdb).
  3. An unmetered internet connection
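If you use QEMU+KVM, the three VMs from requirement 2 can be scripted with virt-install instead of clicking through VirtManager. This is only a hedged sketch: the sizes follow requirement 2 (18 GB system disk = 12 GB / + 6 GB swap, plus the extra 10 GB disk on the OSD nodes), but the install source and network name are placeholders/assumptions. The commands are printed for review, not executed.

```shell
# Build the virt-install command line for each VM and print it for review.
# [INSTALL-SOURCE] is a placeholder; the OSD nodes get a second 10 GB disk,
# which appears as sdb/vdb inside the guest.
cmds=""
for vm in ceph-mon ceph-osd0 ceph-osd1; do
  disks="--disk size=18"
  case "$vm" in
    ceph-osd*) disks="$disks --disk size=10" ;;  # second disk for the OSD nodes
  esac
  cmds="$cmds
virt-install --name $vm --memory 512 --vcpus 1 $disks --location [INSTALL-SOURCE] --network network=default"
done
echo "$cmds"
```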
Topology:
[network diagram omitted: openstack (10.10.10.10), ceph-mon (10.10.10.30), ceph-osd0 (10.10.10.31), and ceph-osd1 (10.10.10.32) on the 10.10.10.0/24 network, gateway 10.10.10.1]

Steps:

I. Run on all nodes

1. Configure the network (replace XX with each node's host number):
# vim /etc/sysconfig/network/ifcfg-eth0 
BOOTPROTO='static'
IPADDR='10.10.10.XX/24'
NAME='eth0'
STARTMODE='auto'

# vim /etc/sysconfig/network/routes
default 10.10.10.1 - - 

# vim /etc/resolv.conf
nameserver 10.10.10.1

# wicked ifup all
# ip link
# ip addr
# ip route
# ping yahoo.com


2. Repositories
# curl -o /etc/zypp/repos.d/ceph.repo http://download.opensuse.org/repositories/filesystems:/ceph:/jewel/openSUSE_Leap_42.1/filesystems:ceph:jewel.repo
# zypper --gpg-auto-import-keys ref && zypper -n up --skip-interactive


3. Node name resolution
# vim /etc/hosts
10.10.10.10 openstack
10.10.10.30 ceph-mon
10.10.10.31 ceph-osd0
10.10.10.32 ceph-osd1
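The same entries can be appended in one shot with a heredoc. The sketch below writes to a scratch file so it can be run anywhere; on the real nodes, point it at /etc/hosts instead.

```shell
# Append the cluster's name entries in one go.
# HOSTS is a scratch file for illustration -- use /etc/hosts on the real nodes.
HOSTS="$(mktemp)"
cat >> "$HOSTS" <<'EOF'
10.10.10.10 openstack
10.10.10.30 ceph-mon
10.10.10.31 ceph-osd0
10.10.10.32 ceph-osd1
EOF
cat "$HOSTS"
```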


4. Allow sudo without a TTY
# visudo
Defaults:stack !requiretty


5. Create the sudoer user stack
# useradd -d /home/stack -m stack
# passwd stack

# echo "stack ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/stack
# chmod 0440 /etc/sudoers.d/stack


II. Run on the ceph-mon node

1. Install the ceph-deploy package
# zypper -n in ceph-deploy


2. Generate an SSH key and copy it to the other nodes
# su - stack
$ ssh-keygen
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.10
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.30
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.31
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.10.32
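The four ssh-copy-id calls above can also be driven by a loop. This sketch only prints the commands (drop the echo to actually execute them):

```shell
# Print one ssh-copy-id command per node IP; remove the echo to run them.
ids="$(for ip in 10.10.10.10 10.10.10.30 10.10.10.31 10.10.10.32; do
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub $ip"
done)"
echo "$ids"
```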


3. Edit the SSH config file for the ceph-deploy user
$ vi ~/.ssh/config
Host openstack
   Hostname openstack
   User stack
Host ceph-mon
   Hostname ceph-mon
   User stack
Host ceph-osd0
   Hostname ceph-osd0
   User stack
Host ceph-osd1
   Hostname ceph-osd1
   User stack

$ chmod 644 ~/.ssh/config
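Since the four stanzas are identical except for the host name, they can also be generated with a loop. The sketch writes to a scratch file; on the real node, point OUT at ~/.ssh/config.

```shell
# Generate one "Host" stanza per node.
# OUT is a scratch file for illustration; use OUT="$HOME/.ssh/config" for real.
OUT="$(mktemp)"
for host in openstack ceph-mon ceph-osd0 ceph-osd1; do
  printf 'Host %s\n   Hostname %s\n   User stack\n' "$host" "$host" >> "$OUT"
done
chmod 644 "$OUT"
cat "$OUT"
```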


4. Create the configuration directory
$ mkdir ceph-cluster
$ cd ceph-cluster


5. Create the cluster
$ ceph-deploy new ceph-mon
$ ls -lh


6. Set the replica count to 2
$ echo "osd pool default size = 2" >> ceph.conf
$ echo "rbd default features = 1" >> ceph.conf
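After these two appends, ceph.conf should look roughly like the fragment below (the fsid and monitor lines are generated by `ceph-deploy new`, so the values shown are placeholders). Setting `rbd default features = 1` enables only the layering feature on new RBD images, which keeps them mappable by older kernel RBD clients.

```ini
[global]
fsid = [GENERATED-UUID]
mon_initial_members = ceph-mon
mon_host = 10.10.10.30
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
rbd default features = 1
```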


7. Install Ceph
$ ceph-deploy install ceph-mon ceph-osd0 ceph-osd1


8. Create the initial monitor
$ ceph-deploy mon create-initial
$ ls -lh


9. Add the OSDs (the second disk is /dev/vdb; there is no separate journal disk in this setup, so omit the journal argument and ceph-deploy will put the journal in a second partition on the same disk)
$ ceph-deploy osd prepare ceph-osd0:/dev/vdb ceph-osd1:/dev/vdb
$ ceph-deploy osd activate ceph-osd0:/dev/vdb1 ceph-osd1:/dev/vdb1


10. Copy the configuration and admin key to all nodes
$ ceph-deploy admin ceph-mon ceph-osd0 ceph-osd1


11. Set read permission on the admin keyring
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring


12. Verify
$ ceph health (make sure it reports HEALTH_OK)
$ ceph -w
$ ceph df
$ ceph status
$ ceph -s
$ ceph osd stat
$ ceph osd dump
$ ceph osd tree
$ ceph mon stat
$ ceph mon dump
$ ceph quorum_status
$ ceph auth list
$ ceph auth get client.admin
$ ceph auth export client.admin

13. Test object data operations
$ ceph osd pool create pool-test1 128
$ echo test > filetest1.txt
$ rados put object-test1 filetest1.txt --pool=pool-test1
$ rados ls --pool=pool-test1

$ ssh ceph-osd0
$ sudo find /var/lib/ceph/osd/ceph-0 -name '*object-test1*'
$ sudo cat [FILENAME]
$ sudo vim /etc/fstab
/dev/vdb1 /var/lib/ceph/osd/ceph-0 xfs defaults 0 0
$ exit

$ ssh ceph-osd1
$ sudo find /var/lib/ceph/osd/ceph-1 -name '*object-test1*'
$ sudo cat [FILENAME]
$ sudo vim /etc/fstab
/dev/vdb1 /var/lib/ceph/osd/ceph-1 xfs defaults 0 0
$ exit

$ rados ls --pool=pool-test1
$ rados rm object-test1 --pool=pool-test1
$ rados ls --pool=pool-test1
$ ceph osd pool delete pool-test1 pool-test1  --yes-i-really-really-mean-it
