Feb 12, 2015

GlusterFS HA/DR file-based implementation

High Availability - Disaster Recovery

In this post, I want to focus on deploying apps, such as web applications, that must serve more than 10,000 requests per second. In the telco industry especially, we are constantly challenged to keep applications available at that scale. Today I want to share a simple implementation for that.

There are two parts to this:

High Availability 

means being able to serve a huge number of requests per second. You scale out your application instances as far as you can, then put a load balancer in front of them.
Say you develop in RoR or Struts: you can measure throughput with Apache Bench (ab), then multiply the instance count until you reach the availability target you want.
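To make the sizing step concrete, here is a rough sketch. The URL and the measured rate are hypothetical examples, not figures from a real benchmark:

```shell
# Load-test one instance with ApacheBench (run against your own app;
# URL and counts are illustrative):
#   ab -n 10000 -c 100 http://localhost:3000/
# Then size the cluster: target rate divided by measured per-instance rate.
target=10000        # desired requests/sec
per_instance=2500   # hypothetical rate measured with ab
instances=$(( (target + per_instance - 1) / per_instance ))  # round up
echo "$instances"   # instances to put behind the load balancer
```

With these example numbers, four instances behind the load balancer would cover the 10,000 req/s target.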

Disaster Recovery 

is the ability to survive, to keep serving even when one node goes down, with no gap between nodes.




GlusterFS

This is a synced file system: when one node changes a file's contents, the change automatically propagates to the peer node.

It makes everything seem simple: we set up one instance, then place configuration files, database files, and even the file repository there just once.
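The effect, once a volume is mounted, is that a write on one node is immediately visible on the other. A minimal sketch (the mount point is a placeholder; on a real deployment MNT would be the gluster mount, e.g. a path under /mnt):

```shell
# MNT stands in for the gluster mount point; defaults to a temp dir
# here so the sketch is runnable without a cluster
MNT=${MNT:-/tmp/gluster-demo}
mkdir -p "$MNT"
# On f1: write a file through the mount
echo "hello from f1" > "$MNT/test.txt"
# On f2 the same path shows the same content, because GlusterFS
# replicates the write to the peer brick
cat "$MNT/test.txt"
```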

Implementation

In my lab, I use four VMs and attach a dedicated hard drive to each to store data as a GlusterFS node.

Put this in /etc/hosts on each host:

192.168.43.205  f1
192.168.43.206  f2
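Before going further, it's worth confirming the names actually resolve on each host (getent consults /etc/hosts directly):

```shell
# Verify name resolution on each host; f1/f2 come from /etc/hosts above
getent hosts f1 || echo "f1 not resolvable - check /etc/hosts"
getent hosts f2 || echo "f2 not resolvable - check /etc/hosts"
```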


Install the GlusterFS server on the active node:

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
sudo apt-get update
sudo apt-get install glusterfs-server



Install the GlusterFS client on the passive node (python-software-properties provides add-apt-repository, so it goes first):

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
sudo apt-get update
sudo apt-get install glusterfs-client

Add a hard drive to the VMware guest, then rescan the SCSI bus so the OS detects it without a reboot:


sudo su
# rescan every SCSI host so the newly attached disk appears
hosts='/sys/class/scsi_host'
for i in $(ls "$hosts"); do
    echo "- - -" > "${hosts}/${i}/scan"
done


sudo lshw -short -c disk

H/W path             Device      Class       Description
========================================================
/0/100/10/0.0.0      /dev/sda    disk        21GB SCSI Disk
/0/100/10/0.1.0      /dev/sdb    disk        21GB SCSI Disk
/0/1/0.0.0           /dev/cdrom  disk        DVD-RAM writer

sudo mkfs.xfs /dev/sdb
sudo mkdir /mnt/mypoint
sudo mount /dev/sdb /mnt/mypoint




auto-mount the brick

Add to /etc/fstab so the XFS brick partition mounts at boot:
/dev/sdb       /mnt/mypoint        xfs defaults    0 0
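The fstab line above covers only the XFS brick. To auto-mount the gluster volume itself on the client at boot, a glusterfs entry would look something like this (the /mnt/shared mount point is an assumption; the bulkdata volume name matches the volume created in the next step):

```
f1:/bulkdata   /mnt/shared   glusterfs   defaults,_netdev   0 0
```

The _netdev option delays the mount until the network is up, which a network filesystem needs.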



create volume

Probe the peer first so both nodes join the trusted pool, then create and start the replicated volumes. Finally, restrict access to the passive node, f2 (192.168.43.206):

sudo gluster peer probe f2
sudo gluster volume create bulkdata replica 2 transport tcp f1:/mnt/mypoint f2:/mnt/mypoint force
sudo gluster volume create bulkconfig replica 2 transport tcp f1:/var/confbrick f2:/var/confbrick force
sudo gluster volume start bulkdata
sudo gluster volume start bulkconfig
sudo gluster volume set bulkdata auth.allow 192.168.43.206
sudo gluster volume set bulkconfig auth.allow 192.168.43.206
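With the volumes started, the passive node can mount them over the native protocol and the setup can be verified. A sketch, written out as a script to run as root on f2 (the /mnt/shared mount point is an assumption):

```shell
# Write a client-side mount script for f2; the mount point is hypothetical
cat > /tmp/mount-gluster.sh <<'EOF'
#!/bin/sh
# mount the replicated volume via the GlusterFS native client
mkdir -p /mnt/shared
mount -t glusterfs f1:/bulkdata /mnt/shared
# check peer and volume health
gluster peer status
gluster volume info bulkdata
EOF
chmod +x /tmp/mount-gluster.sh
```

Run the script as root on f2 once glusterfs-client is installed; gluster peer status should show f1 as connected, and files written under /mnt/shared will land on both bricks.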








