Feb 23, 2015

HAProxy keepalived failover MySQL Galera


High Availability Services

At the enterprise level, application software should be able to guarantee that its services will not go down. Especially in the telecom industry, we care a lot about HA (high availability) of services.

HA applies to any service that needs to stay close to 100% available, whether it is a clustered database, an app server, or anything else.

How to double app services

You can set up HA by proxying your app instances with session stickiness so that every client session keeps running on the same backend. This can be done with an F5 appliance or an open-source load balancer, configured as round robin or failover.

Keepalive the load balancer

Even with a load balancer in front of your apps, there are still conditions that can take the balancer itself down, so you need a backup for the balancer as well. That is where keepalived comes in.

In this case, I want to set up HAProxy as the balancer and keepalived to back up the balancer for MySQL services. All of them are open-source software, so let's try:




Let's say:
h1 : 192.168.43.201
h2 : 192.168.43.202
virtual ip : 192.168.43.200

Create the users in MySQL (run this on the Galera nodes):

mysql -u root -p
grant all on *.* to root@'%' identified by 'Passw0rd' with grant option;
insert into mysql.user (Host,User) values ('192.168.43.201','haproxy');
insert into mysql.user (Host,User) values ('192.168.43.202','haproxy');
flush privileges;
exit;
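
To sanity-check that the users exist (the haproxy entries only need Host and User, since HAProxy's mysql-check just opens a connection), you can run something like this; the exact columns shown may vary by MySQL version:

mysql -u root -p -e "SELECT Host, User FROM mysql.user WHERE User IN ('haproxy', 'root');"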


DO THIS ON BOTH HAPROXY NODES:
sudo apt-get update
sudo apt-get install mysql-client keepalived haproxy -y
sudo vim /etc/sysctl.conf     (add this line so HAProxy can bind to the VIP even when it is not assigned locally)
net.ipv4.ip_nonlocal_bind=1

Apply and check it with:
sudo sysctl -p
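
If the setting was applied, the output should include:

net.ipv4.ip_nonlocal_bind = 1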


CREATE THIS FILE ON NODE 1 (set router_id to the node's hostname):
sudo vim  /etc/keepalived/keepalived.conf

global_defs {
  router_id h1
}
vrrp_script haproxy {
  script "killall -0 haproxy"    # exits 0 while an haproxy process is running
  interval 2                     # check every 2 seconds
  weight 2                       # adds 2 to the priority while the check passes
}
vrrp_instance 50 {
  virtual_router_id 50
  advert_int 1
  priority 101
  state MASTER
  interface eth0
  virtual_ipaddress {
    192.168.43.200 dev eth0
  }
  track_script {
    haproxy
  }
}


ON NODE 2:
sudo vim /etc/keepalived/keepalived.conf

global_defs {
  router_id h2
}
vrrp_script haproxy {
  script "killall -0 haproxy"    # exits 0 while an haproxy process is running
  interval 2
  weight 2
}
vrrp_instance 50 {
  virtual_router_id 50
  advert_int 1
  priority 102                   # the node with the highest effective priority holds the VIP
  state BACKUP
  interface eth0
  virtual_ipaddress {
    192.168.43.200 dev eth0
  }
  track_script {
    haproxy
  }
}



ON H1:
sudo vim /etc/haproxy/haproxy.cfg

global
        log 192.168.43.201 local0
        stats socket /var/lib/haproxy/stats
        maxconn 10000
        chroot /var/lib/haproxy
        user haproxy
        group haproxy
        daemon

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

listen stats 192.168.43.201:80
        mode http
        stats enable
        stats uri /stats
        stats realm HAProxy\ Statistics
        stats auth admin:Passw0rd1

ON H2:
sudo vim /etc/haproxy/haproxy.cfg

global
        log 192.168.43.202 local0
        stats socket /var/lib/haproxy/stats
        maxconn 10000
        chroot /var/lib/haproxy
        user haproxy
        group haproxy
        daemon

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

listen stats 192.168.43.202:80
        mode http
        stats enable
        stats uri /stats
        stats realm HAProxy\ Statistics
        stats auth admin:Passw0rd1



ON BOTH
sudo vim /etc/default/haproxy

# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
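
Before restarting, you can validate the configuration syntax on each node (haproxy's -c flag runs a configuration check only):

sudo haproxy -c -f /etc/haproxy/haproxy.cfg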


ON BOTH
sudo service keepalived restart
sudo service haproxy restart
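
To verify that keepalived came up and elected a VRRP master, a quick sanity check is to confirm the process is running and look at the VRRP messages in the standard Ubuntu syslog:

ps -ef | grep '[k]eepalived'
grep -i vrrp /var/log/syslog | tail -n 20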


TESTING (run this on each node; the VIP 192.168.43.200 should appear on exactly one of them):
ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.43.202/24 brd 192.168.43.255 scope global eth0
    inet 192.168.43.200/32 scope global eth0

Once you've completed all of these steps on both nodes, you should have a highly available load balancer pair. At this point the VIP should be active on exactly one node, the one with the highest effective VRRP priority; in the output above it is sitting on h2 (192.168.43.202). If that node fails, or its haproxy process dies, the VIP moves to the other node.



NOW SET IT UP FOR MySQL Galera:

Add the galera listener shown below to the HAProxy configuration on both nodes, then connect through the VIP:

 mysql -h 192.168.43.200 -u root -p

Then turn off one machine; the connection should switch over automatically to the other side.


ADD THIS ON BOTH:

listen galera 192.168.43.200:3306
        balance source                       # keep each client on the same Galera node
        mode tcp
        option tcpka
        option mysql-check user haproxy      # uses the passwordless haproxy account created earlier
        server m1 192.168.43.203:3306 check weight 1
        server m2 192.168.43.204:3306 check weight 1
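
After adding the listener on both nodes, reload HAProxy and check that the Galera cluster answers through the VIP (wsrep_cluster_size is a standard Galera status variable; with two nodes it should report 2):

sudo service haproxy reload
mysql -h 192.168.43.200 -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"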

Feb 20, 2015

MariaDB Galera cluster on Ubuntu 14.04


Today it's about a MariaDB cluster; you need this to serve a high volume of queries from your apps.

In my lab, I use 2 nodes with Ubuntu as the host OS. Let's try:



Let's say:

Node 1 : 192.168.43.203
Node 2 : 192.168.43.204

DO THIS ON BOTH NODES

sudo apt-get install python-software-properties rsync
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
sudo add-apt-repository 'deb http://mirror.jmu.edu/pub/mariadb/repo/5.5/ubuntu precise main'


OR YOU CAN USE THIS REPO INSTEAD (MariaDB 10.0 for trusty):
deb http://mariadb.biz.net.id//repo/10.0/ubuntu trusty main


sudo apt-get update
sudo apt-get install mariadb-galera-server galera

sudo vim /etc/mysql/my.cnf     (comment out the bind-address line so MariaDB is reachable from the other node)
#bind-address           = 127.0.0.1

CREATE THIS FILE ON BOTH NODES

sudo vim /etc/mysql/conf.d/cluster.cnf

ON NODE 1:

[mysqld]
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=0
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"

# Galera Cluster Configuration
wsrep_cluster_name="m_cluster"
wsrep_cluster_address="gcomm://192.168.43.203,192.168.43.204"

# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass

# Galera Node Configuration
wsrep_node_address="192.168.43.203"
wsrep_node_name="m1"


ON NODE 2:
sudo vim /etc/mysql/conf.d/cluster.cnf

[mysqld]
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=0
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"

# Galera Cluster Configuration
wsrep_cluster_name="m_cluster"
wsrep_cluster_address="gcomm://192.168.43.203,192.168.43.204"

# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass

# Galera Node Configuration
wsrep_node_address="192.168.43.204"
wsrep_node_name="m2"


COPY THIS FILE FROM NODE 1 TO NODE 2 (the debian-sys-maint credentials must match on both nodes):


sudo vim /etc/mysql/debian.cnf

[client]
host     = localhost
user     = debian-sys-maint
password = 03P8rdlknkXr1upa
socket   = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host     = localhost
user     = debian-sys-maint
password = 03P8rdlknkXr1upa
socket   = /var/run/mysqld/mysqld.sock
basedir  = /usr


RUN THIS ON THE FIRST NODE FIRST (it bootstraps the new cluster):

sudo service mysql stop
sudo service mysql start --wsrep-new-cluster


RUN THIS ON THE OTHER NODE(S); if MySQL is already running there, stop it first (sudo service mysql stop), then:
sudo service mysql start
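
To confirm that both nodes joined, check the Galera status variables on either node (wsrep_cluster_size is a standard Galera status variable; with two nodes it should be 2):

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"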

TESTING CLUSTER

ON NODE 1
mysql -u root -pMyPassword -e 'CREATE DATABASE testing;'
mysql -u root -pMyPassword -e 'CREATE TABLE testing.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));'
mysql -u root -pMyPassword -e 'INSERT INTO testing.equipment (type, quant, color) VALUES ("slide", 2, "blue")'

ON NODE 2

mysql -u root -pMyPassword -e 'SELECT * FROM testing.equipment;'
mysql -u root -pMyPassword -e 'INSERT INTO testing.equipment (type, quant, color) VALUES ("swing", 10, "yellow");'

ON NODE 1
mysql -u root -pMyPassword -e 'SELECT * FROM testing.equipment;'
+----+-------+-------+--------+
| id | type  | quant | color  |
+----+-------+-------+--------+
|  1 | slide |     2 | blue   |
|  2 | swing |    10 | yellow |
+----+-------+-------+--------+

Then you can see that all updates are synced on both nodes.



Feb 17, 2015

IBM Security Privileged Identity Manager (ISPIM) Workflow

Hi all,

The ISPIM workflow is very helpful for me; it lets me handle custom tasks, even complex ones.

You can see in this video how simple it can look, and IBM does this very well.






Feb 14, 2015

javascript move mouse

On my previous project, it was hard having to argue with others over a "dirty" programming concept.

It was about integrating Cisco Unified Communications with Oracle WebCenter to provide the full feature set of Cisco UCM on the Oracle portal.

They said that we just needed to develop a JavaScript client app using the Cisco UCM SDK and embed it in Oracle WebCenter. I said that was impossible; we needed another "trick" to integrate it: a JS server to serve the Cisco requests and then talk to Oracle via web services.

They were assuming that every JavaScript object could simply be run through eval functions. They didn't realize that the Cisco UCM SDK uses its own COM component on the client.

It comes down to a basic matter, the same as the question "how do you move the mouse with JavaScript?"


Well, let's look at the concept:

How JavaScript Works

JavaScript is a script that runs in the browser, on the client side, to help server apps present information. It gathers things: events, objects.

So when you expect JavaScript to do something at a lower level, it's impossible. Moving the cursor is at a lower level than the browser can reach.


Javascript move mouse


Well, the simple solution is:
  1. create a COM component (as a DLL or OCX) that moves the cursor,
  2. when the client starts browsing, it downloads your COM component,
  3. the client is asked to approve the installation,
  4. the COM component is registered on the client,
  5. your page calls the COM component via its application ID.
Other approaches are possible, but this is the simplest way to do it.


Feb 12, 2015

glusterFS HADR file-based implementation

High Availability - Disaster Recovery

In this post, I want to focus on deploying apps, such as web apps or others, that serve more than 10,000 requests per second. Especially in the telco industry, we are always challenged to provide a very high level of availability. Today I want to share a simple implementation of that.

There are two parts to this term:

High Availability

is the ability to serve a huge number of requests per second. You duplicate your app instances as many times as you can, then put a load balancer in front of them.
Say you develop in RoR or Struts; you can measure the throughput with Apache ab (ApacheBench), as in the sketch below, then scale out to the number of instances you need for the availability you want.
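
A minimal ab sketch; the URL is just a placeholder for wherever your app answers, -n is the total number of requests and -c the concurrency:

ab -n 10000 -c 100 http://your-app.example/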

Disaster Recovery

is the ability to survive, to keep serving even when one node is down, without any gap between nodes.




GlusterFS

This is a replicated, synced file system: when one node changes a file's content, the change is automatically propagated to the other node.

It makes everything simpler, because we set things up only once: configuration files, database files, and even the file repository live in a single place.

Implementation

In my lab, I use 4 VMs and attach a dedicated hard drive to the GlusterFS nodes to use as brick storage.

Put this in /etc/hosts on each host:

192.168.43.205  f1
192.168.43.206  f2


install glusterFS server on the active node (note: the replicated volume created below uses bricks on both f1 and f2, so both hosts need glusterfs-server)

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
sudo apt-get update
sudo apt-get install glusterfs-server



install glusterFS client on the passive node (or on any host that will mount the volume)

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
sudo apt-get update
sudo apt-get install glusterfs-client

Add a hard drive to the VM in VMware, then rescan the SCSI bus so the new disk shows up without a reboot:


sudo su
hosts='/sys/class/scsi_host'
for i in `ls $hosts`;
do
echo "- - -" > ${hosts}/${i}/scan    # trigger a rescan on each SCSI host
done


sudo lshw -short -c disk

H/W path             Device      Class       Description
========================================================
/0/100/10/0.0.0      /dev/sda    disk        21GB SCSI Disk
/0/100/10/0.1.0      /dev/sdb    disk        21GB SCSI Disk
/0/1/0.0.0           /dev/cdrom  disk        DVD-RAM writer

sudo mkfs.xfs /dev/sdb
sudo mkdir /mnt/mypoint
sudo mount /dev/sdb /mnt/mypoint
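
You can confirm the brick filesystem is mounted before handing it to Gluster (df is standard):

df -h /mnt/mypoint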




auto-mount the brick filesystem

Add this to /etc/fstab so the brick is mounted at boot:
/dev/sdb       /mnt/mypoint        xfs defaults    0 0
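
Before creating a replicated volume that spans f1 and f2, the two hosts have to be in the same trusted storage pool; gluster peer probe is the standard command for that (run it once from f1):

sudo gluster peer probe f2
sudo gluster peer status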



create volume


sudo gluster volume create bulkdata replica 2 transport tcp f1:/mnt/mypoint f2:/mnt/mypoint force
sudo gluster volume create bulkconfig replica 2 transport tcp f1:/var/confbrick f2:/var/confbrick force
sudo gluster volume start bulkdata
sudo gluster volume start bulkconfig
sudo gluster volume set bulkdata auth.allow 192.168.43.206     (allow access only from the passive node, f2)
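
To use the volumes from the client side, mount them with the GlusterFS native client. A minimal sketch; the mount point /mnt/bulkdata is just an example name, not part of the setup above:

sudo gluster volume info
sudo mkdir -p /mnt/bulkdata
sudo mount -t glusterfs f1:/bulkdata /mnt/bulkdata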





Feb 11, 2015

Implementation of a Complex IBM Tivoli Identity Manager (ITIM) Workflow

Implementing ISPIM in a telecommunications company.


Hi guys, today I want to share a solution approach for some telecommunication cases using the ITIM workflow, where vendors can be entitled to certain organizational roles.

ITIM Workflow

This software is very powerful for managing complex cases because it can be extended with JavaScript, but sometimes it is difficult to implement the guidance from the official website without knowing the whole process behind it.

Some fix packs change attributes in LDAP, so the methods to set and get them change as well.

Example :


Vendor with multiple roles

In this case, I will discuss a vendor with multiple roles who can choose which role will be used for approval.

Let's say vendor A is a member of both Business Partner Settlement Management and Interconnect and International Roaming Billing Management.

On an access request, he can choose which role he will use to get approval from an internal user at that company.


Solution approach

Well, we need one field for the vendor's priority role; this field is evaluated to determine the internal role that will approve the access request.



In the workflow, we serialize two approvals and check whether the person is a vendor or an internal employee.


For more details and a demo, you can check my YouTube channel.

Migration approach

Since we will roll out the new workflow to all users, we need to process each user serially and mark them one by one.

The marked person will use the new workflow without changing the existing workflow.

