VMware CentOS Failover Pair With A Floating Volume

CentOS 5 Build Document for IE-TRAF-DB2 and IE-TRAF-DB3 on VMware

This document describes how to build a traffic monitoring server cluster on VMware using the following:

  • CentOS 5 Template Image

Requirements

This server pair is required to perform traffic monitoring for the iHouse Elite software product.

This build has the following hardware requirements:

  • Located on Production VLAN (44)
  • Heartbeat on ESX Internal VLAN (77)
  • 512 MB RAM
  • Large volume with expansion capabilities to store the MySQL Database.

This build requires the following software packages:

  • MySQL Server 5.0
  • Heartbeat

Prepare the CentOS 5 Template

VMware Infrastructure Client

In the VMware Infrastructure Client, select "Virtual Machines and Templates". Navigate to Verizon Colo > Templates > Base, then right-click the CentOS 5 Template and choose to deploy a virtual machine from the template. Follow the wizard to set up and configure the production deployment of the first server, IE-TRAF-DB2. Repeat this step to deploy the second server, IE-TRAF-DB3.

Both IE-TRAF-DB2 and IE-TRAF-DB3 will have one back-end (production) interface and one heartbeat interface; however, we want to set the IP address of each NIC before we bring these interfaces up. Configure each VM to use VLAN 44 for the "internal" interface and VLAN 77 for the heartbeat interface, and uncheck the checkbox so that the NICs are not connected at power on.

Add The "Floating" Volume

Add a volume in the SAN-EQL manager:

  • Use Thin Provisioning
  • Make it a Shared Volume
  • 30 GB

Attach the volume to one of the guests:

1. Rescan the iSCSI connections on each ESX host.
2. Edit Settings for that guest.
3. Select Add.
4. Select Hard Disk.

   a.  Use Raw Device Mappings.
   b.  Select Target LUN - find your disk.
   c.  Specify Datastore - Select the datastore to store this disk/file.
   d.  Select Physical.

5. Remove the disk you have just added.

Attach the volume to the other guest:


1. Edit Settings for that guest.
2. Select Add.
3. Select Hard Disk.

   a.  Use Raw Device Mappings.
   b.  Select Target LUN - find your disk.
   c.  Specify Datastore - Select the datastore to store this disk/file.
   d.  Select Physical.

4. On that guest:

fdisk /dev/sdb     (at the prompts enter n, p, 1, <Enter>, <Enter>, w to create one primary partition spanning the disk)
mkfs.ext3 /dev/sdb1

Back to the first guest:

1. Edit Settings for that guest.
2. Select Add.
3. Select Hard Disk.

   a.  Use an Existing Virtual Disk.
   b.  Browse to the datastore where you stored this disk's mapping file; it will be under the guest's hostname directory and has a label ending in 1.
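
Before continuing, it is worth confirming that the first guest now sees the partition that was just created on the other guest, for example:

fdisk -l /dev/sdb  (you should see /dev/sdb1 listed as a Linux partition)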

On Both Guests:

Add the volume to fstab:

nano -w /etc/fstab
/dev/sdb1		/var/lib/mysql		ext3	noauto		0 0
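
Because the volume is plain ext3 rather than a cluster filesystem, it must only ever be mounted on one guest at a time; heartbeat will manage that later. As a quick sanity check you can mount it by hand on a single guest (a sketch, assuming the /var/lib/mysql mount point does not exist yet):

mkdir -p /var/lib/mysql  (the mysql-server package will also create this directory later)
mount /var/lib/mysql
df -h /var/lib/mysql
umount /var/lib/mysql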

Go ahead and power on the servers (if they are not already running), then enter the console of each VM, since there will not be any network connectivity initially.

Configure the Servers

1. Log in to each server as root and update the network file accordingly:

nano -w /etc/sysconfig/network
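
For example, on IE-TRAF-DB2 this file will look something like the following (a minimal sketch; the HOSTNAME value assumes the node names used later in ha.cf, and the default gateway is set per interface in the next step):

NETWORKING=yes
HOSTNAME=ie-traf-db2.internal.cisdata.net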

2. Change the IP addresses and settings for the first interface:

nano -w /etc/sysconfig/network-scripts/ifcfg-eth0

For IE-TRAF-DB2's ifcfg-eth0:

DEVICE=eth0
BOOTPROTO=none
BROADCAST=192.168.44.255
IPADDR=192.168.44.196
NETMASK=255.255.255.0
NETWORK=192.168.44.0
ONBOOT=yes
GATEWAY=192.168.44.1
TYPE=Ethernet

For IE-TRAF-DB3's ifcfg-eth0:

DEVICE=eth0
BOOTPROTO=none
BROADCAST=192.168.44.255
IPADDR=192.168.44.197
NETMASK=255.255.255.0
NETWORK=192.168.44.0
ONBOOT=yes
GATEWAY=192.168.44.1
TYPE=Ethernet

3. Copy and change the IP address and settings for the second interface:

cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
nano -w /etc/sysconfig/network-scripts/ifcfg-eth1

For IE-TRAF-DB2's eth1:

DEVICE=eth1
BOOTPROTO=none
BROADCAST=192.168.77.207
IPADDR=192.168.77.205
NETMASK=255.255.255.252
NETWORK=192.168.77.204
ONBOOT=yes
TYPE=Ethernet

For IE-TRAF-DB3's eth1:

DEVICE=eth1
BOOTPROTO=none
BROADCAST=192.168.77.207
IPADDR=192.168.77.206
NETMASK=255.255.255.252
NETWORK=192.168.77.204
ONBOOT=yes
TYPE=Ethernet

4. Enable necessary services:

chkconfig --level 3 ypbind on
chkconfig --level 3 sshd on

5. Enable user /net and /backup directories in fstab:

nano -w /etc/fstab

6. Update the hostname in the hosts file for 127.0.0.1:

nano -w /etc/hosts
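
For example, on IE-TRAF-DB2 the result might look like this (a sketch; the second line for the peer node is an optional addition so the nodes can resolve each other even if DNS is unavailable):

127.0.0.1       ie-traf-db2.internal.cisdata.net ie-traf-db2 localhost.localdomain localhost
192.168.44.197  ie-traf-db3.internal.cisdata.net ie-traf-db3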

7. Verify nameservers:

nano -w /etc/resolv.conf

8. Shutdown Server:

shutdown -h now

Go Back Into the VMware Infrastructure Client

Find the VMs and edit the configuration of both guests. Under Summary, select Edit Settings and check the checkbox so that both NICs are connected at startup. Then power on the servers.

Install Necessary Software

Log into the VM via SSH and sudo up to root on each server.

1. Make sure that MySQL is installed and will start up:

yum list | grep mysql  (look for Installed)
yum install mysql-server.i386

2. Install Heartbeat:

yum list | grep heartbeat
yum install heartbeat.i386
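
To double-check that both packages really are installed before moving on, query the RPM database, for example:

rpm -q mysql-server heartbeat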

Configure Software

MySQL

Now that the software is installed, configure MySQL:

mount /dev/sdb1 /var/lib/mysql
mysql_install_db
chown -R mysql:mysql /var/lib/mysql/
umount /var/lib/mysql

nano -w /etc/my.cnf

ADD (to [mysqld]):

bind-address    = 192.168.44.195
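
For reference, the resulting [mysqld] section will look something like this (a sketch; the datadir and socket lines are the stock CentOS 5 defaults, and only the bind-address line is added by this build):

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
bind-address=192.168.44.195

Note that 192.168.44.195 is the floating service IP managed by heartbeat, so mysqld will only bind successfully on whichever node currently holds that address.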

Heartbeat

...and for the HA software:

1. Make the ha.cf file:

nano -w /etc/ha.d/ha.cf

debugfile /var/log/ha.debug
logfile /var/log/ha.log
logfacility     local0
bcast eth1
keepalive 200ms
warntime 5
deadtime 10
initdead 30
udpport 694
auto_failback off
node ie-traf-db2.internal.cisdata.net
node ie-traf-db3.internal.cisdata.net
respawn hacluster /usr/lib/heartbeat/ccm
respawn hacluster /usr/lib/heartbeat/ipfail
ping 192.168.44.1

2. Make the haresources file:

nano -w /etc/ha.d/haresources

ie-traf-db2.internal.cisdata.net  IPaddr::192.168.44.195 Filesystem::/dev/sdb1::/var/lib/mysql::ext3::defaults mysqld

3. Make the authkeys file:

nano -w /etc/ha.d/authkeys

auth 1
1 crc

4. Scp the three configuration files just made to the other server:

scp /etc/ha.d/ha.cf root@ie-traf-db3:/etc/ha.d/ha.cf
scp /etc/ha.d/haresources root@ie-traf-db3:/etc/ha.d/haresources
scp /etc/ha.d/authkeys root@ie-traf-db3:/etc/ha.d/authkeys
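
Heartbeat will refuse to start if the authkeys file is readable by anyone other than root, so tighten its permissions on both servers:

chmod 600 /etc/ha.d/authkeys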

5. Make symlink to mysqld in init.d:

ln -sf /etc/init.d/mysqld /etc/ha.d/resource.d/mysqld
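
Finally, assuming heartbeat alone should control mysqld (a conventional finishing step, not spelled out above), disable the normal mysqld init script and then enable and start heartbeat on both nodes:

chkconfig mysqld off
chkconfig --level 3 heartbeat on
service heartbeat start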

Testing

Now let's see if things are working correctly here:

Test Failover

To test the failover, simply do one of the following:

1. Restart the guest.
2. Restart the heartbeat daemon:

/etc/init.d/heartbeat restart

3. Issue a "standby" command to the heartbeat daemon:

/usr/lib/heartbeat/hb_standby

To take the resources over from the other node:

/usr/lib/heartbeat/hb_takeover

There are two HA logfiles, /var/log/ha.log and /var/log/ha.debug. The debug file is more verbose.
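
While testing, it helps to watch the log on each node from a separate terminal, for example:

tail -f /var/log/ha.log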

Test Database

To test the MySQL database, you must first create a database, define a table, insert some data, and grant access to a remote user. Then, from the remote side, log into that database and query data from it while you test the HA failover with your favorite method from above.

From Active MySQL Server in Cluster:

1. Create Database:

mysql
create database employees;

2. Create table:

mysql
use employees
CREATE TABLE employee_data
(
emp_id int unsigned not null auto_increment primary key,
f_name varchar(20),
l_name varchar(20),
title varchar(30),
age int,
yos int,
salary int,
perks int,
email varchar(60)
);

DESCRIBE employee_data;

3. Insert data into table:

mysql
INSERT INTO employee_data
(f_name, l_name, title, age, yos, salary, perks, email)
values
("Manish", "Sharma", "CEO", 28, 4, 200000, 
50000, "manish@bignet.com");

4. Grant Access to Remote User:

mysql
GRANT ALL ON employees.* TO root@'192.168.45.243' IDENTIFIED BY '';

On Remote Client:

1. Connect to the database:

mysql -u root -h 192.168.44.195
use employees;

2. Issue query to database:

SELECT f_name, l_name from employee_data;

Keep pressing the up arrow and Enter to re-issue this query, or use the polling loop shown after step 3.

3. Issue an HA failover using the commands above.
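
Instead of re-running the query by hand, you can also poll the cluster from the remote client while triggering the failover (one possible approach; adjust the interval to taste):

watch -n 1 'mysql -u root -h 192.168.44.195 -e "SELECT f_name, l_name FROM employees.employee_data"'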