Networking:
We have two nodes that are reachable via the networks below:
10.1.1.0/24 : Cluster heartbeat VLAN.
172.16.1.0/24 : LAN with access to the internal network and the Internet.
We have set the following hostnames:
[master ~]# hostnamectl set-hostname master.sysadmin.lk
[client ~]# hostnamectl set-hostname client.sysadmin.lk
We defined the hostnames and IP addresses in the “/etc/hosts” file (on both nodes), as shown below:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.1  master.sysadmin.lk   # Physical IP address of NODE1
172.16.1.2  client.sysadmin.lk   # Physical IP address of NODE2
10.1.1.1    NODE1                # Heartbeat IP address of NODE1
10.1.1.2    NODE2                # Heartbeat IP address of NODE2
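As a quick optional sanity check (using the hostnames and IP addresses from the example above), you can verify from NODE1 that both the LAN and the heartbeat network reach the other node:
# ping -c 2 client.sysadmin.lk    # LAN network (172.16.1.0/24)
# ping -c 2 NODE2                 # heartbeat network (10.1.1.0/24)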
To set up the IP address and hostname, refer to the link below.
Configure Static IP and Hostname On CentOS/RHEL7
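If you prefer to configure the addresses from the command line, a minimal sketch using nmcli could look like the following, run on NODE1. The connection names “eth0” and “eth1” are assumptions for this example and should be replaced with your actual interface/connection names:
# nmcli connection modify eth0 ipv4.addresses 172.16.1.1/24 ipv4.method manual   # LAN address (assumed connection name)
# nmcli connection modify eth1 ipv4.addresses 10.1.1.1/24 ipv4.method manual     # heartbeat address (assumed connection name)
# nmcli connection up eth0 && nmcli connection up eth1
Run the equivalent commands on NODE2 with its own addresses (172.16.1.2 and 10.1.1.2).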
Configure the Basic Cluster:
There are a few steps to implement the basic cluster.
1) Install the Pacemaker configuration tool packages.
Install the packages using the yum command (perform the commands below on both nodes). First, create a yum repository file named “ClusterHA.repo”:
#vim /etc/yum.repos.d/ClusterHA.repo
[centos-7-base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
enabled=1
gpgcheck=0
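After saving the repository file, you can optionally confirm that yum picks it up before installing:
# yum clean all
# yum repolist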
# yum install pcs pacemaker fence-agents-all psmisc policycoreutils-python
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.myfahim.com
 * extras: centos.myfahim.com
 * updates: centos.myfahim.com
Resolving Dependencies
--> Running transaction check
---> Package fence-agents-all.x86_64 0:4.0.11-47.el7_3.5 will be installed
.
.
.
 kpartx            x86_64   0.4.9-99.el7_3.3   updates    68 k
 libsemanage       x86_64   2.5-5.1.el7_3      updates   144 k
 policycoreutils   x86_64   2.5-11.el7_3       updates   841 k

Transaction Summary
=====================================================================
Install  5 Packages (+113 Dependent packages)
Upgrade             (  5 Dependent packages)

Total download size: 30 M
Is this ok [y/d/N]: y
2) Configure firewalld to allow cluster components on both nodes.
Using the following commands, you can allow the high-availability service through the firewall.
#firewall-cmd --permanent --add-service=high-availability
#firewall-cmd --reload
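To verify that the rule was applied, you can list the allowed services; “high-availability” should appear in the output:
# firewall-cmd --list-services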
3) Start and enable the pcsd daemon on each node.
Perform the commands below on both nodes to start and enable the pcsd daemon.
# systemctl start pcsd
# systemctl status pcsd
● pcsd.service - PCS GUI and remote configuration interface
   Loaded: loaded (/usr/lib/systemd/system/pcsd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-06-14 03:21:51 EDT; 4s ago
 Main PID: 2909 (pcsd)
   CGroup: /system.slice/pcsd.service
           └─2909 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &

Jun 14 03:21:51 master.sysadmin.lk systemd[1]: Starting PCS GUI and remote configuration interface...
Jun 14 03:21:51 master.sysadmin.lk systemd[1]: Started PCS GUI and remote configuration interface.
# systemctl enable pcsd
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
#
Note 1: The “enable” option activates the pcsd service at boot.
Note 2: This step must be performed on both nodes.
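pcsd listens on TCP port 2224 by default; as an optional check, you can confirm on both nodes that the daemon is listening:
# ss -tnlp | grep 2224    # should show the pcsd/ruby process bound to port 2224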
4) Create the Cluster.
Before creating the cluster, we have to set up the authentication needed for pcs by setting a password for the “hacluster” user (on both nodes), using the command below.
# echo Cluster%P@ssWD | passwd --stdin hacluster
Node1:
Sample Output:
[master ~]# echo Cluster%P@ssWD | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
[master ~]#
Node2:
Sample Output:
[client ~]# echo Cluster%P@ssWD | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
[client ~]#
Log in to any one of the cluster nodes and authenticate the “hacluster” user, using the following command.
#pcs cluster auth NODE1 NODE2 -u hacluster -p Cluster%P@ssWD --force
[master ~]# pcs cluster auth NODE1 NODE2 -u hacluster -p Cluster%P@ssWD --force
NODE1: Authorized
NODE2: Authorized
[master ~]#
Now create a cluster and populate it with some nodes.
# pcs cluster setup --force --name pacemaker1 node1 node2
[master ~]# pcs cluster setup --force --name AsteriskCluster NODE1 NODE2
Destroying cluster on nodes: NODE1, NODE2...
NODE1: Stopping Cluster (pacemaker)...
NODE2: Stopping Cluster (pacemaker)...
NODE1: Successfully destroyed cluster
NODE2: Successfully destroyed cluster
Sending cluster config files to the nodes...
NODE1: Succeeded
NODE2: Succeeded
Synchronizing pcsd certificates on nodes NODE1, NODE2...
NODE1: Success
NODE2: Success
Restarting pcsd on the nodes in order to reload the certificates...
NODE1: Success
NODE2: Success
[master ~]#
Note: The name of the cluster cannot exceed 15 characters. We use ‘AsteriskCluster’.
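The setup command generates the corosync configuration on both nodes. If you want to review what was written, for example to confirm the cluster name and node list, you can simply view the file:
# cat /etc/corosync/corosync.conf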
5) Start the Cluster on all the Nodes.
We use the “pcs” command to start the cluster.
#pcs cluster start --all
[master ~]# pcs cluster start --all
NODE2: Starting Cluster...
NODE1: Starting Cluster...
[master ~]#
Note: The “--all” option starts the cluster on all configured nodes; without it, only the local node is started.
To check the cluster status, the following command is used:
[master ~]# pcs status
Cluster name: AsteriskCluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: NODE1 (version 1.1.15-11.el7_3.4-e174ec8) - partition with quorum
Last updated: Wed Jun 14 03:44:15 2017
Last change: Wed Jun 14 03:43:28 2017 by hacluster via crmd on NODE1

2 nodes and 0 resources configured

Online: [ NODE1 NODE2 ]

No resources

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[master ~]#
Note: You can also use the ‘crm_mon -1’ command to check the status of the services running on the cluster.
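The Daemon Status above shows corosync and pacemaker as “active/disabled”, i.e. they will not start automatically after a reboot. If you want the cluster services to start at boot on both nodes, you can optionally enable them (this is a design choice; some administrators prefer to start a rebooted node manually):
# pcs cluster enable --all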
Fencing is the disconnection of a node from the cluster’s shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity. The cluster infrastructure performs fencing through the fence daemon, fenced. In a pcs-based cluster, Pacemaker enables STONITH/fencing by default in order to protect the data.
If you check the status above, it shows the warning “no stonith devices and stonith-enabled is not false”. We will disable STONITH for the time being; the following commands are used to check the current setting and disable it.
[master ~]#pcs property show stonith-enabled
Note: Perform the above command on one of the cluster nodes.
[master ~]# pcs property set stonith-enabled=false
Cluster Properties:
 stonith-enabled: false
[master ~]#
One important point: we are deploying Pacemaker in a 2-node configuration. Quorum as a concept makes no sense in this scenario, because you only have quorum when more than half of the nodes are available, so we will disable it too, using the following commands.
[master ~]# pcs property set no-quorum-policy=ignore
[master ~]# pcs property show no-quorum-policy
Cluster Properties:
 no-quorum-policy: ignore
[master ~]#
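To confirm both property changes in one place, running “pcs property show” with no arguments should list all cluster properties that have been explicitly set:
[master ~]# pcs property show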
Let's add the cluster resource.
#pcs resource create VIP ocf:heartbeat:IPaddr2 ip=172.16.1.10 cidr_netmask=32 op monitor interval=30s
Where:
“VIP” is the name the service will be known as.
“ocf:heartbeat:IPaddr2” tells Pacemaker which resource agent (script) to use.
“op monitor interval=30s” tells Pacemaker to check the health of this service every 30 seconds by calling the agent’s monitor action.
For more examples of the pcs resource command, refer to the link here.
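Once the resource is created, you can optionally confirm on the node that currently runs it (NODE1 in the status output below) that the virtual IP 172.16.1.10 was actually added to an interface, and review the resource definition (on the pcs version shipped with CentOS/RHEL 7, “pcs resource show <resource>” prints its configuration):
[master ~]# ip addr show | grep 172.16.1.10
[master ~]# pcs resource show VIP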
Now check the status of the Cluster.
[master ~]# pcs status
Cluster name: AsteriskCluster
Stack: corosync
Current DC: NODE1 (version 1.1.15-11.el7_3.4-e174ec8) - partition with quorum
Last updated: Wed Jun 14 06:21:03 2017
Last change: Wed Jun 14 06:12:58 2017 by root via crm_resource on NODE1

2 nodes and 2 resources configured

Online: [ NODE1 NODE2 ]

Full list of resources:

 VIP    (ocf::heartbeat:IPaddr2):       Started NODE1

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[master ~]#
Note: We used IPaddr2 and not IPaddr: both resource agents manage virtual IPv4 addresses, but IPaddr is the older, portable version, while IPaddr2 is the Linux-specific version.
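As a final optional test, you can verify that the VIP fails over: put the node that currently hosts it into standby, confirm with “pcs status” that the VIP has moved to the other node, and then bring the node back online. The commands below assume the pcs 0.9 syntax shipped with CentOS/RHEL 7:
[master ~]# pcs cluster standby NODE1      # VIP should move to NODE2
[master ~]# pcs status
[master ~]# pcs cluster unstandby NODE1    # return NODE1 to normal operation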