Pacemaker is a high-availability cluster software for Linux-like operating systems. Pacemaker is known as the ‘Cluster Resource Manager‘: it provides maximum availability of cluster resources by failing resources over between the cluster nodes.

Pacemaker uses Corosync for heartbeat and internal communication among the cluster components; Corosync also takes care of quorum in the cluster.
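Quorum means a majority of the nodes' votes must be present before the cluster will run resources. As a quick hedged sketch (plain arithmetic, not cluster code), the majority threshold for an n-node cluster is floor(n/2) + 1:

```shell
# Majority quorum threshold for an n-node cluster: floor(n/2) + 1
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # prints 2: a 3-node cluster survives one node failure
quorum 5   # prints 3: a 5-node cluster survives two node failures
quorum 2   # prints 2: both nodes would be required
```

Note the two-node case: plain majority voting would require both nodes, so a two-node cluster like the one in this article relies on fencing plus Corosync's special two-node handling rather than strict majority quorum.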

In this article we will demonstrate the installation and configuration of a two-node Apache (httpd) web server cluster using Pacemaker on CentOS 7.

In my setup I will use two virtual machines and shared storage from a Fedora server (two disks will be shared: one disk will be used as the fencing device and the other as shared storage for the web server).

  • node1 ( ) — CentOS 7.x
  • node2 ( ) — CentOS 7.x

Step:1 Update the ‘/etc/hosts’ file

Add entries for node1 and node2 to the /etc/hosts file on both the nodes.
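As a hedged sketch (the IP addresses below are placeholders for your own, and HOSTS defaults to /etc/hosts), the entries can be appended idempotently so re-running the step never duplicates lines:

```shell
# Append each node entry only if it is not already present.
# 192.168.1.51/52 are placeholder addresses -- substitute your own.
HOSTS="${HOSTS:-/etc/hosts}"
for entry in "192.168.1.51   node1" "192.168.1.52   node2"; do
    grep -qxF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
```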

Step:2 Install the cluster and other required packages.

Use the below yum commands on both the nodes to install the cluster package (pcs), fence agents & the web server (httpd):

[root@node1 ~]# yum -y update
[root@node1 ~]# yum -y install pcs fence-agents-all iscsi-initiator-utils httpd
[root@node2 ~]# yum -y update
[root@node2 ~]# yum -y install pcs fence-agents-all iscsi-initiator-utils httpd

Step:3 Set the password for the ‘hacluster’ user

It is recommended to use the same ‘hacluster’ password on both the nodes.

[root@node1 ~]# echo <new-password> | passwd --stdin hacluster
[root@node2 ~]# echo <new-password> | passwd --stdin hacluster

Step:4 Allow the High Availability ports in the firewall.

Use the ‘firewall-cmd‘ command on both the nodes to open the High Availability ports in the OS firewall.

[root@node1 ~]# firewall-cmd --permanent --add-service=high-availability
[root@node1 ~]# firewall-cmd --reload
[root@node2 ~]# firewall-cmd --permanent --add-service=high-availability
[root@node2 ~]# firewall-cmd --reload

Step:5 Start the cluster service and authorize the nodes to join the cluster.

Let's start the cluster service on both the nodes:

[root@node1 ~]# systemctl start pcsd.service
[root@node1 ~]# systemctl enable pcsd.service
ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/'
[root@node2 ~]# systemctl start pcsd.service
[root@node2 ~]# systemctl enable pcsd.service
ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/'

Use the below command on either node to authorize the nodes to join the cluster.

[root@node1 ~]# pcs cluster auth node1 node2
Username: hacluster
node1: Authorized
node2: Authorized
[root@node1 ~]#

Step:6 Create the Cluster & enable the Cluster Service

Use the below pcs command on any one of the cluster nodes to create a cluster named ‘apachecluster‘, with node1 & node2 as the cluster nodes.

[root@node1 ~]# pcs cluster setup --start --name apachecluster node1 node2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
node1: Succeeded
node2: Succeeded
Starting cluster on nodes: node1, node2...
node2: Starting Cluster...
node1: Starting Cluster...
Synchronizing pcsd certificates on nodes node1, node2...
node1: Success
node2: Success
Restarting pcsd on the nodes in order to reload the certificates...
node1: Success
node2: Success
[root@node1 ~]#

Enable the cluster service using the below pcs command:

[root@node1 ~]# pcs cluster enable --all
node1: Cluster Enabled
node2: Cluster Enabled
[root@node1 ~]#

Now verify the cluster service:

[root@node1 ~]# pcs cluster status


Step:7 Set up iSCSI shared storage on the Fedora server for both the nodes.

IP address of Fedora 23 Server =

Install the required package first.

[root@fedora ~]# dnf -y install targetcli

I have a new disk (/dev/sdb) of size 11 GB on my Fedora server, on which I have created two LVs: one for fencing and the other for the Apache file system.

[root@fedora ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@fedora ~]# vgcreate cluster_data /dev/sdb
  Volume group "cluster_data" successfully created
[root@fedora ~]# lvcreate -L 1G -n fence_storage cluster_data
  Logical volume "fence_storage" created.
[root@fedora ~]# lvcreate -L 10G -n apache_storage cluster_data
  Logical volume "apache_storage" created.
[root@fedora ~]#

Get the initiator names of both the nodes.

[root@node1 ~]# cat /etc/iscsi/initiatorname.iscsi
[root@node2 ~]# cat /etc/iscsi/initiatorname.iscsi
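The output of that file was lost in this copy; it contains a single InitiatorName line per node. A typical entry looks like the following (the IQN shown is a made-up example; each node prints its own):

```
InitiatorName=iqn.1994-05.com.redhat:b1a2c3d4e5
```

These initiator IQNs are what you will paste into the ACL entries during the targetcli session below.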

Now use the ‘targetcli‘ command to configure the iSCSI storage for both nodes.

[root@fedora ~]# targetcli
/> cd /backstores/block
/backstores/block> create apache-fs /dev/cluster_data/apache_storage
/backstores/block> create fence-storage /dev/cluster_data/fence_storage
/backstores/block> cd /iscsi
/iscsi> create
/iscsi> cd
/iscsi/iqn.20...9c6/tpg1/luns> create /backstores/block/apache-fs
/iscsi/iqn.20...9c6/tpg1/luns> create /backstores/block/fence-storage
/iscsi/iqn.20...9c6/tpg1/luns> cd ../acls
/iscsi/iqn.20...9c6/tpg1/acls> create
/iscsi/iqn.20...9c6/tpg1/acls> create
/iscsi/iqn.20...9c6/tpg1/acls> cd /
/> saveconfig
/> exit


Start & enable the target service

[root@fedora ~]# systemctl start target.service
[root@fedora ~]# systemctl enable target.service
[root@fedora ~]#

Open the iSCSI port in the OS firewall.

[root@fedora ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@fedora ~]# firewall-cmd --reload
[root@fedora ~]#

Now scan the iSCSI storage on both the nodes.

Run the below commands on both nodes:

# iscsiadm --mode discovery --type sendtargets --portal
# iscsiadm -m node -T -l -p

Replace the target ‘iqn’ and ‘IP address’ as per your setup. After executing the above commands, we can see two new disks in the ‘fdisk -l’ output.
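Since both the portal address and the target IQN must be filled in from your own setup, here is a hedged dry-run sketch that only assembles the two command lines (both values below are made up):

```shell
# Placeholder values -- substitute your Fedora server IP and the
# target IQN shown during the 'targetcli' session.
portal="192.168.1.20"
target="iqn.2003-01.org.example:target1"

# Build the commands and print them instead of executing (dry run).
echo "iscsiadm --mode discovery --type sendtargets --portal $portal"
echo "iscsiadm -m node -T $target -l -p $portal"
```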


List the IDs of the newly scanned iSCSI disks.

[root@node1 ~]# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx. 1 root root 9 Feb 21 03:22 wwn-0x60014056e8763c571974ec3b78812777 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Feb 21 03:22 wwn-0x6001405ce01173dcd7c4c0da10051405 -> ../../sdc
[root@node1 ~]#

Start and enable the iSCSI services on both the nodes.

[root@node1 ~]# systemctl start iscsi.service
[root@node1 ~]# systemctl enable iscsi.service
[root@node1 ~]# systemctl enable iscsid.service
ln -s '/usr/lib/systemd/system/iscsid.service' '/etc/systemd/system/'
[root@node2 ~]# systemctl start iscsi.service
[root@node2 ~]# systemctl enable iscsi.service
[root@node2 ~]# systemctl enable iscsid.service
ln -s '/usr/lib/systemd/system/iscsid.service' '/etc/systemd/system/'

Step:8 Create the Cluster Resources.

Define a stonith (Shoot The Other Node In The Head) fencing device for the cluster. Fencing is a method of isolating a node from the cluster when the node becomes unresponsive.

I am using the 1 GB iSCSI disk (/dev/sdc) for fencing.

Run the following commands on either node:

[root@node1 ~]# pcs stonith create scsi_fencing_device fence_scsi pcmk_host_list="node1 node2" pcmk_monitor_action="metadata" pcmk_reboot_action="off" devices="/dev/disk/by-id/wwn-0x6001405ce01173dcd7c4c0da10051405" meta provides="unfencing"
[root@node1 ~]# pcs stonith show
 scsi_fencing_device	(stonith:fence_scsi):	Started node1
[root@node1 ~]#

Now create a partition on the second iSCSI disk (/dev/sdb) that will be used as the document root for our web server.

[root@node1 ~]# fdisk /dev/disk/by-id/wwn-0x60014056e8763c571974ec3b78812777


Format the newly created partition :

[root@node1 ~]# mkfs.ext4 /dev/disk/by-id/wwn-0x60014056e8763c571974ec3b78812777-part1

Mount the new file system temporarily on /var/www, create the sub-folders, and set the SELinux context.

[root@node1 html]# mount /dev/disk/by-id/wwn-0x60014056e8763c571974ec3b78812777-part1 /var/www/
[root@node1 html]# mkdir /var/www/html
[root@node1 html]# mkdir /var/www/cgi-bin
[root@node1 html]# mkdir /var/www/error
[root@node1 html]# restorecon -R /var/www
[root@node1 html]# echo "Apache Web Server Pacemaker Cluster" > /var/www/html/index.html

Unmount the file system now, because the cluster will mount the file system when required.

[root@node1 html]# umount /var/www/
[root@node1 html]#

Create the web server file system cluster resource using the below pcs command.

[root@node1 html]# pcs resource create webserver_fs Filesystem device="/dev/disk/by-id/wwn-0x60014056e8763c571974ec3b78812777-part1" directory="/var/www" fstype="ext4" --group webgroup
[root@node1 html]# pcs resource show
 Resource Group: webgroup
     webserver_fs	(ocf::heartbeat:Filesystem):	Started node1
[root@node1 html]#

Add the following lines to the ‘/etc/httpd/conf/httpd.conf’ file on both the nodes. The Allow directive is assumed here to permit the loopback address, which is where the apache resource agent polls the status page from.

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

Open the httpd (web server) port in the OS firewall on both the nodes.

[root@node1 ~]# firewall-cmd --permanent --add-service=http
[root@node1 ~]# firewall-cmd --reload
success
[root@node1 ~]#
[root@node2 ~]# firewall-cmd --permanent --add-service=http
[root@node2 ~]# firewall-cmd --reload
[root@node2 ~]#

Create the virtual IP (IPaddr2) cluster resource using the below command, executed on any one of the nodes. Replace <virtual-ip> with a free IP address from your subnet.

[root@node1 ~]# pcs resource create vip_res IPaddr2 ip=<virtual-ip> cidr_netmask=24 --group webgroup
[root@node1 ~]#

Create the Apache cluster resource using the below command. The status URL should match the /server-status location configured above (the loopback URL is assumed here):

[root@node1 ~]# pcs resource create apache_res apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group webgroup
[root@node1 ~]#

Verify the cluster status:

[root@node1 ~]# pcs status


Use the ‘df‘ and ‘ip addr‘ commands to verify the file system and IP address failover.
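One way to rehearse a failover (assuming both nodes are healthy) is to put the active node into standby, watch the ‘webgroup’ resources move, and then bring the node back:

```shell
# Drain node1; Pacemaker moves the webgroup resources to node2
pcs cluster standby node1
pcs status resources        # webgroup should now show 'Started node2'
# Bring node1 back as an eligible host; whether resources move back
# depends on the resource-stickiness setting
pcs cluster unstandby node1
```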

Access your website using the VIP.


Pacemaker GUI :

The Pacemaker GUI (the pcsd web UI) can be accessed from a web browser using the VIP, over https on port 2224.

Use the user name ‘hacluster’ and the password that we set in the step above.

Add the existing cluster nodes.




The installation and configuration of Pacemaker is now complete. I hope you enjoyed the steps. Please share your valuable feedback & comments.
