DRBD Installation & Configuration on Ubuntu
We will start our DRBD installation from where the Heartbeat setup finished.
DRBD can also be installed on its own, independent of Heartbeat. For now, a quick review of what we have:
Two (2) Ubuntu nodes
Each node has its own static IP address (192.168.1.151, 192.168.1.152)
Different hostnames (node1.hatest.com, node2.hatest.com)
Heartbeat installed with an alias IP (192.168.1.150)
Both nodes have a secondary hard disk of 15 GB.
(This disk is required specifically for DRBD. A single unused partition can also be used to configure DRBD.)
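Before starting, we can confirm that the secondary disk is visible on both nodes, for example with lsblk (in our setup it shows up as /dev/sdb, the device used later in the configuration):
root@node1:/home/sharif# lsblk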
Let’s start. First, install the DRBD package on both nodes.
root@node1:/home/sharif # apt install drbd8-utils
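The same package is installed on Node2 as well:
root@node2:/home/sharif# apt install drbd8-utils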
After installation, we can check the DRBD status. We will see something like the following:
root@node1:/home/sharif# /etc/init.d/drbd status
● drbd.service - LSB: Control drbd resources.
Loaded: loaded (/etc/init.d/drbd; bad; vendor preset: enabled)
Active: inactive (dead)
Docs: man:systemd-sysv-generator(8)
Restart the DRBD service.
root@node1:/home/sharif# /etc/init.d/drbd stop
[ ok ] Stopping drbd (via systemctl): drbd.service.
root@node1:/home/sharif# /etc/init.d/drbd start
[ ok ] Starting drbd (via systemctl): drbd.service.
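Since the init script here is just a wrapper around systemd (as the output above shows), the restart can also be done directly with systemctl if preferred:
root@node1:/home/sharif# systemctl restart drbd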
Now check the status again.
root@node1:/home/sharif# /etc/init.d/drbd status
● drbd.service - LSB: Control drbd resources.
Loaded: loaded (/etc/init.d/drbd; bad; vendor preset: enabled)
Active: active (exited) since Thu 2018-06-14 18:22:55 +06; 17s ago
Docs: man:systemd-sysv-generator(8)
Process: 33089 ExecStart=/etc/init.d/drbd start (code=exited, status=0/SUCCESS)
Jun 14 18:22:55 node1.hatest.com systemd[1]: Starting LSB: Control drbd resources....
Jun 14 18:22:55 node1.hatest.com drbd[33089]: * Starting DRBD resources
Jun 14 18:22:55 node1.hatest.com drbd[33089]: no resources defined!
Jun 14 18:22:55 node1.hatest.com drbd[33089]: no resources defined!
Jun 14 18:22:55 node1.hatest.com drbd[33089]: WARN: stdin/stdout is not a TTY; using /dev/consoleWARN: stdin/stdout is not a TTY; using /dev/consoleno res...s defined!
Jun 14 18:22:55 node1.hatest.com drbd[33089]: ...done.
Jun 14 18:22:55 node1.hatest.com systemd[1]: Started LSB: Control drbd resources..
Hint: Some lines were ellipsized, use -l to show in full.
We can also check the installed DRBD version.
root@node1:/home/sharif# apt list --installed | grep drbd
OR
root@node1:/home/sharif# cat /proc/drbd
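Alternatively, the installed package version can be checked with dpkg, a common equivalent of the apt list command above:
root@node1:/home/sharif# dpkg -l | grep drbd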
To upgrade from DRBD 8 to DRBD 9, we will follow the process below. We can upgrade now or at a later time.
root@node1:/home/sharif# add-apt-repository ppa:linbit/linbit-drbd9-stack
This ppa contains DRBD9, drbd-utils, DRBD Manage, and drbdmanage-docker-volume.
This differs from official, production grade LINBIT repositories in several ways, including:
- We push RCs immediately to the PPA
- We don't push hotfixes, these usually have to wait until the next RC/release
- We only keep 2 LTS versions up to date (xenial and bionic, but not trusty)
For support and access to official repositories see:
https://www.linbit.com or write an email to: sales AT linbit.com
More info: https://launchpad.net/~linbit/+archive/ubuntu/linbit-drbd9-stack
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmpq6ifinsz/secring.gpg' created
gpg: keyring `/tmp/tmpq6ifinsz/pubring.gpg' created
gpg: requesting key CEAA9512 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpq6ifinsz/trustdb.gpg: trustdb created
gpg: key CEAA9512: public key "Launchpad PPA for LINBIT" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
root@node1:/home/sharif# apt update
root@node1:/home/sharif# apt upgrade
Check the DRBD version status again.
root@node1:/home/sharif# apt list --installed | grep drbd
OR
root@node1:/home/sharif# cat /proc/drbd
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
drbd-utils/xenial,now 9.4.0-1ppa1~xenial1 amd64 [installed,automatic]
drbd8-utils/xenial,now 2:9.4.0-1ppa1~xenial1 amd64 [installed]
We will get three configuration components after the installation:
/etc/drbd.conf
/etc/drbd.d/ (directory)
/etc/drbd.d/global_common.conf
We can either edit the drbd.conf file or add another file in the /etc/drbd.d/ directory, named anything with a .res extension (i.e. *.res, e.g. xyz.res). [on both nodes]
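If we prefer the separate-file approach, the resource section shown in the example below could instead be saved into its own file, for example (hypothetical filename):
root@node1:/home/sharif# vim /etc/drbd.d/disk1.res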
Example configuration file
root@node1:/home/sharif# vim /etc/drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
resource disk1 {
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 30;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret sync_disk;
    }
    syncer {
        rate 200M;
        verify-alg sha1;
    }
    on node1.hatest.com {               # Node1 defined
        device /dev/drbd0;
        disk /dev/sdb;                  # Device to use with DRBD
        address 192.168.1.151:7789;     # IP address and port of Node1
        meta-disk internal;
    }
    on node2.hatest.com {               # Node2 defined
        device /dev/drbd0;
        disk /dev/sdb;                  # Device to use with DRBD
        address 192.168.1.152:7789;     # IP address and port of Node2
        meta-disk internal;
    }
}
Note: copy the lines from "resource disk1" down to the closing "}" into /etc/drbd.conf exactly as shown in the example.
Also, do not forget to adjust both nodes' hostnames, IP addresses and, if necessary, disk names.
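Since the same resource definition must exist on both nodes, the file can simply be copied over, for example with scp (assuming SSH access between the nodes):
root@node1:/etc# scp /etc/drbd.conf root@192.168.1.152:/etc/drbd.conf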
Restart the drbd service.
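As before, this is done via the init script on both nodes:
root@node1:/home/sharif# /etc/init.d/drbd stop
root@node1:/home/sharif# /etc/init.d/drbd start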
Create the DRBD metadata on both nodes.
root@node1:/etc# drbdadm create-md disk1
--== Thank you for participating in the global usage survey ==--
The server's response is:
you are the 1007th user to install this version
initializing activity log
initializing bitmap (480 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
Success
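The same metadata is created on Node2 (output omitted):
root@node2:/etc# drbdadm create-md disk1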
Restart the drbd service.
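Alternatively, instead of restarting the whole service, the single resource can be brought up with drbdadm on both nodes (disk1 is our resource name):
root@node1:/etc# drbdadm up disk1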
Now the DRBD service is up and running and both nodes are connected, but they will not sync and will remain Inconsistent, because it has not yet been decided which node is the primary and which is the secondary. As shown below:
root@node1:/etc# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: 4B3E2E2CD48CAE5280B5205
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:15728124
So we will do the following to resolve this (on Node1 only).
root@node1:/etc# drbdadm -- --overwrite-data-of-peer primary all
Now we have declared Node1 as the primary node; all other nodes remain/become secondary.
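If we only want to promote a single resource rather than all of them, the resource name can be given instead of all (disk1 in our case):
root@node1:/etc# drbdadm -- --overwrite-data-of-peer primary disk1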
Now check the status.
root@node1:/etc# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: 4B3E2E2CD48CAE5280B5205
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:164892 nr:0 dw:0 dr:170064 al:0 bm:0 lo:0 pe:2 ua:3 ap:0 ep:1
wo:f oos:15564284
[>....................] sync'ed: 1.1% (15196/15356)M
finish: 0:06:19 speed: 40,960 (40,960) K/sec
Here we can see that the local node (Node1) has become the primary node and is UpToDate, while the remote node (Node2) is the secondary node and still Inconsistent, as it is not fully synced yet. It is syncing, hence the cs (connection state) shows SyncSource. We can also see the progress bar.
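To follow the synchronization progress continuously, the status file can be polled with the standard watch utility, for example:
root@node1:/etc# watch -n1 cat /proc/drbd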
After the sync completes, the status will be:
root@node1:/etc# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: 4B3E2E2CD48CAE5280B5205
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:15728124 nr:0 dw:0 dr:15730252 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1
wo:f oos:0
Now we will format the DRBD device with a file system, as we normally do whenever we create a partition, so that we can access and use it.
root@node1:/etc# mkfs.ext4 /dev/drbd0 [Only at Node1]
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 3932031 4k blocks and 983040 inodes
Filesystem UUID: a92d7e7f-75d5-47d3-a366-ef35ed5f6de5
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Now we will create a directory where we will mount the drbd disk.
root@node1:/etc# mkdir /drbd [On both Node1 & Node2]
root@node1:/etc# mount /dev/drbd0 /drbd/ [Only Node1]
Now, we can check the disk status.
root@node1:/etc# df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 468M 0 468M 0% /dev
tmpfs tmpfs 98M 5.8M 92M 6% /run
/dev/mapper/nagios--vg-root ext4 8.3G 2.2G 5.7G 28% /
tmpfs tmpfs 488M 54M 434M 11% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 488M 0 488M 0% /sys/fs/cgroup
/dev/sda1 ext2 472M 154M 294M 35% /boot
tmpfs tmpfs 98M 0 98M 0% /run/user/1000
/dev/drbd0 ext4 15G 38M 14G 1% /drbd
root@node1:/etc#
Now we will create some files at the DRBD disk.
root@node1:/etc# touch /drbd/test{1,2,3,4,5}
root@node1:/etc#
root@node1:/etc# ls /drbd/
lost+found test1 test2 test3 test4 test5
To check the DRBD storage on Node2, we need to follow the steps below. We start on Node1 (demoting it), then make Node2 the primary and check the previously created files.
Unmount the DRBD disk.
root@node1:/etc# umount /dev/drbd0
Declare this node as secondary
root@node1:/etc# drbdadm secondary disk1
Declare Node2 as primary node and mount it.
root@node2:/etc/ha.d# drbdadm primary disk1
root@node2:/etc/ha.d# mount /dev/drbd0 /drbd/
root@node2:/etc/ha.d#
root@node2:/etc/ha.d# ls /drbd/
lost+found test1 test2 test3 test4 test5
root@node2:/etc/ha.d#
As we can see, those files are present on Node2 as well, so data replication is working.
Check the DRBD status at Node2.
root@node2:/etc/ha.d# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: 4B3E2E2CD48CAE5280B5205
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:4 nr:16109264 dw:16109268 dr:1729 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
root@node2:/etc/ha.d# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:disk1/0 Connected Primary/Secondary UpToDate/UpToDate /drbd ext4 15G 38M 14G 1%
Now make Node1 primary again and check the status.
Unmount /dev/drbd0 at Node2
root@node2:/etc/ha.d# umount /dev/drbd0
Declare the Node2 as Secondary.
root@node2:/etc/ha.d# drbdadm secondary disk1
Declare the Node1 as primary.
root@node1:/etc# drbdadm primary disk1
Mount the DRBD disk at Node1.
root@node1:/etc# mount /dev/drbd0 /drbd/
Check the status at Node1 & Node2.
root@node1:/etc# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: 4B3E2E2CD48CAE5280B5205
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:16109268 nr:8 dw:381152 dr:15734202 al:76 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
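The corresponding check on Node2 (where the resource should now show Secondary/Primary):
root@node2:/etc/ha.d# cat /proc/drbd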
In case a split-brain situation occurs between the DRBD-configured nodes, try the steps below to resolve it.
Run them on the faulty node (the node with less/outdated data), because after these steps all DRBD data on that node
will be erased and re-synced from the other node, the one holding the good/larger data set. So be careful here.
root@node1:/home/sharif# drbdadm secondary all
root@node1:/home/sharif# drbdadm -- --discard-my-data connect all
If you are unable to perform these commands and get a "resource busy" or "device in use" message,
try stopping all services that use the DRBD disk on this node (in our setup, the Zimbra services), then repeat the previous steps.
After the previous step, perform the commands below on the good node (the one containing the valid/larger data set).
root@node1:/home/sharif# drbdadm primary all
root@node1:/home/sharif# drbdadm connect all
Now check the DRBD nodes status on both nodes.
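For example, on each node:
root@node1:/home/sharif# cat /proc/drbd
root@node1:/home/sharif# drbd-overview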