Software RAID on Ubuntu
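This walkthrough starts from an existing array. For context, a RAID 1 array like the /dev/md0 used below, with two active mirrors plus one hot spare (matching the mdadm --detail output later in this post), would typically have been created with something like the following sketch. The device names are taken from this post's example setup; adjust them for your own disks.

```shell
# Sketch only: create a RAID 1 array with two mirrors and one hot spare.
# /dev/sdb1, /dev/sdc1, /dev/sdd1 are the partitions used in this example.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1 \
      --spare-devices=1 /dev/sdd1
```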

 

Create a filesystem on the RAID device | Create a mount point | Mount the RAID | Verify

root@raid:/dev# mkfs.ext4 -F /dev/md0

mke2fs 1.42.13 (17-May-2015)

Creating filesystem with 523520 4k blocks and 131072 inodes

Filesystem UUID: ce503c86-aa4f-470c-b39c-f5e1002cf80e

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912


Allocating group tables: done

Writing inode tables: done

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done


root@raid:/# mkdir RAID

root@raid:/RAID# pwd

/RAID


root@raid:/# mount /dev/md0 /RAID

root@raid:/# df -hT

Filesystem     Type      Size  Used Avail Use% Mounted on

udev           devtmpfs  468M     0  468M   0% /dev

tmpfs          tmpfs      98M  5.9M   92M   6% /run

/dev/sda1      ext4      8.8G  1.6G  6.7G  20% /

tmpfs          tmpfs     488M     0  488M   0% /dev/shm

tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock

tmpfs          tmpfs     488M     0  488M   0% /sys/fs/cgroup

tmpfs          tmpfs      98M     0   98M   0% /run/user/1000

/dev/md0       ext4      2.0G  3.0M  1.9G   1% /RAID
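The mount above does not survive a reboot. An optional next step (a sketch, reusing the filesystem UUID printed by mkfs.ext4 earlier in this post) is to add an /etc/fstab entry so the array is mounted automatically:

```
# /etc/fstab entry for the RAID filesystem (UUID from the mkfs.ext4 output above)
UUID=ce503c86-aa4f-470c-b39c-f5e1002cf80e  /RAID  ext4  defaults,nofail  0  2
```

The nofail option keeps the system bootable even if the array is unavailable.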





Check the current RAID status

root@raid:/dev# mdadm --detail /dev/md0

/dev/md0:

        Version : 1.2

  Creation Time : Mon Feb 26 07:40:50 2018

     Raid Level : raid1

     Array Size : 2094080 (2045.34 MiB 2144.34 MB)

  Used Dev Size : 2094080 (2045.34 MiB 2144.34 MB)

   Raid Devices : 2

  Total Devices : 3

    Persistence : Superblock is persistent


    Update Time : Mon Feb 26 07:49:37 2018

          State : clean

 Active Devices : 2

Working Devices : 3

 Failed Devices : 0

  Spare Devices : 1


           Name : raid:0  (local to host raid)

           UUID : 6371541d:47ecb2cf:549b6986:69d90f65

         Events : 19


    Number   Major   Minor   RaidDevice State

       0       8       17        0      active sync   /dev/sdb1

       1       8       33        1      active sync   /dev/sdc1


       2       8       49        -      spare   /dev/sdd1





root@raid:/home/sharif# cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]

md0 : active raid1 sdb1[0] sdd1[2](S) sdc1[1]

      2094080 blocks super 1.2 [2/2] [UU]


unused devices: <none>
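In the mdstat line above, [2/2] means two of two raid devices are present and [UU] means both members are up; a failed member would show as [2/1] [U_]. While the array is syncing or rebuilding, a convenient way to follow progress (using the standard watch utility, not part of the original session) is:

```shell
# Refresh the RAID status every two seconds, highlighting changes
watch -d -n2 cat /proc/mdstat
```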





Fail one of the RAID devices

root@raid:/dev# mdadm --manage --fail /dev/md0 /dev/sdc1

mdadm: set /dev/sdc1 faulty in /dev/md0


[Now /dev/sdd1 should kick in and become the active RAID 1 partner of /dev/sdb1.

Let's check below.]








Check the RAID status again

root@raid:/dev# mdadm --detail /dev/md0

/dev/md0:

        Version : 1.2

  Creation Time : Mon Feb 26 07:40:50 2018

     Raid Level : raid1

     Array Size : 2094080 (2045.34 MiB 2144.34 MB)

  Used Dev Size : 2094080 (2045.34 MiB 2144.34 MB)

   Raid Devices : 2

  Total Devices : 3

    Persistence : Superblock is persistent


    Update Time : Mon Feb 26 08:16:37 2018

          State : clean, degraded, recovering

 Active Devices : 1

Working Devices : 2

 Failed Devices : 1

  Spare Devices : 1


 Rebuild Status : 81% complete


           Name : raid:0  (local to host raid)

           UUID : 6371541d:47ecb2cf:549b6986:69d90f65

         Events : 33


    Number   Major   Minor   RaidDevice State

       0       8       17        0      active sync   /dev/sdb1

       2       8       49        1      spare rebuilding   /dev/sdd1


       1       8       33        -      faulty   /dev/sdc1


[After a few moments the state of /dev/sdd1 will change from spare rebuilding to active sync.]
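If you need to wait for the rebuild to finish before proceeding (for example in a script), mdadm has a wait mode for exactly this; a sketch, not shown in the original session:

```shell
# Block until any resync/rebuild on /dev/md0 completes, then confirm the state
mdadm --wait /dev/md0
mdadm --detail /dev/md0 | grep 'State :'
```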



Remove the faulty drive, re-add it, and check the status again

root@raid:/dev# mdadm /dev/md0 -r /dev/sdc1

mdadm: hot removed /dev/sdc1 from /dev/md0


root@raid:/dev# mdadm /dev/md0 -a /dev/sdc1

mdadm: added /dev/sdc1




root@raid:/dev# mdadm --detail /dev/md0

/dev/md0:

        Version : 1.2

  Creation Time : Mon Feb 26 07:40:50 2018

     Raid Level : raid1

     Array Size : 2094080 (2045.34 MiB 2144.34 MB)

  Used Dev Size : 2094080 (2045.34 MiB 2144.34 MB)

   Raid Devices : 2

  Total Devices : 3

    Persistence : Superblock is persistent


    Update Time : Mon Feb 26 08:26:02 2018

          State : clean

 Active Devices : 2

Working Devices : 3

 Failed Devices : 0

  Spare Devices : 1


           Name : raid:0  (local to host raid)

           UUID : 6371541d:47ecb2cf:549b6986:69d90f65

         Events : 40


    Number   Major   Minor   RaidDevice State

       0       8       17        0      active sync   /dev/sdb1

       2       8       49        1      active sync   /dev/sdd1


       3       8       33        -      spare   /dev/sdc1
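On Ubuntu the array should also be recorded in mdadm.conf so that it is assembled under the same name at boot. A typical sequence (standard Ubuntu practice, not part of the original session):

```shell
# Append the array definition to mdadm's config and rebuild the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```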





