This article contains additional information on commands used in breaking, repairing, and monitoring RAID arrays on 85xx/88xx Branch Repeater appliances.
Displaying the Current Status of the RAID Arrays
Description
This command displays the current state of all the RAID arrays defined on the Branch Repeater appliance.
Command
cat /proc/mdstat
Example Output in a Normal Condition
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
      40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
      8385856 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
      194563072 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
unused devices: <none>
Example Output in a Failed Condition
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[2](F)
      40957632 blocks [2/1] [_U]
md1 : active raid1 sdb3[1]
      8385856 blocks [2/1] [_U]
md3 : active raid1 sdb5[1] sda5[2](F)
      194563072 blocks [2/1] [_U]
md0 : active raid1 sdb1[1] sda1[0](F)
      104320 blocks [2/1] [_U]
unused devices: <none>
Notes
Note the following lines in the preceding output:
(F): This flag marks a disk partition that has failed, for example sda2[2](F).
[2/1] [_U]: Only one of the two mirrored devices is active; the underscore marks the missing member, so the array is degraded.
md1 : active raid1 sdb3[1]: The failed member no longer appears in this array at all.
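Because an underscore appears inside the status brackets only when a mirror member is missing, a degraded array can also be detected from a script. The following is a minimal sketch; the grep pattern assumes the standard mdstat layout shown above.

#!/bin/sh
# Minimal health check: in the [UU] field each character represents one
# mirror member, so any underscore means a failed or missing device.
if grep -q '\[U*_U*\]' /proc/mdstat; then
    echo "WARNING: one or more RAID arrays are degraded"
else
    echo "All RAID arrays are healthy"
fi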
Displaying Detailed Information on a RAID Array
Description
This command provides detailed information on an array defined on the Branch Repeater appliance. It shows the current state of the array, including the disk partitions that are part of the array. If the state is clean, the array does not need to be repaired. If the state is clean, degraded, the array must be repaired.
Syntax
mdadm -Q -D /dev/md[0-n]
Example
mdadm -Q -D /dev/md0
Example Output
[root@hostname ~]# mdadm -Q -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Fri Nov 16 02:06:28 2007
     Raid Level : raid1
     Array Size : 104320 (101.88 MiB 106.82 MB)
    Device Size : 104320 (101.88 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Sep 12 14:31:09 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

           UUID : 5613704f:c2453154:62dc1086:12b719c8
         Events : 0.28
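The State line lends itself to scripting when the arrays must be checked repeatedly. The following sketch extracts the state and decides whether the array needs attention; the sed expression assumes the output layout shown above, and /dev/md0 is only an example.

#!/bin/sh
# Hypothetical helper: report whether an array needs repair, based on the
# "State :" line of the mdadm -Q -D output.
state=$(mdadm -Q -D /dev/md0 | sed -n 's/^ *State : //p')
case "$state" in
    *degraded*) echo "/dev/md0 is degraded and must be repaired" ;;
    clean)      echo "/dev/md0 is healthy" ;;
    *)          echo "/dev/md0 state: $state" ;;
esac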
Special Notes on Drive State/Status
In the device table at the end of the preceding output, the State column reports the status of each member partition: active sync indicates a healthy, synchronized member, while a failed member is reported as faulty.
Forcing a RAID Failure on an Array
Description
This command forces a RAID failure on a specific array. It is most useful when working in a lab environment and experimenting to understand the functionality of the various commands.
Syntax
mdadm /dev/md[0-n] -f /dev/sda[1-n]
Example
mdadm /dev/md0 -f /dev/sda1
Example Output
The following sample output shows a forced failure of the /dev/sda1 partition in the /dev/md0 array.
[root@hostname ~]# mdadm /dev/md0 -f /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0
[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
      40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
      8385856 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
      194563072 blocks [2/2] [UU]
md0 : active raid1 sda1[2](F) sdb1[1]
      104320 blocks [2/1] [_U]
unused devices: <none>
Notes
Note the following lines in the preceding output:
mdadm: set /dev/sda1 faulty in /dev/md0: The command confirms the forced failure.
md0 : active raid1 sda1[2](F) sdb1[1]: The (F) flag shows that /dev/sda1 is now marked as failed, and 104320 blocks [2/1] [_U] shows that md0 is running degraded on /dev/sdb1 alone.
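When experimenting in a lab, the forced failure can be confirmed immediately. A short sketch, assuming the same md0/sda1 pair used in the example above:

# Force the failure, then confirm that the member is reported as faulty
# in the detailed array information.
mdadm /dev/md0 -f /dev/sda1
mdadm -Q -D /dev/md0 | grep -i faulty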
Removing a Disk Partition from a degraded RAID Array
Description
This command removes a failed device from a specific array. It is useful when one of the disk partitions is in a failed condition. In most cases, you must issue this command before adding the failed disk partition back into a degraded array.
Syntax
mdadm /dev/md[0-n] -r /dev/sda[1-n]
Example
mdadm /dev/md0 -r /dev/sda1
Example Output
The following sample output shows the removal of the failed partition /dev/sda1 from the /dev/md0 array.
[root@hostname ~]# mdadm /dev/md0 -r /dev/sda1
mdadm: hot removed /dev/sda1
[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
      40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
      8385856 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
      194563072 blocks [2/2] [UU]
md0 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]
unused devices: <none>
Notes
Note the following lines in the preceding output:
mdadm: hot removed /dev/sda1: The failed partition has been removed from the array.
md0 : active raid1 sdb1[1]: /dev/sda1 no longer appears as a member of md0; the array remains degraded ([2/1] [_U]) until the partition is added back.
Adding a Disk Partition to a degraded RAID Array
Description
This command adds a device to an existing active array. Adding the device should automatically trigger the array to rebuild itself with the added disk partition. Issue this command once for each failed array.
Syntax
mdadm /dev/md[0-n] -a /dev/sda[1-n]
Example
mdadm /dev/md0 -a /dev/sda1
Example Output
The following sample output shows the addition of the failed partition /dev/sda1 to the /dev/md0 array.
[root@hostname ~]# mdadm /dev/md0 -a /dev/sda1
mdadm: hot added /dev/sda1
[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
      40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
      8385856 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
      194563072 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/1] [_U]
      [===>.................] recovery = 16.7% (32671872/194563072) finish=84.8min speed=31789K/sec
unused devices: <none>
Notes
After recovery is complete, the following output is displayed when the cat /proc/mdstat command is issued.
[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
      40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
      8385856 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
      194563072 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]
unused devices: <none>
Note: The line 104320 blocks [2/2] [UU] shows that the array has returned to a normal state, as indicated by [2/2] [UU].
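To follow a rebuild without reissuing the command by hand, the status can be redisplayed automatically. For example, assuming the standard watch utility is available on the appliance:

# Redisplay the RAID status every five seconds; press Ctrl-C to stop.
watch -n 5 cat /proc/mdstat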
Rebuilding Multiple degraded RAID Arrays
Description
Run the mdadm command that adds a failed disk partition back into an array once for each degraded array; the rebuild process for that array begins immediately. Because multiple arrays are defined, run the command for each array, one after another. Only one array is rebuilt at any time; all others are placed into a queue. Any array that is queued for rebuild shows the resync=DELAYED notation in the output of the cat /proc/mdstat command. Each array is rebuilt, one by one, until all are completely rebuilt.
Following is a sample output of rebuilding three of four failed arrays on an 8520 Branch Repeater appliance, using the commands reviewed in this article:
[root@hostname ~]# mdadm /dev/md3 -r /dev/sda5
mdadm: hot removed /dev/sda5
[root@hostname ~]# mdadm /dev/md3 -a /dev/sda5
mdadm: hot added /dev/sda5
[root@hostname ~]# mdadm /dev/md1 -r /dev/sda3
mdadm: hot removed /dev/sda3
[root@hostname ~]# mdadm /dev/md1 -a /dev/sda3
mdadm: hot added /dev/sda3
[root@hostname ~]# mdadm /dev/md2 -r /dev/sda2
mdadm: hot removed /dev/sda2
[root@hostname ~]# mdadm /dev/md2 -a /dev/sda2
mdadm: hot added /dev/sda2
[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda2[2] sdb2[1]
      40957632 blocks [2/1] [_U]
        resync=DELAYED
md1 : active raid1 sda3[2] sdb3[1]
      8385856 blocks [2/1] [_U]
        resync=DELAYED
md3 : active raid1 sda5[2] sdb5[1]
      194563072 blocks [2/1] [_U]
      [>....................] recovery = 1.9% (3757184/194563072) finish=102.4min speed=31031K/sec
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]   <-- This array is in a normal state
unused devices: <none>
Notes
Note the following lines in the preceding output:
resync=DELAYED: The md2 and md1 arrays are queued and rebuild only after the current recovery finishes.
recovery = 1.9%: The md3 array is the one currently being rebuilt.
104320 blocks [2/2] [UU] (md0): This array has already returned to a normal state.
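Because the remove and add steps are identical for every degraded array, the whole repair sequence can be scripted. The following is a minimal sketch; the partition-to-array mapping is taken from the example layout above and must be adjusted to match the actual appliance.

#!/bin/sh
# Hypothetical repair loop for the example layout (sda1->md0, sda3->md1,
# sda2->md2, sda5->md3). mdadm refuses to hot remove a healthy, active
# member, so arrays that are already in a normal state are left untouched.
for pair in md0:sda1 md1:sda3 md2:sda2 md3:sda5; do
    md=${pair%%:*}     # array name, e.g. md0
    part=${pair##*:}   # member partition, e.g. sda1
    mdadm /dev/$md -r /dev/$part && mdadm /dev/$md -a /dev/$part
done
cat /proc/mdstat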