Repeater RAID Command Usage

Article ID: CTX131048

Description

This article contains additional information on commands used in breaking, repairing, and monitoring RAID arrays on 85xx/88xx Branch Repeater appliances.

Show Summary Status of Arrays

Description

This command displays the current state of all RAID arrays defined on the Branch Repeater appliance.

Command

cat /proc/mdstat

Example Output in a Normal Condition

Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
8385856 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
194563072 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]
unused devices: <none>

Example Output in a Failed Condition

Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[2](F) 
40957632 blocks [2/1] [_U] 
md1 : active raid1 sdb3[1]
8385856 blocks [2/1] [_U] 
md3 : active raid1 sdb5[1] sda5[2](F) 
194563072 blocks [2/1] [_U] 
md0 : active raid1 sdb1[1] sda1[0](F) 
104320 blocks [2/1] [_U] 
unused devices: <none>

Notes

Note the following lines in the preceding output:

  • In md2 : active raid1 sdb2[1] sda2[2](F), note the failed state of sda2 indicated by (F).
  • In 40957632 blocks [2/1] [_U], note the degraded state of the array indicated by [2/1] [_U]. The normal state is [2/2] [UU].
  • In md1 : active raid1 sdb3[1], note that no sda3[0] is listed; a healthy array lists two drives.
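
For a quick check of all arrays at once, the degraded [_U] marker can be tested directly. The following is a minimal sketch, not part of the appliance software; it assumes a standard shell with awk is available on the appliance. It prints any array whose status line shows a missing member:

awk '/^md/ { name = $1 }
     /blocks/ && /_/ { print name " is degraded: " $0 }' /proc/mdstat

On a healthy appliance this prints nothing; against the failed example above, it flags all four arrays.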

Show Detailed Status of an Array

Description

This command provides detailed information on an array defined on the Branch Repeater appliance. It shows the current state of the array, including the disk partitions that are part of the array. If the state is clean, the array does not need to be repaired. If the state is clean, degraded, the array must be repaired.

Syntax

mdadm -Q -D /dev/md[0-n]

Example

mdadm -Q -D /dev/md0

Example Output

[root@hostname ~]# mdadm -Q -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Nov 16 02:06:28 2007
Raid Level : raid1
Array Size : 104320 (101.88 MiB 106.82 MB)
Device Size : 104320 (101.88 MiB 106.82 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Sep 12 14:31:09 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Number   Major   Minor   RaidDevice   State
   0       8       1         0        active sync   /dev/sda1
   1       8      17         1        active sync   /dev/sdb1
UUID : 5613704f:c2453154:62dc1086:12b719c8
Events : 0.28

Special Notes on Drive State/Status

  • The drive state can be clean, dirty, failed, or missing.
  • The order of severity, from least to most severe, is clean, dirty, failed, missing.
  • A rebuild fixes the array if the state is dirty. It might fix the array if the state is failed or missing. If a Repeater appliance is restarted with a drive in a failed state, the drive might go into a dirty state.
  • Dirty implies that the two mirrors are out of sync and a rebuild has not been attempted. If a rebuild is attempted and fails, the drive is marked as failed or missing, depending on the severity of the cause. A dirty state can also occur if the System.MD* parameters are off in the parameters.php page; this means that the RAID status thread in the server is not running, and therefore an automatic rebuild is not attempted.
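
To read just the State field for every array without scanning the full output, a short loop over the arrays can be used. The following is a minimal sketch; the /dev/md0 through /dev/md3 device names follow the examples in this article and should be verified against /proc/mdstat on your appliance:

#!/bin/sh
# Print the State field (clean, clean, degraded, and so on) of each array.
for md in /dev/md0 /dev/md1 /dev/md2 /dev/md3; do
    state=$(mdadm -Q -D "$md" | sed -n 's/^ *State : *//p')
    echo "$md: $state"
done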

Forcing a RAID Array to Fail

Description

This command enables you to force a RAID failure on a specific array. It is most useful when working in a lab environment and experimenting to understand the functionality of the various commands.

Syntax

mdadm /dev/md[0-n] -f /dev/sda[1-n]

Example

mdadm /dev/md0 -f /dev/sda1

Example Output

The output displays a forced failure of the /dev/sda1 partition in the /dev/md0 array. The following is sample output.

[root@hostname ~]# mdadm /dev/md0 -f /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0
[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
8385856 blocks [2/2] [UU]

md3 : active raid1 sdb5[1] sda5[0]
194563072 blocks [2/2] [UU]
md0 : active raid1 sda1[2](F) sdb1[1]
104320 blocks [2/1] [_U] 
unused devices: <none>

Notes

Note the following lines in the preceding output:

  • In md0 : active raid1 sda1[2](F) sdb1[1], note the failed state of sda1 indicated by (F).
  • In 104320 blocks [2/1] [_U], note the degraded state of the array indicated by [_U]. The normal state is [UU].

Removing a Disk Partition from a Degraded RAID Array

Description

This command enables you to remove a failed device from a specific array. It is useful if one of the disk partitions is in a failed condition. In most cases, this command must be issued before adding the failed disk partition back into a degraded array.

Syntax

mdadm /dev/md[0-n] -r /dev/sda[1-n]

Example

mdadm /dev/md0 -r /dev/sda1

Example Output

The output displays the removal of the failed partition /dev/sda1 from the array /dev/md0. The following is sample output.

[root@hostname ~]# mdadm /dev/md0 -r /dev/sda1
mdadm: hot removed /dev/sda1
[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
8385856 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
194563072 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] 
104320 blocks [2/1] [_U]
unused devices: <none>

Notes

Note the following lines in the preceding output:

  • In md0 : active raid1 sdb1[1], note the absence of the sda1[2](F) notation.
  • In 104320 blocks [2/1] [_U], note that the array is still in degraded mode as shown by [2/1] [_U].

Adding a Disk Partition into a Degraded RAID Array

Description

This command enables you to add a device to an existing active array. Adding the device automatically triggers the array to rebuild itself with the added disk partition. Issue this command once for each failed array.

Syntax

mdadm /dev/md[0-n] -a /dev/sda[1-n]

Example

mdadm /dev/md0 -a /dev/sda1

Example Output

The output displays the addition of the failed partition /dev/sda1 into the array /dev/md0. The following is sample output.

[root@hostname ~]# mdadm /dev/md0 -a /dev/sda1
mdadm: hot added /dev/sda1
[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
8385856 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
194563072 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
104320 blocks [2/1] [_U]
[===>.................]  recovery = 16.7% (32671872/194563072) finish=84.8min speed=31789K/sec
unused devices: <none>

Notes

  • In the recovery = 16.7% (32671872/194563072) finish=84.8min speed=31789K/sec line, note that the array is being rebuilt.
  • In md0 : active raid1 sda1[0] sdb1[1], note the addition of sda1[0] back into the array.

After recovery is complete, issuing the cat /proc/mdstat command displays the following output.

[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb2[1] sda2[0]
40957632 blocks [2/2] [UU]
md1 : active raid1 sdb3[1] sda3[0]
8385856 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
194563072 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1] 
104320 blocks [2/2] [UU] 
unused devices: <none>

Note: In 104320 blocks [2/2] [UU], the array is in normal state, indicated by [2/2] [UU].

Running the Add Disk Partition Command on All Arrays at Once

Description

Run the mdadm command that adds a failed disk partition back into an array once for each failed array; the rebuild process for that array begins immediately. Because multiple arrays are defined, run the command for each array, one after another. Only one array is rebuilt at a time; all others are placed into a queue. Any array that is queued for rebuild shows the resync=DELAYED notation in the output of the cat /proc/mdstat command. Each array is rebuilt, one by one, until all are completely rebuilt.

Following is a sample output of rebuilding three of four failed arrays using the commands reviewed in this article on an 8520 Branch Repeater appliance:

[root@hostname ~]# mdadm /dev/md3 -r /dev/sda5
mdadm: hot removed /dev/sda5
[root@hostname ~]# mdadm /dev/md3 -a /dev/sda5
mdadm: hot added /dev/sda5
[root@hostname ~]# mdadm /dev/md1 -r /dev/sda3
mdadm: hot removed /dev/sda3
[root@hostname ~]# mdadm /dev/md1 -a /dev/sda3
mdadm: hot added /dev/sda3
[root@hostname ~]# mdadm /dev/md2 -r /dev/sda2
mdadm: hot removed /dev/sda2
[root@hostname ~]# mdadm /dev/md2 -a /dev/sda2
mdadm: hot added /dev/sda2
[root@hostname ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda2[2] sdb2[1]
40957632 blocks [2/1] [_U]
resync=DELAYED 
md1 : active raid1 sda3[2] sdb3[1]
8385856 blocks [2/1] [_U]
resync=DELAYED 
md3 : active raid1 sda5[2] sdb5[1]
194563072 blocks [2/1] [_U]
[>....................] recovery = 1.9% (3757184/194563072) finish=102.4min speed=31031K/sec 
md0 : active raid1 sda1[0] sdb1[1]
104320 blocks [2/2] [UU] <-- This array is in normal status
unused devices: <none>

Notes

Note the following lines in the preceding output:

  • resync=DELAYED, indicates that the array is queued for rebuild.
  • recovery = 1.9% (3757184/194563072) finish=102.4min speed=31031K/sec, indicates that this array is currently being rebuilt.
  • 104320 blocks [2/2] [UU], indicates that this array is in the normal state. 
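
The remove-and-add sequence shown above can also be scripted. The following is a minimal sketch of the same commands; the array-to-partition pairs match this 8520 example and must be verified against /proc/mdstat on your appliance before running:

#!/bin/sh
# Re-add the failed sda partitions, one array at a time.
# Pairs below follow the 8520 example in this article; verify them first.
for pair in "md3 sda5" "md1 sda3" "md2 sda2"; do
    set -- $pair                  # $1 = array, $2 = partition
    mdadm "/dev/$1" -r "/dev/$2"  # hot-remove the failed member
    mdadm "/dev/$1" -a "/dev/$2"  # hot-add it back; rebuild starts or queues
done
cat /proc/mdstat                  # queued arrays show resync=DELAYED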

Additional Resources

Linux Software RAID
