RAID Mirroring consists of an exact copy (or mirror) of a set of data on two or more disks; a classic RAID 1 mirrored pair contains two disks. This configuration offers no parity, striping, or spanning of disk space across multiple disks, since the data is mirrored on all disks belonging to the array, and the array can only be as big as the smallest member disk. This layout is useful when read performance or reliability is more important than write performance or the resulting data storage capacity.
The array will continue to operate so long as at least one member drive is operational.
Any read request can be serviced by any drive in the array; thus, depending on the nature of the I/O load, the random read performance of a RAID 1 array can approach the sum of each member's performance, while write performance remains at the level of a single disk. Moreover, if disks with different speeds are used in a RAID 1 array, overall write performance is limited by the slowest disk.
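As a rough, optional illustration of that read behaviour (not part of the original walk-through; the device name assumes the /dev/md0 array built later in this guide), two concurrent direct reads from different offsets of the mirror are often served by different member disks, while a single sequential read runs at roughly single-disk speed:
[root@node1 ~]# dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=0 iflag=direct &
[root@node1 ~]# dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=4096 iflag=direct &
[root@node1 ~]# wait
Treat the resulting throughput figures only as a sanity check of the mirror's read balancing, not as a benchmark.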
- My Server Setup.
We will set up Disk 1 and Disk 2 as a software RAID 1 device. The server used throughout this guide is as follows:
HostName        : node1.server.lab
IP Address      : 10.0.6.30
Disk 0 [300 GB] : /dev/sda
Disk 1 [  1 TB] : /dev/sdb
Disk 2 [  1 TB] : /dev/sdc
- Drive Partitioning for RAID.
We need at least two partitions, one on /dev/sdb and one on /dev/sdc, to create the RAID 1 array. Let's create the partitions on these two drives using the ‘parted‘ command and set the partition type to raid during creation.
[root@node1 ~]# parted --script /dev/sdb "mklabel gpt"
[root@node1 ~]# parted --script /dev/sdc "mklabel gpt"
[root@node1 ~]# parted --script /dev/sdb "mkpart primary 0% 100%"
[root@node1 ~]# parted --script /dev/sdc "mkpart primary 0% 100%"
[root@node1 ~]# parted --script /dev/sdb "set 1 raid on"
[root@node1 ~]# parted --script /dev/sdc "set 1 raid on"
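Optionally, print the partition table of each disk to confirm the GPT label, the single partition, and the raid flag before continuing (output omitted here):
[root@node1 ~]# parted --script /dev/sdb "print"
[root@node1 ~]# parted --script /dev/sdc "print"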
- Install software.
We're using the mdadm utility to create and manage RAID in Linux, so let's install the mdadm software package.
[root@node1 ~]# yum -y install mdadm
Print contents of the metadata stored on the named device(s).
[root@node1 ~]# mdadm -E /dev/sd[b-c]
/dev/sdb:
MBR Magic : aa55
Partition[0] : 1953525167 sectors at 1 (type ee)
/dev/sdc:
MBR Magic : aa55
Partition[0] : 1953525167 sectors at 1 (type ee)
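As a quick optional check (not in the original guide), lsblk confirms that each disk now carries a single partition of the expected size before the array is assembled:
[root@node1 ~]# lsblk -o NAME,SIZE,TYPE /dev/sdb /dev/sdc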
- Creating RAID1 Device.
Create a RAID 1 device called ‘/dev/md0‘ using the following command and verify it:
[root@node1 ~]# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1
mdadm: /dev/sdb1 appears to contain an ext2fs file system
size=524276K mtime=Sat Feb 2 12:31:18 2013
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Watch the initial resync progress; once it completes, both members will show as [UU]:
[root@node1 ~]# watch -n 1 cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
976629760 blocks super 1.2 [2/2] [UU]
[>....................] resync = 1.9% (19027072/976629760) finish=118.8min speed=134332K/sec
bitmap: 8/8 pages [32KB], 65536KB chunk
unused devices: <none>
[root@node1 ~]# watch -n 1 cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
976629760 blocks super 1.2 [2/2] [UU]
bitmap: 0/8 pages [0KB], 65536KB chunk
unused devices: <none>
Add the RAID devices that you'd like the periodic raid-check cron job to verify.
[root@node1 ~]# grep -q 'CHECK_DEVS=""' /etc/sysconfig/raid-check && sed -i 's|^\(CHECK_DEVS=\).*|\1"md0"|' /etc/sysconfig/raid-check
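To confirm the change, and optionally trigger a consistency check immediately instead of waiting for the cron job, the md sysfs interface can be used (the path below assumes the array is named md0, as in this guide); on a healthy mirror mismatch_cnt should normally report 0:
[root@node1 ~]# grep '^CHECK_DEVS' /etc/sysconfig/raid-check
CHECK_DEVS="md0"
[root@node1 ~]# echo check > /sys/block/md0/md/sync_action
[root@node1 ~]# cat /sys/block/md0/md/mismatch_cnt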
Check the RAID device metadata and the array details using the following commands.
[root@node1 ~]# mdadm -E /dev/sd[b-c]1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : b77ce785:e0a0ca8a:2cb82327:33872cf3
           Name : node1.server.lab  (local to host node1.server.lab)
  Creation Time : Sun *** * 12:20:16 20**
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
     Array Size : 976629760 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : 74fa6b02:ab8055e9:266f9666:4d4dc81d
Internal Bitmap : 8 sectors from superblock
    Update Time : Sun *** * 14:52:23 20**
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 2488d56 - correct
         Events : 1851
    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : b77ce785:e0a0ca8a:2cb82327:33872cf3
           Name : node1.server.lab  (local to host node1.server.lab)
  Creation Time : Sun *** * 12:20:16 20**
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 1953259520 (931.39 GiB 1000.07 GB)
     Array Size : 976629760 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : b4d96c0c:ecc38869:3bbc7b86:36ece61b
Internal Bitmap : 8 sectors from superblock
    Update Time : Sun *** * 14:52:23 20**
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : aa809bd5 - correct
         Events : 1851
    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
[root@node1 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun *** * 12:20:16 20**
     Raid Level : raid1
     Array Size : 976629760 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Sun *** * 14:52:23 20**
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           Name : node1.server.lab  (local to host node1.server.lab)
           UUID : b77ce785:e0a0ca8a:2cb82327:33872cf3
         Events : 1851
- Creating File System on RAID Device.
5.1 Creating Physical Volume
Use the pvcreate command to initialize a block device to be used as a physical volume. Initialization is analogous to formatting a file system.
[root@node1 ~]# pvcreate /dev/md0
The pvscan command scans all supported LVM block devices in the system for physical volumes.
[root@node1 ~]# pvscan | grep md0
PV /dev/md0 lvm2 [931.38 GiB]
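For more detail than pvscan shows (PV size, allocatable space, and so on), pvdisplay can optionally be run against the new physical volume:
[root@node1 ~]# pvdisplay /dev/md0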
5.2 Creating Volume Groups
To create a volume group from one or more physical volumes, use the vgcreate command. The vgcreate command creates a new volume group by name and adds at least one physical volume to it.
[root@node1 ~]# vgcreate vg_md0 /dev/md0
Volume group "vg_md0" successfully created
The vgscan command scans all supported disk devices in the system looking for LVM physical volumes and volume groups. This builds the LVM cache in the /etc/lvm/.cache file, which maintains a listing of current LVM devices.
[root@node1 ~]# vgscan | grep md0
Found volume group "vg_md0" using metadata type lvm2
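Similarly, vgdisplay reports the volume group's extent size and free space, which is useful to review before creating logical volumes (optional):
[root@node1 ~]# vgdisplay vg_md0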
5.3 Creating Logical Volumes
Use the -l argument of the lvcreate command to specify the size of the logical volume as a percentage of the remaining free space in the volume group. The following command creates a logical volume called storage that uses all of the unallocated space in the volume group vg_md0.
[root@node1 ~]# lvcreate -l 100%FREE -n storage vg_md0
WARNING: xfs signature detected on /dev/vg_md0/storage at offset 0. Wipe it? [y/n]: y
Wiping xfs signature on /dev/vg_md0/storage.
Logical volume "storage" created
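If you prefer a fixed-size logical volume instead of a percentage of free space, the -L option of lvcreate takes an explicit size; the 100G value below is purely illustrative and not part of this setup:
[root@node1 ~]# lvcreate -L 100G -n storage vg_md0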
The lvscan command scans for all logical volumes in the system and lists them, as in the following example.
[root@node1 ~]# lvscan | grep md0
ACTIVE            '/dev/vg_md0/storage' [931.38 GiB] inherit
5.4 Creating the File System
Create an XFS file system on /dev/vg_md0/storage. XFS lets you describe the underlying RAID geometry to the file system so that it can align reads and writes with it. Two parameters are used when creating and mounting XFS: sunit, the stripe unit size in 512-byte sectors, and swidth, the stripe width (sunit multiplied by the number of data drives).
[root@node1 ~]# mkfs.xfs -d sunit=512,swidth=1024 /dev/vg_md0/storage
Mount the newly created file system under /vm/storage-1TB, then create some files and verify the contents under the mount point.
[root@node1 ~]# mkdir -p /vm/storage-1TB
[root@node1 ~]# restorecon -R /vm/storage-1TB
[root@node1 ~]# mount -o sunit=512,swidth=1024,noatime,nodiratime,allocsize=64m /dev/vg_md0/storage /vm/storage-1TB
[root@node1 ~]# touch /vm/storage-1TB/README.txt
[root@node1 ~]# echo -e "RAID disk Size : 1TB\nRAID Level : raid1" >> /vm/storage-1TB/README.txt
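As an optional verification once the file system is mounted, read back the file we just wrote and inspect the XFS geometry of the mount point; xfs_info reports sunit/swidth in file-system blocks, so the values correspond to (but are not numerically identical to) the 512-byte-sector figures passed to mkfs.xfs:
[root@node1 ~]# cat /vm/storage-1TB/README.txt
RAID disk Size : 1TB
RAID Level : raid1
[root@node1 ~]# xfs_info /vm/storage-1TB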
To auto-mount the RAID 1 file system at boot, you need to make an entry in the fstab file.
[root@node1 ~]# grep "/vm/storage-1TB" /etc/mtab | wc -l; then grep "/vm/storage-1TB" /etc/mtab >> /etc/fstab; fi
1
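Alternatively, if you'd rather not copy the mtab line, an equivalent fstab entry can be written by hand; this sketch assumes the same mount options used above:
[root@node1 ~]# echo "/dev/vg_md0/storage /vm/storage-1TB xfs sunit=512,swidth=1024,noatime,nodiratime,allocsize=64m 0 0" >> /etc/fstab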
Run ‘mount -av‘ to check whether there are any errors in the fstab entry.
[root@node1 ~]# mount -av
/               : ignored
/boot           : already mounted
/home           : already mounted
swap            : ignored
/vm/storage-1TB : already mounted
Next, save the RAID configuration manually to the ‘mdadm.conf‘ file using the command below.
[root@node1 ~]# mdadm --detail --scan --verbose >> /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=node1.server.lab:0 UUID=b77ce785:e0a0ca8a:2cb82327:33872cf3
devices=/dev/sdb1,/dev/sdc1
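On dracut-based distributions such as CentOS/RHEL it can also be worth regenerating the initramfs for the running kernel so that early boot assembles the array using the freshly written mdadm.conf; this step is optional and not part of the original procedure:
[root@node1 ~]# dracut -f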