Applies to SUSE Linux Enterprise Server 15 SP4
This section describes how to set up nested and complex RAID 10 devices. A RAID 10 device consists of nested RAID 1 (mirroring) and RAID 0 (striping) arrays. Nested RAIDs can be set up either as striped mirrors (RAID 1+0) or as mirrored stripes (RAID 0+1). A complex RAID 10 setup also combines mirrors and stripes, and provides additional data security by supporting a higher level of data redundancy.

9.1 Creating nested RAID 10 devices with mdadm

A nested RAID device consists of a RAID array that uses another RAID array as its basic element, instead of using physical disks. The goal of this configuration is to improve the performance and fault tolerance of the RAID. Setting up nested RAID levels is not supported by YaST, but can be done by using the mdadm command line tool. Based on the order of nesting, two different nested RAIDs can be set up. This document uses the following terminology:

RAID 1+0: RAID 1 (mirror) arrays are built first, then combined to form a RAID 0 (stripe) array.

RAID 0+1: RAID 0 (stripe) arrays are built first, then combined to form a RAID 1 (mirror) array.
The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus 0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O capability.

Table 9.1: Nested RAID levels
9.1.1 Creating nested RAID 10 (1+0) with mdadm

A nested RAID 1+0 is built by creating two or more RAID 1 (mirror) devices, then using them as component devices in a RAID 0. The procedure in this section uses the device names shown in the following table. Ensure that you replace them with the names of your own devices.

Table 9.2: Scenario for creating a RAID 10 (1+0) by nesting
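The nesting described above can be sketched with mdadm as follows. This is a minimal outline, assuming four partitions /dev/sdb1 through /dev/sde1 and md device names /dev/md0 to /dev/md2; substitute the devices from your own scenario:

```shell
# Create two RAID 1 (mirror) arrays from pairs of partitions.
# Device names are examples only.
mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

# Stripe the two mirrors together as a RAID 0 (the resulting RAID 1+0).
mdadm --create /dev/md2 --run --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1

# Create a file system on the nested array and check its status.
mkfs.xfs /dev/md2
cat /proc/mdstat
```

The mirrors are created first, so a single disk failure in either pair is absorbed at the RAID 1 level and the RAID 0 on top never sees it.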
9.1.2 Creating nested RAID 10 (0+1) with mdadm

A nested RAID 0+1 is built by creating two to four RAID 0 (striping) devices, then mirroring them as component devices in a RAID 1. In this configuration, spare devices cannot be specified for the underlying RAID 0 devices because RAID 0 cannot tolerate a device loss. If a device fails on one side of the mirror, you must create a replacement RAID 0 device, then add it into the mirror. The procedure in this section uses the device names shown in the following table. Ensure that you replace them with the names of your own devices.

Table 9.3: Scenario for creating a RAID 10 (0+1) by nesting
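The reverse nesting can be sketched the same way. Again a minimal outline with assumed device names (/dev/sdb1 through /dev/sde1, /dev/md0 to /dev/md2); adjust to your setup:

```shell
# Create two RAID 0 (stripe) arrays from pairs of partitions.
# Device names are examples only.
mdadm --create /dev/md0 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdd1 /dev/sde1

# Mirror the two stripe sets as a RAID 1 (the resulting RAID 0+1).
mdadm --create /dev/md2 --run --level=1 --raid-devices=2 /dev/md0 /dev/md1

# Check the status of the nested array.
cat /proc/mdstat
```

Note that if one underlying stripe set loses a disk, that whole RAID 0 side fails; you would build a replacement RAID 0 from new disks and re-add it to /dev/md2 with mdadm --add.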
9.2 Creating a complex RAID 10

Both YaST and mdadm support creating a complex RAID 10. The complex RAID 10 is similar in purpose to a nested RAID 10 (1+0), but differs in the following ways:

Table 9.4: Complex RAID 10 compared to nested RAID 10
9.2.1 Number of devices and replicas in the complex RAID 10

When configuring a complex RAID 10 array, you must specify the number of replicas of each data block that are required. The default number of replicas is two, but the value can range from two up to the number of devices in the array.

You must use at least as many component devices as the number of replicas you specify. However, the number of component devices in a RAID 10 array does not need to be a multiple of the number of replicas of each data block. The effective storage size is the number of devices divided by the number of replicas, multiplied by the size of a component device.

For example, if you specify two replicas for an array created with five component devices, a copy of each block is stored on two different devices. The effective storage size for one copy of all data is 5/2, or 2.5 times the size of a component device.

9.2.2 Layout

The complex RAID 10 setup supports three different layouts which define how the data blocks are arranged on the disks. The available layouts are near (the default), far, and offset. They have different performance characteristics, so it is important to choose the right layout for your workload.

9.2.2.1 Near layout

With the near layout, copies of a block of data are striped near each other on different component devices. That is, multiple copies of one data block are at similar offsets on different devices. Near is the default layout for RAID 10. For example, if you use an odd number of component devices and two copies of data, some copies are perhaps one chunk further into the device. The near layout for the complex RAID 10 yields read and write performance similar to RAID 0 over half the number of drives.
Near layout with an even number of disks and two replicas:

sda1  sdb1  sdc1  sde1
  0     0     1     1
  2     2     3     3
  4     4     5     5
  6     6     7     7
  8     8     9     9

Near layout with an odd number of disks and two replicas:

sda1  sdb1  sdc1  sde1  sdf1
  0     0     1     1     2
  2     3     3     4     4
  5     5     6     6     7
  7     8     8     9     9
 10    10    11    11    12

9.2.2.2 Far layout

The far layout stripes data over the early part of all drives, then stripes a second copy of the data over the later part of all drives, making sure that all copies of a block are on different drives. The second set of values starts halfway through the component drives.

With a far layout, the read performance of the complex RAID 10 is similar to a RAID 0 over the full number of drives, but write performance is substantially slower than a RAID 0 because there is more seeking of the drive heads. It is best used for read-intensive operations such as for read-only file servers. The speed of the RAID 10 for writing is similar to other mirrored RAID types, like RAID 1 and RAID 10 using the near layout, as the elevator of the file system schedules the writes in a more optimal way than raw writing. Using RAID 10 in the far layout is well suited for mirrored writing applications.

Far layout with an even number of disks and two replicas:

sda1  sdb1  sdc1  sde1
  0     1     2     3
  4     5     6     7
  . . .
  3     0     1     2
  7     4     5     6

Far layout with an odd number of disks and two replicas:

sda1  sdb1  sdc1  sde1  sdf1
  0     1     2     3     4
  5     6     7     8     9
  . . .
  4     0     1     2     3
  9     5     6     7     8

9.2.2.3 Offset layout

The offset layout duplicates stripes so that the multiple copies of a given chunk are laid out on consecutive drives and at consecutive offsets. Effectively, each stripe is duplicated and the copies are offset by one device. This should give similar read characteristics to a far layout if a suitably large chunk size is used, but without as much seeking for writes.
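The placement rules behind the three layouts can be sketched with a short Python model that maps logical chunk numbers to rows (offsets) and columns (disks) and reproduces the tables in this section. It is a simplified illustration of the placement idea with two replicas, not mdadm's actual on-disk arithmetic:

```python
# Toy model of the complex RAID 10 layouts (near, far, offset),
# two replicas per chunk. Rows are offsets, columns are disks.

def near(nchunks, ndisks):
    """Near layout: the two copies of each chunk sit side by side."""
    rows, pos = [], 0
    for chunk in range(nchunks):
        for _ in range(2):  # two replicas
            row, disk = divmod(pos, ndisks)
            if row == len(rows):
                rows.append([None] * ndisks)
            rows[row][disk] = chunk
            pos += 1
    return rows

def far(nchunks, ndisks):
    """Far layout: a second full copy, rotated by one disk, follows
    the first copy in the later half of the devices."""
    half = -(-nchunks // ndisks)  # rows occupied by one full copy
    rows = [[None] * ndisks for _ in range(2 * half)]
    for chunk in range(nchunks):
        row, disk = divmod(chunk, ndisks)
        rows[row][disk] = chunk                        # first copy
        rows[half + row][(disk + 1) % ndisks] = chunk  # rotated copy
    return rows

def offset(nchunks, ndisks):
    """Offset layout: each stripe is written twice in a row,
    the second time shifted by one disk."""
    nstripes = -(-nchunks // ndisks)
    rows = [[None] * ndisks for _ in range(2 * nstripes)]
    for chunk in range(nchunks):
        stripe, disk = divmod(chunk, ndisks)
        rows[2 * stripe][disk] = chunk
        rows[2 * stripe + 1][(disk + 1) % ndisks] = chunk
    return rows

for row in near(10, 4):  # reproduces the "near, even" table above
    print(row)
```

Comparing the three functions makes the trade-off visible: near keeps copies at similar offsets, far separates them by half the device, and offset staggers them stripe by stripe.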
Offset layout with an even number of disks and two replicas:

sda1  sdb1  sdc1  sde1
  0     1     2     3
  3     0     1     2
  4     5     6     7
  7     4     5     6
  8     9    10    11
 11     8     9    10

Offset layout with an odd number of disks and two replicas:

sda1  sdb1  sdc1  sde1  sdf1
  0     1     2     3     4
  4     0     1     2     3
  5     6     7     8     9
  9     5     6     7     8
 10    11    12    13    14
 14    10    11    12    13

9.2.2.4 Specifying the number of replicas and the layout with YaST and mdadm

The number of replicas and the layout is specified in YaST or with the --layout parameter for mdadm. The following values are accepted:
nN: Specify n for the near layout and replace N with the number of replicas. For example, n2 specifies the near layout with two replicas; this is the mdadm default.

fN: Specify f for the far layout and replace N with the number of replicas.

oN: Specify o for the offset layout and replace N with the number of replicas.

Note: Number of replicas
YaST automatically offers a selection of all possible values for the parameter.

9.2.3 Creating a complex RAID 10 with the YaST partitioner
9.2.4 Creating a complex RAID 10 with mdadm

The procedure in this section uses the device names shown in the following table. Ensure that you replace them with the names of your own devices.

Table 9.5: Scenario for creating a RAID 10 using mdadm
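A minimal sketch of the mdadm invocation, assuming four partitions /dev/sdf1 through /dev/sdi1 and the md device name /dev/md3 for illustration:

```shell
# Create a complex RAID 10 from four partitions with the default
# near layout and two replicas (--layout=n2); use f2 or o2 for the
# far or offset layouts instead. Device names are examples only.
mdadm --create /dev/md3 --run --level=10 --chunk=32 --layout=n2 \
      --raid-devices=4 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1

# Create a file system on the array and check its status.
mkfs.xfs /dev/md3
cat /proc/mdstat
```

Unlike the nested procedures above, a single mdadm command builds the whole array, and the number of devices does not need to be a multiple of the number of replicas.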