There is a CentOS 6 system on a 1 TB disk. I need to add a second 1 TB disk and turn the two disks into a RAID 1 array without taking the system down for long. What is the best way to do this? They say it can be done on the fly from the current disk, but it seems to me that re-partitioning the disk for RAID 1 will destroy the current disk layout. Is this true, and if so, which way is better? Thanks.

  • Is the system, by any chance, on LVM? - Fine
  • It is on LVM, as it happens. - Mihail Politaev

1 answer

So, you are lucky: the system is on LVM, which greatly simplifies the procedure and lets you migrate to RAID without any downtime.

There is only one small "but": you need to find 1-2 megabytes of unallocated space on the PV, because that space is needed for the RAID superblock.

Run (everything as root):

 pvdisplay 

If there is at least one Free PE, everything should work out. In theory it should also work if there is an unusable area of at least 1-2 megabytes, given disks of equal capacity. Suppose the space is there, and proceed to the migration.
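A compact way to see the free space on each PV, as a sketch (these are standard pvs columns):

 pvs -o pv_name,vg_name,pv_size,pv_free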

First, using your favourite partitioning utility (any one: fdisk, parted, whatever), create the desired partition layout on the new disk, for example the same as on the existing disk. Suppose it is /dev/sdb1 for /boot and /dev/sdb2 for the data.
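For instance, one way to clone the partition table to the new disk is a sketch like this; it assumes MBR partitioning (not GPT), that /dev/sdb is the new empty disk, and the --change-id spelling may differ between sfdisk versions:

 # copy the partition table from the existing disk to the new one
 sfdisk -d /dev/sda | sfdisk /dev/sdb
 # optionally mark the new partitions as "Linux raid autodetect" (type fd)
 sfdisk --change-id /dev/sdb 1 fd
 sfdisk --change-id /dev/sdb 2 fd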

Next, create a pair of raid1 arrays, initially in degraded form:

 mdadm --create /dev/md0 -l 1 -n 2 missing /dev/sdb1
 mdadm --create /dev/md1 -l 1 -n 2 missing /dev/sdb2
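You can check that both degraded arrays came up with the usual tools:

 cat /proc/mdstat
 mdadm --detail /dev/md0
 mdadm --detail /dev/md1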

(In principle, grub may already be able to boot from LVM on top of md; I have not tried it in a long time.)

Then initialize the array intended for the data as a new LVM physical volume:

 pvcreate /dev/md1

Extend the volume group onto the new array:

 vgextend volume_group_name /dev/md1

The volume group name is the VG Name shown by pvdisplay.
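If in doubt, the name can also be listed directly, for example:

 vgs -o vg_name,pv_count,vg_size,vg_free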

Now the main trick: move the data from the existing single disk to the new array:

 pvmove /dev/sda2 /dev/md1

It is worth running this in screen or tmux: it takes a very long time, works at the block level, and will copy the entire Allocated PE capacity. It does, however, report the percentage of completion.
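One possible way to run it, as a sketch (the -i flag only sets the progress-report interval in seconds):

 # start a detachable session so the copy survives a dropped SSH connection
 screen -S pvmove
 # move the extents, printing progress every 60 seconds
 pvmove -i 60 /dev/sda2 /dev/md1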

When, probably a few hours later, the command completes, run pvdisplay again. You will see two Physical Volumes; the old one, /dev/sda2 in this example, should have Allocated PE equal to 0, and the new one correspondingly full. If this is not so, you will have to shrink some logical volume, which I describe at the end.
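To double-check, the relevant lines can be filtered out of pvdisplay, for example:

 pvdisplay | grep -E 'PV Name|Allocated PE|Free PE'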

So the old physical disk no longer holds any LVM data and can be removed from the volume group:

 vgreduce volume_group_name /dev/sda2
 pvremove /dev/sda2

Now the data already lives on RAID1, and it only remains to add the old disk's partition to the array:

  mdadm --manage /dev/md1 --add /dev/sda2 

The background rebuild of the array can be observed with cat /proc/mdstat
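For example, refreshing the view every few seconds:

 watch -n 5 cat /proc/mdstat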

It remains to move the /boot partition. But it is such a simple thing, and almost always effectively read-only, so we just prepare a new file system and copy the contents over.

 mkfs.ext4 /dev/md0
 mount /dev/md0 /mnt/
 rsync -av /boot/ /mnt/
 umount /mnt
 umount /boot
 mount /dev/md0 /boot

I do not think creating a file system, mounting it, and transferring the contents with rsync needs further explanation.

Then add the second partition to the boot-loader array and reinstall grub on both disks:

 mdadm --manage /dev/md0 --add /dev/sda1
 grub-install /dev/sda
 grub-install /dev/sdb

Do not forget to fix /etc/fstab: the device for /boot has changed. The root and the other file systems stayed where they were, with the old UUIDs and names.
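A sketch of the change, assuming /boot is mounted by UUID (the UUID below is a placeholder, take the real one from blkid):

 # find the UUID of the new /boot file system
 blkid /dev/md0
 # then the /boot line in /etc/fstab becomes something like:
 # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /boot  ext4  defaults  1 2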

Now create the mdadm configuration file (it is not strictly necessary, but it guarantees that the system will not rename the arrays at boot time); on CentOS it lives in /etc/mdadm.conf:

 echo "DEVICE partitions" > /etc/mdadm/mdadm.conf mdadm --detail --scan --verbose | awk '/ARRAY/ {print}' >> /etc/mdadm/mdadm.conf 

And you need to regenerate the initramfs (on CentOS 6 this is done with dracut rather than Debian's update-initramfs).
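For the currently running kernel, a sketch (paths assume a stock CentOS 6 layout):

 # rebuild the initramfs for the running kernel
 dracut -f /boot/initramfs-$(uname -r).img $(uname -r)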

When the arrays finish syncing, you can reboot for verification.

PS: I strongly advise you to first rehearse this on a virtual machine with your distribution. I could easily have forgotten something; it has been a long time since I did these kinds of tricks.


If you were not so lucky and there is not enough free space, you will have to carve the required piece off one of the LVM logical volumes. Or, by the way, maybe /boot could have been made a little smaller.

First you must shrink the file system (how depends on the file system: XFS, for example, cannot be shrunk at all, and many others must be unmounted to be shrunk). I recommend shrinking the file system by more than is strictly required, so as not to accidentally cut off used blocks: lvreduce does not check the file system size, it does not operate at that level.
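A sketch for an ext4 volume (the mount point and sizes are made up for illustration; the file system is shrunk to a size safely below the future LV size):

 # the file system must be unmounted and checked before shrinking
 umount /home
 e2fsck -f /dev/volume_group_name/logical_volume_to_shrink
 # shrink the file system to well under the size the LV will have
 resize2fs /dev/volume_group_name/logical_volume_to_shrink 50G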

Then reduce the size of the logical volume (in the example it is reduced by 1 GB):

 lvreduce -L-1G /dev/volume_group_name/logical_volume_to_shrink
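Afterwards the file system can be grown back to fill the reduced LV exactly (resize2fs with no size argument grows to the device size) and the volume remounted; the mount point is again just an example:

 resize2fs /dev/volume_group_name/logical_volume_to_shrink
 mount /dev/volume_group_name/logical_volume_to_shrink /home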
  • Thanks! Done, almost exactly following your recommendation. - Mihail Politaev