Commit 5df1f1c2 authored by Nils Knappmeier's avatar Nils Knappmeier

docs: Some more docs for raid1-to-lvm

parent 80cbe2f2
I have performed the following steps in order to get from a plain RAID-1 to a LVM-root-partition on the raid1.
**Attention: YOU MAY BREAK YOUR SYSTEM. This is a write-down of the steps I have performed. I haven't tested these exact instructions. In fact, I failed to execute them without errors and had to utilize the rescue system to get my system running again.**
The instructions might help others (and my future self) though, so I'm writing them down.
The [mdadm cheat sheet](http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/) has been very helpful.
## 1. Determine the root partition and remove one disc
* Use `mount` to determine the device-mapper file mounted on `/`.
* Use `cat /proc/mdstat` to determine the partitions used by this device.
In my case, I found that `/dev/md2` was mounted on `/`. It consisted of the partitions `/dev/sda3` and `/dev/sdb3`.
**Make sure you really have a RAID-1 before proceeding.**
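On my system the inspection looked roughly like this (the device names `/dev/md2`, `/dev/sda3`, `/dev/sdb3` are from my setup; yours will differ):

```shell
# Find the device mounted on / (here it turned out to be /dev/md2)
mount | grep ' / '

# List all md arrays and the partitions backing them
cat /proc/mdstat

# Confirm the array level really is raid1 before touching anything
mdadm --detail /dev/md2 | grep 'Raid Level'
```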
## 2. Remove one partition from the raid
Since RAID-1 is mirrored, this should not lead to any malfunctions, assuming the remaining hard disc does not fail.
* Use `mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3` to simulate a failure and remove `/dev/sda3` from the array.
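Before continuing, it may be worth checking that the array is now degraded but still running (a sketch, reusing the device names from above; the last command is destructive):

```shell
# mdstat should show md2 active with one slot empty, e.g. "[2/1] [_U]"
cat /proc/mdstat

# mdadm reports the degraded state in more detail
mdadm --detail /dev/md2

# If the kernel still recognises /dev/sda3 as part of an array, zeroing
# its superblock (destructive!) removes the old metadata before reuse
mdadm --zero-superblock /dev/sda3
```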
## 3. Create a new array on the removed disc
* **If `/dev/md2` previously contained an LVM, it may be advisable to purge `/dev/sda3` (using `dd if=/dev/zero of=/dev/sda3`) in order to remove any LVM signatures.** No guarantees, though.
* Use `mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2 /dev/sda3 missing` to create an incomplete RAID-1 containing only the removed partition.
## 4. Initialize LVM
* Use `vgcreate vgmain /dev/md3` to create a volume group on the new array
* Use `lvcreate -L400G -nlvmain vgmain` to create a new 400 GB logical volume (my VG is 449 GB large, which leaves a comfy 49 GB for snapshots).
You now have `/dev/vgmain/lvmain` available as the new root device.
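The result can be checked with the standard LVM reporting commands, which make no changes:

```shell
# Show the physical volume (initialized implicitly by vgcreate),
# the volume group and the logical volume
pvs /dev/md3
vgs vgmain
lvs vgmain
```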
## 5. Transfer filesystem contents
* Use `mkfs.ext4 /dev/vgmain/lvmain` to create a new ext4 filesystem on the new logical volume.
* Mount the new file system `mount /dev/vgmain/lvmain /mnt`
* Use `fsfreeze -f /` to freeze the root file-system. **Attention: You may block the whole system at this point. Keep another SSH session open to be able to reboot.** Write operations to the root file-system will be deferred, which means you can now...
* ... use `cp -ax / /mnt` to copy the whole filesystem to the new logical volume.
* Use `fsfreeze -u /` to unfreeze the system.
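Put together, the copy step looks like this (a sketch of the commands above; keep a second SSH session open before freezing):

```shell
mkfs.ext4 /dev/vgmain/lvmain   # new ext4 filesystem on the logical volume
mount /dev/vgmain/lvmain /mnt

fsfreeze -f /                  # defer all writes to the root filesystem
cp -ax / /mnt                  # -a: preserve attributes, -x: stay on one filesystem
fsfreeze -u /                  # unfreeze the root filesystem
```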
## 6. Install grub and reboot the system
**This is the point where I made some mistakes and got a non-bootable system.**
I am not sure what exactly I did to install grub and what I did wrong, but some things to consider:
* Use `chroot /mnt`
* `/boot` must be mounted in the chrooted environment
* Make sure that `/var/run/lvm` is available
* There is an "lvm" module for grub that might be needed.
* My `/etc/mdadm.conf` did not contain an entry for `md3`.
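I cannot reconstruct the exact commands I used, but on a Debian-like system the chroot setup usually looks something like the following sketch (the `/boot` device and the `grub-install` targets are assumptions from my layout; adapt them to yours):

```shell
# Make the usual virtual filesystems and /boot visible inside the chroot
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
mount /dev/md1 /mnt/boot     # whichever array/partition holds /boot (assumption)

chroot /mnt

# Inside the chroot: make sure the lvm run directory exists,
# regenerate the grub config and reinstall grub on both discs
mkdir -p /run/lvm
update-grub
grub-install /dev/sda
grub-install /dev/sdb
update-initramfs -u          # so the initramfs knows about md3 and the LVM
```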
## 7. Move the remaining partition to the new array
The root file-system should now be mounted from `/dev/vgmain/lvmain`, which resides on `/dev/md3`.
* Use `mdadm --stop /dev/md2` to stop the old raid array
* Use `mdadm --remove /dev/md2` to remove it.
* Use `mdadm --add /dev/md3 /dev/sdb3` to add the now-unused partition to the new array.
* Monitor the progress with `watch cat /proc/mdstat`.
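Since my `/etc/mdadm.conf` contained no entry for `md3` (see step 6), it is probably wise to record the new array once the resync is complete (a sketch; on Debian-like systems the file lives at `/etc/mdadm/mdadm.conf` instead):

```shell
# Append the definition of the new array to the mdadm config
mdadm --detail --scan | grep md3 >> /etc/mdadm.conf

# Rebuild the initramfs so the array is assembled at boot
update-initramfs -u
```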