Hello World!
It’s been a few months since my last post; I’ve definitely let my blog go by the wayside. It just seems I’ve been running out of things to tinker with. But when I do get around to doing something that I can’t find much information on while googling, I try to document it somewhere.
Recently, I acquired a 1TB drive for my
Gaming Desktop. I wasn’t able to afford additional hard drives when I purchased that system, so I’ve just been running with a 2GB swap, a 20GB root, and ~478GB for home. No fault tolerance, and no way to add more space when my home fills up short of adding a new drive with its own partition and mount point. So when I got the 1TB drive, I came up with what I believe is the best I could do under the circumstances.
Optimally, you want identically sized hard drives for a RAID array. However, with Linux software RAID that’s not strictly required, since it works on partitions rather than whole drives. So I should be able to RAID 1 a 500GB partition on the 500GB drive (the whole disk) with a 500GB partition on the 1TB drive (half of it) and put LVM on top for future expansion. Now, googling for LVM on RAID for Ubuntu will invariably get you lots of results, like this one or this one.
Unfortunately, almost all of that documentation is old, written for 9.04 or earlier, and typically either uses LILO for true LVM on RAID or keeps a /boot outside the LVM for GRUB, with most guides relying on the Alternate installer CD. With 9.10 and up shipping GRUB 2 by default, it’s supposed to support booting directly from LVM volumes, negating the need for a separate /boot or LILO.
I wanted to know if it would be possible to manually set up my drives using the standard mdadm and lvm2 commands, then use the graphical installer to select my logical volumes, install, and boot. This is what I did.
So as a first step I put the standard 64-bit Ubuntu 10.04 LiveCD into my drive and booted. I selected the “Try Ubuntu 10.04 Now” option at the new graphical menu that appeared and got to a desktop.
I found my two drives and realized my new 1TB drive is now being identified as /dev/sda and the old 500GB drive is /dev/sdb.
First things first: I had a bunch of data on my existing 500GB drive that I wanted to make sure I didn’t lose. Keep in mind, one possible solution here (if you only have another 500GB drive) is to make a degraded RAID 1 on the new drive, copy the data onto it, then “add” the original drive into the array, which syncs the data back. Since I had a 1TB drive, though, I had extra space, so I created a roughly 400GB partition as /dev/sda4 at the back of the 1TB drive and copied my data over. Problem solved.
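For reference, the copy itself was just a matter of formatting the temporary partition, mounting it alongside the old home partition, and copying everything across. Roughly like this (the mount points and the old home partition name /dev/sdb3 here are only examples, so adjust them to your own layout, and use cp -a if rsync isn’t available on the LiveCD):
mkfs.ext4 /dev/sda4
mkdir /mnt/stash /mnt/oldhome
mount /dev/sda4 /mnt/stash
mount /dev/sdb3 /mnt/oldhome
rsync -a /mnt/oldhome/ /mnt/stash/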
In my searches I came across this blog, which did half of what I wanted (getting RAID 1 set up) but didn’t do LVM. For that part I just referred to a standard Linux LVM document and modified the commands to fit my needs, starting at step 3.
So, beginning with the RAID, I dropped to a terminal and obtained root:
sudo su -
Then I had to install the mdadm and lvm2 tools, as they don’t come on the standard LiveCD (so you’ll need internet access, or to download these debs and their dependencies ahead of time).
apt-get update
apt-get install mdadm lvm2
Strangely, mdadm pulls postfix (an SMTP client/server) down with it, presumably so it can email you alerts when an array degrades. I simply selected No Configuration and moved on with my life.
So now that I have access to mdadm and the various LVM tools such as pvcreate, vgcreate, lvcreate, etc., I can start to build my setup. The first step is to modify my old 500GB drive (/dev/sdb): destroy all the existing partitions and create one big partition spanning the whole drive. I used GParted for this, so we can use as many GUIs as possible.
What’s important is to note the exact size of the partition. GParted nicely pre-fills the whole drive space by default when you create a new partition, so simply copy that size, leave zero free space at the beginning and end, and create the partition. Remember to leave it unformatted.
Now we go to the 1TB drive (/dev/sda), where we see my temporary partition at the end of the drive holding my data, and we create a new partition at the beginning of the drive, pasting in the exact same size. We then commit the changes.
Now we want to set the partition type to “Linux raid autodetect” (type fd). We do this by opening a terminal and using fdisk (t changes the partition type, fd is the RAID autodetect type, and w writes the changes):
fdisk /dev/sda
t
fd
w
Note: If you have more than one partition on the drive you’ll need to select the right one before entering “fd”. Repeat the same steps for the partition on the other drive (/dev/sdb in my case).
We now have two partitions, on separate drives, with the exact same size. I like to verify, so let’s check:
fdisk -l
We can see the block size is identical (in my case, 488384001).
Now we go ahead and create our RAID 1 array.
mdadm --create /dev/md0 --verbose --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
Now, we need to check the RAID status:
cat /proc/mdstat
Uh oh, the RAID array is building itself, so before we continue we should wait until that’s finished. An alternative method would be to use the keyword “missing” in the above command in place of either /dev/sdb1 or /dev/sda1 to create a degraded array. You could then continue with the installation without waiting for the RAID to finish building, and after you’re done installing, add the second drive into the array and let it rebuild then. There may be no downside to doing it that way; I just preferred to get it out of the way now.
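If you’d rather take that degraded route, the commands would look roughly like this (a sketch only; I didn’t do it this way myself):
mdadm --create /dev/md0 --verbose --level=1 --raid-devices=2 /dev/sda1 missing
Then, once you’ve finished installing and copying your data over, add the second drive in and let it sync:
mdadm --add /dev/md0 /dev/sdb1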
This command will help you watch the building process:
watch -n0.2 cat /proc/mdstat
When it’s finally complete, we can create our physical volume for our LVM on /dev/md0.
pvcreate /dev/md0
Now we need a new volume group (system_vg, root, swap, and home can all be changed to whatever you want to name your volume group and logical volumes):
vgcreate system_vg /dev/md0
And finally we create our logical volumes. I wanted the same layout as before: a 2GB swap, a 20GB root, and the rest as home.
lvcreate -L 2048 -n swap system_vg
lvcreate -L 20480 -n root system_vg
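(A bare number passed to -L is interpreted as megabytes, which is why 2048 and 20480 give 2GB and 20GB. lvcreate also accepts explicit size suffixes if you find those easier to read:)
lvcreate -L 2G -n swap system_vg
lvcreate -L 20G -n root system_vg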
Now, at this point I have my swap and root created, but I don’t know exactly how much space is left in the volume group. So I do a quick listing:
vgdisplay
And note the number of free PEs (physical extents) in the volume group. To create my home the command changes a little: instead of -L for a size in megabytes, I’ll use -l for a number of extents:
lvcreate -l 113602 -n home system_vg
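If you’d rather not count extents yourself, lvm2 also accepts a percentage here, which should accomplish the same thing:
lvcreate -l 100%FREE -n home system_vg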
Let’s make sure with another vgdisplay that there are now 0.00 free PEs in the volume group, and we’re done with LVM!
Now, I personally wanted to format my home volume myself to make sure there is no reserved space for root on it (ext4 reserves 5% of the filesystem for root by default; -m 0 turns that off):
mkfs.ext4 -m 0 /dev/system_vg/home
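(If the filesystem had already been created, the same thing can be done without reformatting:)
tune2fs -m 0 /dev/system_vg/home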
At this point we are done with the drives. We now have a RAID 1 set up on /dev/md0 across the two drives, with logical volumes on top that can easily be extended later when we get more drive space. I could obviously add the remaining 500GB of my 1TB drive to the home space, making it nearly 900GB, but I don’t want to do that because that additional 500GB wouldn’t be raided. That means I wouldn’t know which of my data is stored on the fault tolerant RAID 1 array and which of it would be lost if I lose the drive.
However, when I get another drive I can make another RAID 1, make it a physical volume, add it to my volume group, and extend home out easily.
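For the curious, that expansion would look roughly like this. The device names are hypothetical: assume the leftover space on the 1TB drive becomes /dev/sda2 and the new drive gets a matching partition /dev/sdc1.
mdadm --create /dev/md1 --verbose --level=1 --raid-devices=2 /dev/sda2 /dev/sdc1
pvcreate /dev/md1
vgextend system_vg /dev/md1
lvextend -l +100%FREE /dev/system_vg/home
resize2fs /dev/system_vg/home
The last command grows the ext4 filesystem to fill the newly extended logical volume.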
We are now done with drives!
So, now that our drives are ready, we just install Ubuntu like normal by clicking “Install Ubuntu 10.04” on the desktop. Go through the standard Language, Keyboard Layout, Timezone, etc. steps. When you get to the hard drive layout step, select “Manually select partitions” and click next. In here you’ll see all your created logical volumes, and you’ll also see your RAID device and the internal drives. You want to select only your logical volumes and assign them as appropriate (in my case: the swap volume as swap (formatted), the root volume as / (formatted ext4), and the home volume as /home (unformatted, since I already did it)).
That’s it; don’t select any of the RAID devices or physical drives. Now finish the installation like you normally would.
After the install is complete it will ask whether you want to continue trying Ubuntu or reboot. Select continue trying, as we need to make a few last-minute changes to our new Ubuntu before it will work. Taken from the bottom of the Foobaroos blog:
mount /dev/system_vg/root /target/
mount --bind /dev/ /target/dev/
mount --bind /sys/ /target/sys/
mount --bind /proc/ /target/proc/
chroot /target
apt-get update
apt-get install lvm2 mdadm
grub-install /dev/sda
grub-install /dev/sdb
And that’s it! Ubuntu already knew you were installing and booting to LVM, so it already configured GRUB 2 for you. You can now reboot and get right into your freshly installed Ubuntu desktop.
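Once you’re back in the new install, a quick sanity check from a terminal never hurts, just to confirm the array and volumes came up:
cat /proc/mdstat
sudo pvs
sudo lvs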
Next, comes the testing. Did it really work?
Well, my testing consisted of shutting down, pulling the SATA cable of one of the drives, and booting up the computer. Right away Ubuntu gives me a huge warning that my RAID array is degraded and asks if I really want to activate it in that state. I said yes and booted all the way to my desktop.
So far so good. Shut back down, put that SATA cable back in, and boot up. A check of /proc/mdstat showed the array rebuilding itself back into sync. Yup, working beautifully. Once that completed I did the exact same thing, but with the other drive disconnected this time. Every test worked exactly as expected.
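A side note: if a reconnected drive doesn’t rejoin the array on its own, you should be able to add it back by hand and then follow the resync with the same watch command from earlier (adjust the device name to whichever drive was unplugged):
mdadm --add /dev/md0 /dev/sdb1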
So, it works! LVM on RAID 1 without using the Alternate installer disc. It does require some command line work and some initial setup of the new Ubuntu install, but all in all it wasn’t that hard.
Hope this helps somebody out there!