This is a continuation of a series. See Part 1 here.
Now that I have the base system installed on a RAID1 array, along with a Local Storage Repo residing there, I want to create an array for "bulk" storage. I have three 8TB disks that I want to put into a RAID5 array. This will give me a 16TB block device I can use as a sort of home NAS. Since this is RAID5, we can expand this storage more efficiently later just by adding another 8TB (or larger) disk. Additionally, I can use LVM to split this storage between NAS storage and another Local Storage Repo for larger VM disk allocations.
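To give a rough idea of what that future expansion looks like, here's a sketch only; the device name of the new disk is illustrative, and the array we build below ends up as /dev/md6:
# Partition the new disk the same way as the existing members, then add it to the array
mdadm --add /dev/md6 /dev/sdw1
# Reshape from 3 to 4 members; this runs in the background and takes many hours
mdadm --grow /dev/md6 --raid-devices=4
# Afterwards, the partition, PV, and LVs on top need to be grown to match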
Prep the disks
Before we do anything, we need to adjust the device timeouts on EACH drive you have installed. It's not uncommon for SATA drives to take a long time to recover from an error, and if this results in a controller reset, you risk RAID array failure and data loss. For more info on this, I would recommend reading https://strugglers.net/~andy/blog/2015/11/09/linux-software-raid-and-drive-timeouts/ . That article mentions setting the error-recovery timeout on the disks themselves. In my case, even though the command to set the timeouts didn't fail, it never seemed to take effect, as the timeout values could not be listed and verified. Instead, to be safe, I'd recommend applying a 180 second (3 minute) timeout to all the drives in your system using this command:
for disk in /sys/block/*/device/timeout ; do echo 180 > "$disk" ; done
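To double-check that the new value took on every drive, you can simply read the values back:
for disk in /sys/block/*/device/timeout ; do echo "$disk: $(cat $disk)" ; done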
Then add a udev rule so that it applies on startup AND when new drives are inserted:
cat << 'EOF' > /etc/udev/rules.d/60-set-disk-timeout.rules
# Set newly inserted disk I/O timeout to 3 minutes
ACTION=="add", SUBSYSTEMS=="scsi", DRIVERS=="sd", ATTRS{timeout}=="?*", RUN+="/bin/sh -c 'echo 180 >/sys$DEVPATH/timeout'"
EOF
systemctl restart systemd-udevd
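If you want to confirm the rule actually matches, udevadm can dry-run it against one of your disks (replace sdz with a real device):
udevadm test /sys/block/sdz 2>&1 | grep -i timeout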
Next, I would recommend removing all partitions on each disk and re-setting the disk labels to GPT. Beware! This and pretty much everything to follow is destructive. Be sure you are working with the correct disk! Checking the "dmesg" command output after inserting a new disk is usually how I verify.
parted /dev/sdz mklabel gpt
# Respond "Yes" to confirm that all data on the disk will be lost
Partition the drives (replacing /dev/sdz with your disk). Repeat this for each of your disks, or use the loop shown just after this command:
echo -en "mklabel gpt\nmkpart primary ext4 0% 100%\nset 1 raid on\nalign-check optimal 1\nprint\nquit\n" | parted -a optimal /dev/sdz
RAID5 and LVM Setup
Now, let's build our RAID5 array:
mdadm --create --verbose /dev/md6 --level=5 --raid-devices=3 /dev/sdx1 /dev/sdy1 /dev/sdz1
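The initial array sync on 8TB disks takes a long time; you can monitor its progress with:
cat /proc/mdstat
# or refresh it every few seconds:
watch -n 5 cat /proc/mdstat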
# Set up mdmonitor mail
# In /etc/ssmtp/ssmtp.conf:
mailhub=mail.your-smtp-server.com
# In /etc/mdadm.conf:
MAILADDR address@youremail.com
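While you're editing /etc/mdadm.conf, it's worth making sure the new array has an ARRAY line there too (skip this if one already exists):
mdadm --detail --brief /dev/md6 >> /etc/mdadm.conf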
Then partition the resulting md device to hold one large LVM partition:
gdisk /dev/md6
# Commands within gdisk...
p
# You should see no partitions
n
# Hit Enter (3x) when asked for partition number, first, and last sectors
# Use a Hex code of 8e00 for the filesystem type of Linux LVM
p
# You should then see something like this as your partition table, with a different size, obviously:
Number Start (sector) End (sector) Size Code Name
1 2048 31255572446 14.6 TiB 8E00 Linux LVM
# Write partition table and exit gdisk:
w
# Answer "Y" to confirmation to write GPT data
To clarify the above, here are the commands that are run in gdisk:
(p)rint partition tables
(n)ew partition
(enter) Use partition number 1 (default)
(enter) Start at first sector on device (default)
(enter) End at last sector on device (default)
(8e00) Linux LVM partition code
(p)rint the partition table
(w)rite the partition table and exit
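If you'd rather script this step instead of driving gdisk interactively, the same partition can be created with sgdisk (usually shipped in the same package as gdisk), along these lines:
sgdisk --new=1:0:0 --typecode=1:8e00 --change-name=1:"Linux LVM" /dev/md6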
Next, let's set up LVM using the partition:
# Tell the kernel to reload the partition table
partprobe /dev/md6
# Create the Physical Volume
pvcreate --config global{metadata_read_only=0} /dev/md6p1
# Create the Volume Group "BulkStorage00"
vgcreate --config global{metadata_read_only=0} BulkStorage00 /dev/md6p1
# Activate the volume group (this does not persist across reboots; see the workaround below)
vgchange --config global{metadata_read_only=0} -ay
# Possibly unnecessary: edit /etc/grub.cfg and remove "nolvm" from the "XCP-ng" menu entry
# Also probably not needed, but it doesn't hurt: rebuild the initramfs
dracut --force
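At this point it doesn't hurt to sanity-check the LVM stack before moving on; you should see /dev/md6p1 listed as a PV and BulkStorage00 as a VG:
pvs
vgs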
Activating Volume Groups on Boot
Disclaimer: I know, I know. This is admittedly a hack. There's probably a better way of doing this that fits within the design of XCP-NG that I'm not aware of. If someone suggests a better way of handling this, I'll gladly update here.
When we reboot, our Logical Volumes won't be active. I believe this has something to do with XCP-NG/XenServer's unique handling of LVM. I've banged my head on this problem for far too long, so, begrudgingly, here's the workaround:
# Make rc.local executable
chmod u+x /etc/rc.local
# Enable the rc.local service
systemctl enable rc-local
# Add our vgchange activation command to rc.local
echo "vgchange --config global{metadata_read_only=0} -ay ; sleep 1" >> /etc/rc.d/rc.local
Local Storage Repository
Now let's create a 200GB logical volume (start small, since we can expand it later):
lvcreate --config global{metadata_read_only=0} -L 200G -n "RAID_5_Storage_Repo" BulkStorage00
And create the Storage Repo in XCP-NG (using the "ext" filesystem for thin provisioning):
xe sr-create content-type=user device-config:device=/dev/disk/by-id/dm-name-BulkStorage00-RAID_5_Storage_Repo host-uuid=<tab_to_autocomplete_your_host_uuid> name-label="RAID 5 Storage Repo" shared=false type=ext
And that's it! The above creates another set of LVM layers (a physical volume, volume group, and logical volume natively managed by XCP-NG) on top of your existing BulkStorage00/RAID_5_Storage_Repo logical volume. You should now have a 200GB storage repo named "RAID 5 Storage Repo" in XCP-ng Center and/or XOA. Next, let's set up a bulk store to share out via SMB/CIFS...
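If you're curious, you can see both layers from dom0; the outer layer is the one we built by hand, and the inner PV/VG that xe sr-create laid down on top of it will show up under an XCP-ng-generated name (the exact name will vary):
# Our hand-built layer
lvs BulkStorage00
# The SR's own PV/VG should now also appear in the full listings
pvs
vgs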
Bulk Storage Share Setup
First, let's allocate 500GB of that space for another Logical Volume named "BulkVolume00":
lvcreate --config global{metadata_read_only=0} -L 500G -n "BulkVolume00" BulkStorage00
And let's format it with ext4:
mkfs.ext4 /dev/BulkStorage00/BulkVolume00
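Since we started small on purpose, note for later that this volume can be grown online with lvextend followed by resize2fs; a sketch (the size here is arbitrary):
lvextend --config global{metadata_read_only=0} -L +100G /dev/BulkStorage00/BulkVolume00
resize2fs /dev/BulkStorage00/BulkVolume00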
Let's set up /etc/fstab (edit with vi or nano) to mount the device by adding the following line to the end of the file:
/dev/BulkStorage00/BulkVolume00 /opt/BulkVolume00 ext4 rw,noauto 0 0
And let's create a mount point for the device and mount it:
mkdir /opt/BulkVolume00
mount /opt/BulkVolume00
We'll also want to set this up to mount on boot (note: as above with our Storage Repository, we have to use a workaround to mount this AFTER the logical volumes have been activated):
echo "mount /opt/BulkVolume00" >> /etc/rc.d/rc.local
And lastly, let's get that shared out via SMB:
# Install Samba server
yum --enablerepo base,updates install samba
# Make a backup copy of the original SMB configuration (just in case)
cp -rp /etc/samba/smb.conf /etc/samba/smb.conf.orig
Then edit /etc/samba/smb.conf, erasing the contents and using this as your template:
[global]
    workgroup = MYGROUP
    # This must be less than 15 characters.
    # Run "testparm" after editing this file to verify.
    netbios name = XCP-NG-WHATEVER
    security = user
    map to guest = Bad User

[BulkVolume00]
    comment = Bulk RAID5 Storage Volume
    path = /opt/BulkVolume00
    guest ok = yes
    writable = yes
    read only = no
    browseable = yes
    force user = root
    create mode = 0777
    directory mode = 0777
    force create mode = 0777
    force directory mode = 0777
Note that this is set up to be very permissive for use on a secured/home network. Setting this up for more elaborate security is beyond the scope of this tutorial.
Next, let's (re)start Samba and poke the required holes in the firewall:
# Verify your smb.conf is sane
testparm
# Enable and start samba services
systemctl enable smb.service
systemctl enable nmb.service
systemctl start smb.service
systemctl start nmb.service
# Edit "/etc/sysconfig/iptables" and add the following lines below the port 443 rule:
-A RH-Firewall-1-INPUT -m conntrack --ctstate NEW -m udp -p udp --dport 137 -j ACCEPT
-A RH-Firewall-1-INPUT -m conntrack --ctstate NEW -m udp -p udp --dport 138 -j ACCEPT
-A RH-Firewall-1-INPUT -m conntrack --ctstate NEW -m tcp -p tcp --dport 139 -j ACCEPT
-A RH-Firewall-1-INPUT -m conntrack --ctstate NEW -m tcp -p tcp --dport 445 -j ACCEPT
# Then restart iptables to apply
systemctl restart iptables
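You can confirm the new rules are in place with:
iptables -L -n | grep -E 'dpt:(137|138|139|445)'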
Finally, test this share from Windows (or your favorite SMB/CIFS client) by opening the UNC path to your server's IP, e.g. \\192.168.1.2\BulkVolume00\
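If you'd rather test from a Linux machine, smbclient can list the shares anonymously (substitute your server's IP):
smbclient -N -L 192.168.1.2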
The next installment of this series will be a rough collection of tips on managing this XCP-NG RAID5 combo. Stay tuned.