Consider the following scenario: we have a VM that requires access to a rather large disk (greater than 1 terabyte). Further, suppose that using a local datastore is out of the question – presumably because the datastore is a rather expensive SSD and space is a scarce resource. RDM to the rescue!
Raw Device Mapping (RDM) is a method of giving a virtual machine direct access to a physical disk – in this case, a local SATA drive connected to a SATA controller. In this blog post I’ll show you how to create an RDM and configure unRAID to mount that drive on boot.
In the enterprise space, there’s definitely a strong, albeit technical, reason why an admin would choose Raw Device Mapping (RDM) over VMware’s Virtual Machine File System (VMFS); you can look here if you care to read up on it. I won’t delve into the benefits of RDM – I’ll assume the reader knows what it is and is ready and willing to implement it on their server. The majority of the work requires SSH access to the ESXi server and, later, to the unRAID server, so I’ll also assume the reader is comfortable on the command line.
ESXi Side of Things
Step 1 – Find Your Disk
The first step is determining which disk you’ll use for the RDM (presumably the disk has already been physically installed in the server). Log in with an SSH client and issue the following command:
~ # fdisk -l
Just as an example, I’ll use an SSD to illustrate the procedure (Brand: Kingston, Model: SV300S37A120G, Serial Number: obfuscated). I normally wouldn’t do this with an SSD, but the procedure is no different with a mechanical hard drive. Take note of the model and serial number – if you have multiple drives of the same type, it can get confusing. All right, I see the disk I’m after.
Step 2 – Find the VML Identifier
The next command determines the VML identifier of the disk. This is important, as the identifier will be used in Step 3 to create the RDM.
~ # ls -al /dev/disks/
So at this point, I was able to locate the VML identifier. Copy it to an empty file.
If your disk has partitions, you will see one VML identifier per partition, suffixed “:1”, “:2” and so on, alongside an identifier for the disk itself. In this case, the disk has one partition. Because we are interested in the whole disk, not a partition, choose the VML identifier without the “:N” suffix at the end.
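If you’re scripting any of this, that “:N” suffix can be stripped with plain shell parameter expansion. A minimal sketch, assuming a POSIX shell and using a shortened, made-up identifier in place of a real one:

```shell
# A made-up, shortened VML identifier with a ":1" partition suffix.
vml="vml.0100000000abcdef:1"

# Strip everything from the first ":" onward to get the whole-disk identifier.
disk="${vml%%:*}"

echo "$disk"
# prints: vml.0100000000abcdef
```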
Step 3 – Create the RDM
An existing VMFS datastore is required because the RDM is represented by a pointer VMDK file, and that file has to live on a VMFS datastore. I’ve created a directory called RDMS in my local datastore.
~ # cd /vmfs/volumes/local_ssd_datastore1/
~ # mkdir RDMS
~ # cd RDMS
Now, create the RDM with the vmkfstools command. You’ll need to substitute your own VML identifier and a name for the RDM; in the example I chose the drive model number (KINGSTON_SV300S37A120G) as the name. Be sure you’re in the RDMS directory before running the command.
~ # vmkfstools -r /vmfs/devices/disks/vml.010000000035303032364237373341303332323731202020204b494e475354 KINGSTON_SV300S37A120G.vmdk -a lsilogic
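A note on the -r flag: it creates a virtual compatibility RDM, which supports VM snapshots. vmkfstools also accepts -z, which creates a physical compatibility RDM that passes nearly all SCSI commands straight through to the device. The physical-mode equivalent would look like this (the identifier is a placeholder – substitute your own):

```shell
~ # vmkfstools -z /vmfs/devices/disks/vml.your_identifier_here KINGSTON_SV300S37A120G.vmdk
```

For a simple pass-through disk like this one, either mode works; I used -r as shown above.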
Good! Half done.
unRAID Side of Things
Step 1 – Add your RDM Disk to unRAID
You will need to shut down your unRAID VM before adding a new disk. The process is rather straightforward: edit the VM settings and, using the Add… button, add a new Hard Disk. When it comes time to select a disk, choose Use an existing virtual disk and browse to the RDM we created above.
Step 2 – Change SCSI Controller Type
For some reason I had to change the SCSI Controller Type to LSI Logic SAS in order to get unRAID to recognize the new RDM hard disk. This is quite easy to do: edit the VM settings, click SCSI Controller 0 and hit the Change Type… button.
Step 3 – Mount Disk on unRAID Boot
You could either add the disk to your existing array or add it as a cache disk. I will instead elect to mount it manually on boot. Let’s locate the disk, again with the fdisk command.
root@Tower:~# fdisk -l
In my case, the disk shows up as /dev/sda. If you haven’t done so already, use fdisk to create a new primary partition.
root@Tower:~# fdisk /dev/sda
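For reference, creating a single primary partition spanning the disk involves the keystrokes n (new), p (primary), 1 (partition number), two Enters to accept the default start/end, and w (write). The same sequence can be sketched non-interactively by piping the keystrokes in – this assumes util-linux fdisk, and it rewrites the partition table, so triple-check the target device first:

```shell
# DANGER: rewrites the partition table on /dev/sda – verify the device first!
# Keystrokes: n=new, p=primary, 1=partition number, two Enters for defaults, w=write.
printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sda
```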
If you don’t know how to use fdisk, see here. Then, format the newly created partition. I used the ext4 filesystem, but you could use ReiserFS as well – just substitute mkreiserfs for mkfs.ext4.
root@Tower:~# mkfs.ext4 /dev/sda1
Start-up parameters in unRAID are entered in the go file on the unRAID flash drive.
First, let’s see how we can reference the raw disk. I typically poke around in the following directories:
root@Tower:~# ls -al /dev/disk/by-id/ | grep sda
root@Tower:~# ls -al /dev/disk/by-uuid | grep sda
So it turns out I was able to spot a reference to the disk in /dev/disk/by-uuid/.
root@Tower:~# ls -al /dev/disk/by-uuid | grep sda
lrwxrwxrwx 1 root root 10 2014-11-25 21:40 83a79385-55f0-452c-bb5b-b223f979ba0f -> ../../sda1
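Those by-uuid entries are just symlinks pointing back at the device nodes, which is why grepping for sda finds them. Here’s a small, self-contained sketch of the mechanism, using a scratch directory to stand in for /dev/disk/by-uuid (the UUID is the one from my disk above):

```shell
# Use a scratch directory to stand in for /dev/disk/by-uuid.
demo=$(mktemp -d)

# udev populates by-uuid with symlinks that point back at the device node.
ln -s ../../sda1 "$demo/83a79385-55f0-452c-bb5b-b223f979ba0f"

# readlink reveals which device node a UUID entry belongs to.
target=$(readlink "$demo/83a79385-55f0-452c-bb5b-b223f979ba0f")
echo "$target"
# prints: ../../sda1

rm -rf "$demo"
```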
I then add the following to the go file with the vi editor (here’s a good primer on the vi editor, by the way). The mount command (line 2) will mount the RDM disk on the mount point /mnt/disk/downloads created on line 1. Change it to suit your needs.
mkdir -p /mnt/disk/downloads
mount -t ext4 /dev/disk/by-uuid/83a79385-55f0-452c-bb5b-b223f979ba0f /mnt/disk/downloads
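One caveat: if the RDM disk is ever absent (pulled, failed, or the pointer VMDK removed), that bare mount line will simply error out at boot. Here’s a slightly more defensive variant of the same two lines – a sketch using the same UUID and mount point, so adjust to suit:

```shell
# Mount the RDM disk only if its by-uuid symlink actually exists,
# so a missing disk doesn't leave an error in the go file at boot.
DEV=/dev/disk/by-uuid/83a79385-55f0-452c-bb5b-b223f979ba0f
MNT=/mnt/disk/downloads

mkdir -p "$MNT"
if [ -e "$DEV" ]; then
    mount -t ext4 "$DEV" "$MNT"
else
    echo "RDM disk not found, skipping mount: $DEV" >&2
fi
```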
And finally, if you want to share the drive on your network via SMB/CIFS, you can simply add the snippet below to the following file:
[downloads]
path = /mnt/disk/downloads # !!! change_this_if_necessary !!!
comment = Download Drive # !!! change_this_if_necessary !!!
browseable = yes
# Secure
public = yes
writeable = no
write list = users # !!! change_this_for_your_users !!!
Reboot and hopefully it all works on boot. If it does, pat yourself on the back. Congrats! You did it. Bask in the knowledge that you accomplished something really cool – though don’t bother telling your wife; she probably doesn’t really care ;-(