So this has been on my todo list for a while, but I’ve finally been able to play around with PernixData. So what does it do?
It gives us low-latency reads and writes by using server-side flash or RAM. So think of it as a storage tier sitting between the virtual machine and the datastore, offering either Write Through (read) or Write Back (read & write) caching. (Note: when setting up Write Back caching you have the option to define replication partners so you don’t lose data.)
So this allows for improved performance while still offloading burst IO from the underlying datastore (which might be NFS, iSCSI, FC and so on); note that it is only supported on VMware. The golden part (besides the flash part) is that this is a pure software solution. Their website says it takes about 10 minutes to set up — that’s just wrong, I used about 7 minutes max! There are two pieces (or three, actually) to get it up and running. First is the host extension software, which is basically a VIB that needs to be installed on each host. I did it using SSH and FTP; of course it is possible to use VUM as well.
esxcli software vib install -d <ZIP file name with full path> --no-sig-check
Next we need to install a management server, which needs to be a Windows server with a SQL database to store data. It is important to remember that PernixData stores about 0.5 MB of data per VM per day, so size accordingly.
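To put that sizing figure in perspective, here is a quick back-of-the-envelope sketch. The 0.5 MB per VM per day is the number quoted above; the VM count and retention period are made-up example values, not anything from PernixData:

```python
# Rough sizing sketch for the management server database.
# 0.5 MB per VM per day is the figure quoted in the text; the VM
# count and retention period below are hypothetical examples.
MB_PER_VM_PER_DAY = 0.5

def db_growth_gb(vm_count: int, days: int) -> float:
    """Estimated database growth in GB over the given period."""
    return vm_count * MB_PER_VM_PER_DAY * days / 1024

# e.g. 100 VMs, data kept for a year:
print(round(db_growth_gb(100, 365), 1))  # roughly 17.8 GB
```

So even a fairly large environment only produces a handful of gigabytes per year.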
After the management server is installed you have to add the plugin to vCenter (I like the C# client since I was using 5.5), so when I started vCenter I went into Plug-ins –> Manage Plug-ins.
After the plugins were enabled and active I was able to log in to the management console (remember though that you need a VMware cluster for the management to work).
To give myself an easy start I wanted to try out a RAM-based FVP cluster on one of the hosts and give it a spin.
So I created a RAM-based cluster on one of the hosts (choose Create Cluster and then add the type of resource you want, flash or RAM). You can decide yourself how much RAM you want allocated to the FVP cluster. (And no, you don’t need 40 GB assigned to an FVP cluster just to play around — we’ll get to that part in a bit.)
Then I chose to enable Write Back (which means that the content in the FVP cluster is not directly in sync with what’s on the datastore — so if my server happened to go down, that data would be lost, since it is stored in RAM. But again, it gives a good write boost since the VM doesn’t need to wait on the datastore). So I did a quick test before and after adding my virtual machine to the FVP cluster (without any further tuning, just adding the VM to the cluster). What happens underneath is that PernixData becomes part of the hypervisor, kind of like a filter which the VM’s IO has to go through when reading from and writing to the datastore.
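The Write Through vs. Write Back trade-off can be sketched in a few lines of Python. This is a toy model of the two policies in general, not PernixData’s actual implementation: in Write Through mode every write reaches the slow datastore before it is acknowledged, while in Write Back mode the write is acknowledged as soon as it hits the fast cache and only reaches the datastore on a later flush — so a crash before that flush loses the data.

```python
# Toy model of the two caching policies; the dicts stand in for a
# fast tier (RAM/flash) and slow backing storage.

class CachedDatastore:
    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.cache = {}      # fast tier (RAM/flash)
        self.datastore = {}  # slow backing storage

    def write(self, key, value):
        self.cache[key] = value
        if not self.write_back:
            # Write Through: datastore is updated before we acknowledge.
            self.datastore[key] = value

    def flush(self):
        # Write Back: dirty cache contents reach the datastore on flush.
        self.datastore.update(self.cache)

    def crash(self):
        # Host goes down: whatever was only in the RAM cache is gone.
        self.cache = {}

wt = CachedDatastore(write_back=False)
wt.write("block1", "data")
wt.crash()
print(wt.datastore.get("block1"))   # 'data' -- survived the crash

wb = CachedDatastore(write_back=True)
wb.write("block1", "data")          # acknowledged fast; datastore untouched
wb.crash()                          # flush never happened
print(wb.datastore.get("block1"))   # None -- the write was lost
```

This is exactly why FVP offers replication partners for Write Back mode — the acknowledged-but-unflushed data needs to live somewhere that survives a single host failure.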
HD tune test without Pernix enabled
Then I activated the VM for Write Back and added it to the cluster (and yes, you can do this on the fly). Note that this VM is stored on NFS storage with SAS drives.
So how were my test results now?
Reads improved about 20x (well, close to it). This test does not show much write information, so let’s try some random access (with FVP enabled):
(with FVP disabled)
Now we can see from the graphs inside vCenter that content is being served from the FVP cluster. These tests only show a fraction of the effect — it would of course be much more visible in production, on a SQL or SharePoint workload for instance. So stay tuned for more!