I recently had a client who was looking for some of the advantages of VMware (HA, vMotion, etc.) but didn't have the required storage infrastructure to get them. We started pricing out SAN storage but quickly realized that traditional FC (Fibre Channel) SANs were, as expected, incredibly expensive. Both Dell and HP came back with numbers that were well beyond the client's budget. During a discussion with an HP storage specialist the "LeftHand" name came up. I had heard of iSCSI in the past in regard to Dell's EqualLogic SANs but had never implemented one. Needless to say we pursued the option and got a LeftHand SAN specialist to come in and talk to us about their appliances. I have to say, I was very impressed. I signed up for the HP LeftHand Academy Technical Training. I felt the course was a good overview of the appliances, and if LeftHand is something you might be interested in I strongly suggest taking it. The class number was HH670P.
Here are some of the notes from our LeftHand Training
LeftHand vs. Traditional SAN
LeftHand is a truly virtual SAN implementation. On other SANs I had worked with, you could literally log into the controller, pick which physical disks you wanted in a volume from each disk cabinet, and then provision the RAID. LeftHand sees all of its storage as one big pool. As you add more nodes to the cluster the amount of storage you have increases, but it's still all one big pool. All the data is striped across all of the nodes in the cluster. With Network RAID you can lose an entire node in a cluster and not even know. Bottom line: LeftHand isn't a traditional SAN.
The really, really, really nice part about LeftHand is that it's an all-inclusive license, meaning you get all of the features for a flat fee. Everything is included; no extra license for snapshots or SAN replication is needed, which makes the package even more appealing.
5 Main points
Physical appliances are seen as clusters. Clusters have a single VIP (Virtual IP) that is used as the iSCSI target address. You can start with one appliance and, as your storage needs increase, simply add more appliances to the cluster, increasing your SAN storage.
LeftHand uses what they call Network RAID to ensure uptime in the case of appliance or hardware failure. The data is striped across all of the nodes in a cluster, and you can configure the cluster for different levels of replication, which is what LeftHand calls Network RAID. For instance, if you have two physical appliances you can configure 2-way replication. In 2-way replication an exact copy of all of your data from one node lives on the second node. In turn this cuts your usable space in half, since a 1-to-1 replication of your data is taking place. On the other hand, 2-way replication when you have 3 or 4 physical nodes sounds very appealing: your data is in two places, and you can very easily lose an entire physical node and the cluster will still be running. Additionally, once you have more than 2 nodes you can configure 3- or 4-way replication, spreading the copies of your data across more physical nodes. (Side note: the LeftHand appliances use RAID-5 on the physical nodes for local disk.)
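The capacity trade-off above is easy to put in numbers. Here's a minimal sketch of the Network RAID capacity math (my own illustration, not an HP tool): usable space is the total pool capacity divided by the replication level, since N-way replication keeps N copies of everything.

```python
# Rough Network RAID capacity math (illustration only, not an HP tool).
# N-way replication keeps N copies of every block, so usable space is
# the raw pool capacity divided by the replication level.

def usable_capacity_tb(nodes, capacity_per_node_tb, replication_level):
    """Usable cluster capacity under N-way Network RAID."""
    if replication_level > nodes:
        raise ValueError("replication level cannot exceed node count")
    return nodes * capacity_per_node_tb / replication_level

# Two 12 TB nodes with 2-way replication: usable space is cut in half.
print(usable_capacity_tb(2, 12, 2))   # 12.0
# Four 12 TB nodes, still 2-way replication: 24 TB usable.
print(usable_capacity_tb(4, 12, 2))   # 24.0
```

The node capacities here are made-up round numbers; the real usable figure would also be reduced by the local RAID-5 overhead on each node.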
If you aren't familiar with thin provisioning you should be; it's becoming a very common term in both the storage and the VMware world. Both the LeftHand SAN and vSphere support thin provisioning, which lets you consume storage on an "as you use it" basis. When we used to provision disks we had to fully provision them, meaning that once I clicked the commit button and created the disk in the storage manager, that disk was gone from my available pool. So if a DBA requested a 1 terabyte disk for one of his DB servers, I had to fully provision the disk up front even if he wasn't planning on filling up that terabyte until 5 years down the road. With thin provisioning I tell the SAN I want a 1 terabyte disk and it presents a 1 terabyte disk to the OS, but it doesn't actually consume the space until it needs to; the SAN just uses space as data is written. The downside, of course, is that since you can overprovision your SAN, you can run into a situation where a thin-provisioned volume tries to use more space and there just isn't any left. That = BAD.
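A toy model makes the behavior (and the trap) concrete. This is my own sketch of the concept, not LeftHand code: creating a volume costs nothing up front, space is consumed only on writes, and an overcommitted pool can run dry even though every volume still looks like it has room.

```python
# Toy model of thin provisioning (concept sketch, not LeftHand code).
# A volume advertises its full size to the host, but the pool only
# consumes physical space as data is actually written.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.provisioned_gb = 0   # sum of advertised volume sizes

    def create_volume(self, size_gb):
        # Creating a volume costs nothing up front -- you can overcommit.
        self.provisioned_gb += size_gb

    def write(self, gb):
        # Physical space is consumed only when data lands on disk.
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool out of space -- the overprovisioning trap")
        self.used_gb += gb

pool = ThinPool(physical_gb=500)
pool.create_volume(1024)          # the DBA's 1 TB disk, presented in full
pool.write(100)                   # only 100 GB actually consumed so far
print(pool.provisioned_gb, pool.used_gb)   # 1024 100
```

Note the pool happily accepted a 1024 GB volume on 500 GB of physical space; the failure only shows up later, when writes finally outrun the real disk.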
Like any other SAN (or at least any good one) you can snapshot, and the all-inclusive license is a big plus here. Additionally, you can do some cool stuff with snapshots and backups. For instance, if you have an NTFS volume you can back up a snapshot of it: using the LeftHand CLI you can snap a copy of the volume, use the Windows built-in iSCSI initiator on your backup server to mount the snapshot, back up the snapped copy, dismount the volume, and finally remove the snapshot from the SAN. Of course, if you are doing VMware you need something that can read VMFS.
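The backup sequence above is easy to script once you see the shape of it. Here's a minimal sketch with every step stubbed out: in real life each stub would shell out to the LeftHand CLI or the Windows iSCSI initiator, and all the names here are my own, not actual CLI syntax.

```python
# Sketch of the snapshot-based backup sequence (step names are my own;
# each stub stands in for a real LeftHand CLI or iSCSI initiator call).

def backup_via_snapshot(volume, steps_log):
    snap = f"{volume}-backup-snap"
    steps_log.append(f"create snapshot {snap}")          # LeftHand CLI
    steps_log.append(f"mount {snap} on backup server")   # iSCSI initiator login
    steps_log.append(f"back up {snap}")                  # backup software
    steps_log.append(f"dismount {snap}")                 # iSCSI initiator logout
    steps_log.append(f"delete snapshot {snap}")          # LeftHand CLI
    return steps_log

log = backup_via_snapshot("ntfs-vol1", [])
for step in log:
    print(step)
```

The ordering is the whole point: the snapshot is created before anything mounts it and deleted only after the backup completes and the volume is dismounted, so the live volume is never touched.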
I won't get too much into this since it sort of speaks for itself, but you can use Remote Copy to asynchronously copy your data to another LeftHand appliance for DR purposes. There are a ton of options here (scheduled, not scheduled, throttling, etc.), so it's worth looking into if you are doing straight backups to a SAN at a remote DR site.
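"Asynchronous" is the key word there, and a tiny model shows what it buys you and what it costs. This is an illustration of async replication in general, not Remote Copy internals: writes are acknowledged locally right away and shipped to the remote site later, so the primary stays fast but the remote copy can lag behind.

```python
# Toy model of asynchronous replication (general concept, not Remote
# Copy internals). Writes ack locally at once and are shipped to the
# remote site later, possibly throttled, so the remote copy can lag.

from collections import deque

class AsyncReplica:
    def __init__(self):
        self.primary = []
        self.remote = []
        self.pending = deque()    # blocks queued for shipment

    def write(self, block):
        self.primary.append(block)   # acknowledged immediately
        self.pending.append(block)   # shipped on the next cycle

    def replicate(self, max_blocks=1):
        # Throttled shipping: move at most max_blocks per cycle.
        for _ in range(min(max_blocks, len(self.pending))):
            self.remote.append(self.pending.popleft())

r = AsyncReplica()
for b in ["a", "b", "c"]:
    r.write(b)
r.replicate(max_blocks=2)
print(r.primary, r.remote)   # ['a', 'b', 'c'] ['a', 'b']
```

That lag is the DR trade-off: if the primary site dies with blocks still in the queue, the remote copy is consistent but slightly behind.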
I'm not going to spend a lot of time talking about this because I plan on having a later post that describes the VSA configuration. However, it is worth noting that LeftHand is the only SAN provider that I know of (save EMC, I believe) that has a Virtual SAN Appliance. Have old unused servers at your colo? If they run ESX, load the VSA and use them with Remote Copy to back up your data. The instructor at the class told me he thought the VSA was about 85% as fast as the physical appliance, provided the hardware it ran on was up to spec. More to come on the VSA!
Comments (Random Notes)
-Dual Gigabit NICs; can handle 10 Gig NICs if you have the infrastructure
-No management port; only in-band management on the iSCSI network
-Every appliance is its own controller; you no longer have controllers with tacked-on disk drawers
-Managed through the LeftHand Management console
-Can swap between full and thin volume provisioning at any time
-The appliances use Managers on each node to form cluster quorum. No quorum = No Cluster