iSCSI


VCP – iSCSI MPIO

One of the common misconceptions about iSCSI is the difference between NIC teaming and MPIO.  MPIO stands for Multipath Input/Output and is a mechanism for managing multiple links to the same SAN.  Think of it this way: if you had redundant paths between an HBA and a storage controller, the HBA would see the same LUN presented to it twice.  MPIO makes sure that the host only sees one copy of the LUN while still making use of both redundant links.  This is far different from NIC teaming or NIC bonding.  iSCSI storage communication is generally one to one, so even if you were using an EtherChannel, the traffic would always take the same path.
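If it helps to picture it, here is a rough conceptual sketch in Python of what "one device, many paths" means. None of this is ESX code and the device identifier is made up; it just shows the same LUN being reported over two paths and collapsed into a single device:

```python
from collections import defaultdict

# Conceptual sketch only (not ESX code): collapse multiple discovered paths to
# the same LUN into one logical device, keyed on the LUN's unique identifier.
# The identifier and portal addresses below are made up.
discovered_paths = [
    {"lun_id": "naa.600000000000000000000001", "via": "vmk1 -> 10.10.10.15:3260"},
    {"lun_id": "naa.600000000000000000000001", "via": "vmk2 -> 10.10.10.15:3260"},
]

devices = defaultdict(list)
for path in discovered_paths:
    devices[path["lun_id"]].append(path["via"])

for lun_id, paths in devices.items():
    # One device, multiple paths: MPIO can fail over or balance across them.
    print(f"{lun_id}: {len(paths)} path(s) -> {paths}")
```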

So let’s configure MPIO for our iSCSI SAN.  I’m using a LeftHand VSA as my virtual iSCSI storage.  Currently, we have a single VMkernel port on each host that allows us to talk to the storage controller.  Here’s what the IPs look like:

Host1 (vmk1) – 10.10.10.2
Host2 (vmk1) – 10.10.10.3
Storage Controller – 10.10.10.15

There are a couple of items that need to be configured before we turn on MPIO.  Let’s get those out of the way.

NIC Configuration
Since the NICs need to multipath, they can’t be part of the same team.  That is, we need to create two VMkernel ports and associate each one with its own NIC.  Since we are using distributed switches, our best bet is to create a second storage port-group and associate one vmnic (uplink) with each port-group.  So let’s do that first.  Log into vCenter, go to Home, Inventory, Networking.  Right click on your DVS and select the ‘New Port Group’ option…

image 
As we did before, create a new port-group for the secondary storage vmnic…

image

Now we should have two storage port-groups…

image 
Now, let’s change the settings so that only one uplink is used for each port-group.  Right click on the first port-group and select edit settings.  Select the NIC Teaming settings and remove all but one of the uplinks from the ‘Active Uplinks’ list.  In my case, we have two uplinks (dvuplink1 and dvuplink2).  When you are done, you should only have one in the ‘Active Uplinks’ area as shown below…

image 
Make the same changes on the second storage port-group, but this time leave only the second uplink in the ‘Active Uplinks’ container…

image 
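The end state we are after is simple enough to express as a quick check: each storage port-group has exactly one active uplink, and the two port-groups don’t share one. The port-group names in this sketch are placeholders for whatever you called yours:

```python
# Quick sanity check of the teaming layout (illustrative only; the port-group
# names are placeholders for whatever you called yours).
teaming = {
    "dv-Storage-A": ["dvuplink1"],  # active uplinks on the first storage port-group
    "dv-Storage-B": ["dvuplink2"],  # active uplinks on the second storage port-group
}

assert all(len(uplinks) == 1 for uplinks in teaming.values()), \
    "each storage port-group should have exactly one active uplink"
assert len({u for ups in teaming.values() for u in ups}) == len(teaming), \
    "the storage port-groups should not share an active uplink"
print("teaming layout looks right for iSCSI multipathing")
```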
At this point we need to configure our secondary storage VMkernel interfaces that will make up the second path in our multipath configuration.  To do this, go back to the host configuration, select the distributed switch, and click on the ‘Manage Virtual Adapters’ link…

image
In the next window, click the ‘add’ button…

image 
Leave the default option of ‘New Virtual Adapter’ and click next…

image 
Hit next on the next screen leaving the default option (there is only one)…

image
On the next screen select your secondary port-group out of the drop down and click next…

image 
These are the IP addresses we’ll be using for the rest of the config…

Host1 (vmk1) – 10.10.10.2
Host1 (vmk2) – 10.10.10.12
Host2 (vmk1) – 10.10.10.3
Host2 (vmk2) – 10.10.10.13
Storage Controller – 10.10.10.15

So enter the appropriate IP address for your new VMkernel interface and click next…

image 
Review the final layout and click Finish…

image 
Perform the same steps on your second host as well.  At this point, we should be all set to enable MPIO…
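Before enabling MPIO, it’s worth sanity-checking the addressing plan. Here’s a tiny, generic Python check (nothing in it is VMware or LeftHand specific, and the /24 mask is an assumption based on the lab addresses above):

```python
import ipaddress

# Generic check of the addressing plan (nothing VMware or LeftHand specific).
# The /24 mask is an assumption based on the lab addresses above.
storage_net = ipaddress.ip_network("10.10.10.0/24")
addresses = {
    "host1-vmk1": "10.10.10.2",
    "host1-vmk2": "10.10.10.12",
    "host2-vmk1": "10.10.10.3",
    "host2-vmk2": "10.10.10.13",
    "controller": "10.10.10.15",
}

assert len(set(addresses.values())) == len(addresses), "duplicate IP in the plan"
for name, ip in addresses.items():
    assert ipaddress.ip_address(ip) in storage_net, f"{name} is outside {storage_net}"
print("addressing plan checks out")
```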

MPIO Configuration
The worst part of the config is now over.  The MPIO part is easy.  Let’s take a look at our current storage adapters…

image 
Note how we have one iSCSI device registered and it’s showing as one device with one path.  Now click the ‘Properties…’ link in the details panel and on the following window select the ‘Network Configuration’ tab…

image 
Click the Add button…

image 
Now select the first adapter and click OK.  It should now show up in the iSCSI initiator properties window.  Click Add again, select the second adapter, and click OK…

image
Both adapters should show up and the policy should show as compliant.  Click the Close button to apply the changes.  It will prompt you for a rescan; click yes…

image
When the scan finishes, your view should now look like this…

image  
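As a sanity check, the path count you should expect is easy to reason about. This little sketch is conceptual only (it’s just arithmetic, not an ESX API):

```python
# Back-of-the-envelope path math (conceptual, not an ESX API): each bound
# VMkernel port builds its own session to each target portal, so
# paths per device = bound vmks x target portals.
bound_vmks = ["vmk1", "vmk2"]
target_portals = ["10.10.10.15:3260"]  # the storage controller / cluster VIP

expected_paths = len(bound_vmks) * len(target_portals)
print(f"expecting {expected_paths} paths per device")  # 2 in this lab
```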
Notice how we now have two paths.  If you right click on the device name, you can select the ‘Manage Paths’ option…

image
This brings up the manage paths control screen…

image

This is where we can change the path selection policy (the default is Fixed) as well as enable and disable paths.  This is the preferred way to take a path down for maintenance…

image
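If you have a lot of devices, changing the path selection policy by hand gets old fast. As a rough sketch (assuming the pyVmomi library; the vCenter name, host name, and credentials below are placeholders), something like this should flip every LUN on a host to Round Robin. Treat it as a starting point and test it in a lab first:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders - point these at your own vCenter and host.
ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    host = si.RetrieveContent().searchIndex.FindByDnsName(
        dnsName="host1.lab.local", vmSearch=False)
    storage = host.configManager.storageSystem

    # Flip every LUN on the host to Round Robin (filter this list in real life).
    for lun in storage.storageDeviceInfo.multipathInfo.lun:
        policy = vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR")
        storage.SetMultipathLunPolicy(lunId=lun.id, policy=policy)
        print("Set Round Robin on", lun.id)
finally:
    Disconnect(si)
```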


LeftHand SAN

I recently had a client who was looking for some of the advantages of VMware (HA, VMotion, etc.) but didn’t have the required storage infrastructure to do so.  We started pricing out SAN storage but quickly realized that the traditional FC (Fibre Channel) SANs were, as expected, incredibly expensive.  Both Dell and HP came back with numbers that were well beyond the client’s budget.  During a discussion with an HP storage specialist the “LeftHand” name came up.  I had heard of iSCSI in the past in regards to Dell’s EqualLogic SANs but had never implemented one.  Needless to say, we pursued the option and got a LeftHand SAN specialist to come in and talk to us about their appliances.  I have to say, I was very impressed.  I signed up for the HP LeftHand Academy Technical Training.  I felt the course was a good overview of the appliances, and if LeftHand is something you might be interested in, I strongly suggest taking it.  The class number was HH670P.

Here are some of the notes from our LeftHand training:

LeftHand vs. Traditional SAN
LeftHand is a truly virtual SAN implementation.  On other SANs I had worked with, you could literally log into the controller, pick which physical disks you wanted in a volume from each disk cabinet, and then provision the RAID.  LeftHand sees all of its storage as one big pool.  As you add more nodes to the cluster the amount of storage you have increases, but it’s still all one big pool.  All of the data is striped across all of the nodes in the cluster.  With Network RAID you can lose an entire node in a cluster and not even know it.  Bottom line: LeftHand isn’t a traditional SAN.

Licensing
The really, really, really nice part about LeftHand is that it’s an all-inclusive license, meaning that you get all of the features for a flat fee.  Everything is included; no extra licenses for snapshots or SAN replication are needed, which makes the package even more appealing.

5 Main points
-Storage clustering
Physical appliances are grouped into clusters.  A cluster has a single VIP (Virtual IP) that is used as the iSCSI target address.  You can start with one appliance and, as your storage needs increase, simply add more appliances to the cluster, increasing your SAN storage.

-Network RAID
LeftHand uses what they call Network RAID to ensure uptime in the case of an appliance or hardware failure.  The data is striped across all of the nodes in a cluster, and you can configure the cluster for different levels of replication, which is what LeftHand calls Network RAID.  For instance, if you have two physical appliances you can configure 2-way replication: an exact copy of all of your data from one node is kept on the second node.  In turn this cuts your usable space in half, since a 1-to-1 copy of your data is being stored.  On the other hand, 2-way replication when you have 3 or 4 physical nodes sounds very appealing; your data is in two places and you can lose an entire physical node and the cluster will still be running.  Additionally, once you have more than 2 nodes you can configure 3- or 4-way replication, spreading the copies of your data across more physical nodes.  (Side note: the LeftHand appliances use RAID-5 on the local disks inside each physical node.)
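The capacity trade-off is easy to ballpark. Here’s a rough calculation in Python (the node count, disk sizes, and the simplified one-disk-of-parity RAID-5 overhead are illustrative assumptions, not HP figures):

```python
# Rough Network RAID capacity math (the node count, disk sizes, and simplified
# one-disk RAID-5 overhead are illustrative assumptions, not HP figures).
def usable_tb(nodes, disks_per_node, disk_tb, replication_level):
    raw_per_node = disks_per_node * disk_tb
    after_raid5 = raw_per_node * (disks_per_node - 1) / disks_per_node  # ~1 disk of parity
    return nodes * after_raid5 / replication_level

# Example: four nodes, 12 x 1 TB disks each, 2-way replication.
print(f"{usable_tb(4, 12, 1.0, 2):.1f} TB usable")  # ~22.0 TB
```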

-Thin Provisioning
If you aren’t familiar with thin provisioning, you should be.  It’s becoming a very common term in both the storage and the VMware world, and both the LeftHand SAN and vSphere support it.  Thin provisioning allows you to use storage on an “as you use it” basis.  When we used to provision disks we had to fully provision them, meaning that once I clicked the commit button and created the disk in the storage manager, that disk was gone from my available pool.  So if a DBA requested a 1 terabyte disk for one of his DB servers, I had to fully provision the disk up front even if they weren’t planning on filling up that terabyte until 5 years down the road.  With thin provisioning I tell the SAN that I want a 1 terabyte disk and it presents a 1 terabyte disk to the OS, but it doesn’t actually consume the space until it needs to; the SAN just uses space as data is written.  The downside, of course, is that since you can overprovision your SAN, you can run into a situation where a thin provisioned volume tries to use more space and there just isn’t any left.  That = BAD
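A toy model makes the difference (and the risk) obvious. This is purely conceptual code, not anything from the LeftHand or vSphere APIs:

```python
# Toy model of thin vs. full provisioning (purely conceptual, not a real API).
class Pool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def allocate(self, gb):
        if self.used_gb + gb > self.capacity_gb:
            raise RuntimeError("pool is out of space")  # the 'That = BAD' scenario
        self.used_gb += gb

pool = Pool(capacity_gb=1000)

# Full provisioning would pull the whole terabyte out of the pool up front:
# pool.allocate(1024)  # -> fails immediately, only 1000 GB in the pool

# Thin provisioning presents 1 TB but only draws from the pool as data lands.
presented_gb = 1024
for write_gb in (50, 200, 100):  # application writes trickling in over time
    pool.allocate(write_gb)
print(f"presented {presented_gb} GB, pool actually consumed {pool.used_gb} GB")
```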

-Snapshots
Like any other SAN (or at least any good one) you can take snapshots, and the all-inclusive license is a big plus here.  Additionally, you can do some cool stuff with snapshots and backups.  For instance, if you have an NTFS volume you can back up the snapshot: using the LeftHand CLI you can snap a copy of the volume, use the Windows built-in iSCSI initiator on your backup server to mount the snapshot, back up the snapped copy, dismount the volume, and finally remove the snapshot from the SAN.  Of course, if you are doing VMware you need something that can read VMFS.
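That backup flow reads nicely as a script skeleton. Every function in this sketch is a hypothetical placeholder; the actual LeftHand CLI commands and backup tooling aren’t shown in this post, so you’d wire those in yourself:

```python
# Skeleton of the snapshot-then-backup flow described above. Every function is
# a hypothetical placeholder; wire them up to the LeftHand CLI, the Windows
# iSCSI initiator, and your backup tool - the real commands aren't shown here.
def snapshot_volume(volume):            # placeholder: snap the volume on the SAN
    return f"{volume}-snap"

def mount_on_backup_server(snapshot):   # placeholder: mount via the iSCSI initiator
    return "E:\\"

def run_backup(path):                   # placeholder: kick off the backup job
    print(f"backing up {path}")

def dismount(snapshot):                 # placeholder: disconnect the snapshot
    pass

def delete_snapshot(snapshot):          # placeholder: remove it from the SAN
    pass

snap = snapshot_volume("FileServerData")
drive = mount_on_backup_server(snap)
run_backup(drive)
dismount(snap)
delete_snapshot(snap)
```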

-Remote copy
I won’t get too much into this since it sort of speaks for itself, but you can use Remote Copy to asynchronously copy your data to another LeftHand appliance for DR purposes.  There are a ton of options here (scheduled, not scheduled, throttling, etc.), so it’s worth looking into if you are doing straight backups to a SAN at a remote DR site.

The VSA
I’m not going to spend a lot of time talking about this because I plan on having a later post that describes the VSA configuration.  However, it is worth noting that LeftHand is the only SAN provider that I know of (save EMC, I believe) that has a Virtual SAN Appliance.  Have old unused servers at your colo?  If they run ESX, load the VSA and use them with Remote Copy to back up your data.  The instructor at the class told me he thought the VSA was about 85% as fast as the physical appliance, provided the hardware it runs on is up to spec.  More to come on the VSA!

Comments (Random Notes)
-Dual Gigabit NICs; 10 Gig NICs are supported if you have the infrastructure
-No dedicated management port.  Only in-band management on the iSCSI network
-Every appliance is its own controller.  You no longer have controllers and then tack on disk drawers
-Managed through the LeftHand management console
-Can swap between full and thin volume provisioning at any time
-The appliances use Managers on each node to form cluster quorum.  No quorum = No Cluster (see the quick sketch below)
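That last point is just majority voting. A trivial sketch (conceptual only; SAN/iQ’s actual manager behavior has more to it):

```python
# Simple majority-quorum check (conceptual; SAN/iQ's actual manager behavior
# has more to it than this).
def has_quorum(managers_total, managers_up):
    return managers_up > managers_total / 2

print(has_quorum(3, 2))  # True  - one manager down, the cluster keeps running
print(has_quorum(3, 1))  # False - no quorum, no cluster
```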
