In this part we will complete the ScaleIO installation in a VMware ESXi environment. This is the second part of the ScaleIO installation series. Part 1 can be found here: http://www.vmdaemon.com/scaleio-installation-vmware-esxi-5-5-part-1/
In part 1 we configured the MDM cluster, added the license, and provisioned new storage to the ScaleIO VMs from the local disks in each ESXi server. In this part we will add SDSs, map volumes to SDCs, and finally map these volumes to our iSCSI initiators.
We should now define the SDSs that will be used in our environment. In my case, I will use the same 3 ScaleIO VMs I used for the MDM cluster setup. The below commands are to be run from the primary node:
scli --add_sds --sds_ip <SDS1_IP> --protection_domain_name <protection_domain> --storage_pool_name SP1,SP2 --device_name /dev/sdb,/dev/sdc --sds_name sds_1 --force_clean
#Output should be: Successfully created SDS sds_1. Object ID <Object_ID>
scli --add_sds --sds_ip <SDS2_IP> --protection_domain_name <protection_domain> --storage_pool_name SP1,SP2 --device_name /dev/sdb,/dev/sdc --sds_name sds_2 --force_clean
#Output should be: Successfully created SDS sds_2. Object ID <Object_ID>
scli --add_sds --sds_ip <SDS3_IP> --protection_domain_name <protection_domain> --storage_pool_name SP1,SP2 --device_name /dev/sdb,/dev/sdc --sds_name sds_3 --force_clean
#Output should be: Successfully created SDS sds_3. Object ID <Object_ID>
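To verify that all three SDSs were registered, you can list them from the primary node (the exact output format varies by ScaleIO version):
scli --query_all_sds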
Next, query the storage pool to check its capacity:
scli --query_storage_pool --protection_domain_name production --storage_pool_name HDD --show_volumes 1
The output will include total capacity, unused capacity and spare capacity as shown below:
From the above we can see that the unused capacity is 669.2 GB, while the spare capacity is 74.6 GB (10% of the total capacity).
This is the default spare capacity for ScaleIO; however, the recommended value is at least 1/n of the total capacity, where n is the number of SDS servers (assuming the local disk sizes are the same across the configured SDSs). In my case, since I have 3 SDS servers configured, I should modify the policy to be 33% for each of the configured storage pools instead of the default 10%, as shown below:
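As a sketch, the spare policy can be changed per storage pool with --modify_spare_policy (I am taking the flag names from the ScaleIO CLI reference; verify them against your version and substitute your own pool names):
scli --modify_spare_policy --protection_domain_name production --storage_pool_name HDD --spare_percentage 33
scli --modify_spare_policy --protection_domain_name production --storage_pool_name SSD --spare_percentage 33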
The coolest thing about ScaleIO is the ultimate data availability. We talked about spare capacity and how it should be configured; now note that the unused capacity will be used to create MIRRORed volumes, so only half of this capacity is eligible for volume creation.
Let’s take the SSD storage pool as an example: here the unused capacity is 296.4 GB. Since volumes are mirrored, the actual usable capacity is about 148 GB. Taking into consideration that the basic allocation unit is 8 GB, a requested size is rounded up to a multiple of 8, so trying to create a volume of 148 GB will actually attempt to create a 152 GB volume (304 GB with the mirror), hence the command will fail. The maximum size that can be created in this case is 144 GB (the screenshot below shows the rounded-up volume):
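For reference, a minimal sketch of that volume creation (the volume name vol01 is my placeholder; verify the --add_volume flags against your version's CLI reference):
scli --add_volume --protection_domain_name production --storage_pool_name SSD --size_gb 144 --volume_name vol01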
The screenshot below shows how to map the volume either to a specific SDC or, using the --all_sdcs parameter, to all SDCs returned by the --query_all_sdc output.
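The equivalent commands look roughly like this (vol01 and the IP are placeholders; check the scli help for your version):
scli --map_volume_to_sdc --volume_name vol01 --sdc_ip <SDC_IP>
#Or map to all SDCs in one shot:
scli --map_volume_to_sdc --volume_name vol01 --all_sdcs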
The next step is to add iSCSI initiators and map the volumes to them. In my case these will be the ESXi hosts in my environment. You can use either the iSCSI IQN, initiator_name, or initiator_number to add your initiators. Since I do not have any initiators configured yet in my ScaleIO setup, I will use the IQN.
To get the iSCSI IQN from ESXi, you can use either the vSphere Client, the Web Client, or the CLI.
For the CLI, run this command: esxcli iscsi adapter list. For the GUI, check the iSCSI software adapter properties as shown below:
To add the iSCSI initiators:
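Roughly, assuming the --add_scsi_initiator command from the ScaleIO CLI reference (the IQN and the name esx1_initiator are placeholders from my lab, and the flag names may differ by version):
scli --add_scsi_initiator --iqn iqn.1998-01.com.vmware:esx1-xxxxxxxx --initiator_name esx1_initiator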
Now, let us map our volumes to the ESXi initiators:
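Again as a sketch, assuming a --map_volume_to_scsi_initiator command that mirrors the SDC mapping above (names are placeholders):
scli --map_volume_to_scsi_initiator --volume_name vol01 --initiator_name esx1_initiator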
We are almost done here. The next step is to use the SDCs as iSCSI targets for our ESXi hosts. You should point to the target of the SDC that is local to your ESXi host.
- For esx1, which hosts the MDM Primary node, the targets are the MDM Primary node and the MDM Secondary node.
- For esx2, the targets are the MDM Secondary node and the Tie-Breaker node.
- For esx3, the targets are the Tie-Breaker node and the MDM Primary node.
At this stage, we should rescan the storage adapters to add the new ScaleIO shared devices:
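From the ESXi shell, the rescan is a standard esxcli call (the GUI rescan does the same thing):
esxcli storage core adapter rescan --all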
The last step is to create the datastore and mount the devices to be ready for use.
You should now be able to launch the dashboard located at /opt/scaleio/ecs/mdm/bin/dashboard.jar.
Open dashboard.jar and connect to the Virtual IP address as shown below:
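Since the dashboard is a jar file, it can typically be launched with a local Java runtime (assuming Java is installed on the machine you run it from):
java -jar dashboard.jar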
ScaleIO Recommendations and Best practices:
- You should use at least 2 NICs for redundancy. 10 GbE NICs are recommended for better performance, especially when using SSDs.
- ScaleIO keeps 2 copies of all data (mirrored); however, in case of a failure, ScaleIO will rebuild the data in order to return to a redundant state, and it will use spare capacity to do so. It is therefore really important to follow the best practice when configuring the spare capacity: at least 1/n of capacity, where n is the number of SDSs.
- The disks presented to the ScaleIO VMs should be formatted as Thick Provision Eager Zeroed.
- Running the scli commands requires the MDM package to be installed. When you run scli commands on the Primary node, there is no need to add --mdm_ip to each command; however, running them from another node, such as the Secondary node, requires the --mdm_ip switch in all scli commands.
- The unused space after creating the volumes can be used for snapshots, as snapshots use thinly provisioned disks, so we have ZERO % waste of capacity.
- ScaleIO disks support 2 VAAI primitives (ATS and Zero Blocks/Write Same) as shown below:
- The default and recommended NMP path selection policy (PSP) for ScaleIO disks is Fixed, so you should point the active path to the SDC node that is local to the respective ESXi host (see the sketch after this list).
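A sketch of setting this from the ESXi shell (the device and path identifiers are placeholders; substitute the values from your own host):
#Set the PSP for the ScaleIO device to Fixed:
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_FIXED
#Point the preferred (active) path at the local SDC target:
esxcli storage nmp psp fixed deviceconfig set --device naa.xxxxxxxxxxxxxxxx --path vmhba33:C0:T0:L0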