VSAN Design and Sizing

High Level Considerations

  1. Hardware: Ways to build a VSAN cluster
    1. VxRail: HCI appliance
    2. VSAN Ready Nodes: Certified hardware form factor (Recommended)
    3. Build your own from certified components: follow the VMware Compatibility Guide for
      1. IO Controller
      2. HDD
      3. SSD
      4. Driver version
      5. Firmware version
  2. Software:
    1. Run the latest versions of ESXi and vCenter. (VMware continuously fixes issues encountered by customers.)
  3. Balanced Configurations:
    1. Identical configuration across all cluster members.
    2. Unbalanced configuration (e.g. some hosts not contributing storage to the vsanDatastore, or different IO controllers or disks)
      1. Harder to support and troubleshoot if a problem is encountered
  4. Design for growth
    1. Supports both scale-up and scale-out: scale so that there is an adequate amount of capacity and cache for workloads
    2. Leave headroom for growth
      1. Keep additional disk slots free
      2. Oversize cache devices up front
  5. Sizing : Capacity Maintenance and Availability
    1. If there is a failure (device/host) or a host is in maintenance mode, VSAN attempts to rebuild the components from the failed device/host on the remaining hosts in the cluster.
      1. 2-node cluster with witness appliance: rebuilding is not possible; there is no spare host (fault domain) to rebuild onto.
      2. 3-node cluster: rebuilding is not possible; there is no spare host (fault domain) to rebuild onto.
      3. 4-node cluster: rebuilding is possible.
    2. Number of Failures to Tolerate
      1. Consider the space required for mirror copies (see the sketch after this list).
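
A minimal sketch of the math above (the "n+1 replicas" and "2n+1 hosts" rules are from the vSAN documentation; the function names are illustrative, not part of any VMware tool):

```python
# Sketch: hosts and raw space implied by NumberOfFailuresToTolerate (FTT).

def hosts_required(ftt: int, rebuild_headroom: bool = True) -> int:
    """Mirroring needs 2*FTT + 1 hosts; one extra host provides a rebuild target."""
    return 2 * ftt + 1 + (1 if rebuild_headroom else 0)

def mirrored_capacity_gb(vm_consumed_gb: float, ftt: int) -> float:
    """Tolerating FTT failures means FTT + 1 full copies of the data."""
    return vm_consumed_gb * (ftt + 1)

# Example: 1 TB of consumed VM data, FTT=1
print(hosts_required(1))              # 4 (3 hosts for quorum + 1 rebuild target)
print(mirrored_capacity_gb(1024, 1))  # 2048.0 GB
```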

All Flash vs Hybrid

  1. Cache size: 10% of anticipated consumed capacity (i.e. capacity before FTT copies are considered) is recommended for both all-flash (AFD) and hybrid configurations; a worked example follows this list.
    1. How Cache is used
      1. Hybrid: 70% for Reads, 30% for Writes
      2. AFD: 100% for Writes.
  2. All Flash Considerations
      1. A 1 Gb network is not supported; 10 Gb is required.
    2. Flash read cache reservation is not used with all flash configurations
      3. Flash devices must be explicitly marked as capacity devices before they can be used in the capacity tier.
    4. Endurance becomes important consideration for both cache and capacity layers.
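
A worked example of the 10% rule and the hybrid 70/30 split (a sketch using the guideline above; the 20 TB figure is an arbitrary example):

```python
# Sketch of the 10% cache-sizing guideline described above.

def cache_size_gb(consumed_capacity_gb: float, ratio: float = 0.10) -> float:
    """Cache target: 10% of anticipated consumed capacity, before FTT copies."""
    return consumed_capacity_gb * ratio

consumed = 20_000                # e.g. 20 TB of anticipated consumed VM data
cache = cache_size_gb(consumed)  # 2000.0 GB of cache across the cluster
print(f"total cache: {cache} GB")
print(f"hybrid read cache (70%): {cache * 0.7} GB")
print(f"hybrid write buffer (30%): {cache * 0.3} GB")
# All-flash: the full cache tier acts as a write buffer (reads come from capacity flash).
```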

VSAN Limits

  1. ESXi Hosts
    1. Minimum:
      1. 2 ESXi hosts with a witness appliance, or 3 ESXi hosts: in the event of a failure, VSAN cannot rebuild components on another host
      2. 4+ ESXi Hosts: Recommended. VSAN can rebuild components on another host
    2. Maximum: 64 hosts. (To run 64 nodes, certain advanced settings must be set.)
  2. VMs
    1. Maximum VMs allowed: 200 VMs per ESXi host and 6400 VMs per cluster (VSAN 6.0)
    2. Maximum VMs protected by vSphere HA: 6400 VMs (VSAN 6.0). Earlier versions had a limit of 2048 (a per-datastore limit)
  3. Disks and Disk groups
    1. Disk group: exactly 1 cache device and 1-7 capacity devices
    2. Host: 1-5 Disk groups
    3. Caution: VSAN does not support mixing of all flash disk groups and hybrid disk groups in a cluster.
  4. Components:
    1. Each stripe of an object is a component.
    2. Maximum per host: 9000
    3. Stretched cluster: 45000 components
    4. Largest component size: 255 GB
  5. Storage Policy
    1. Stripe Width per Object: 1-12.
    2. NumberOfFailuresToTolerate: 1-3. To accommodate ‘n’ failures there needs to be ‘2n+1’ fault domains/hosts in the cluster.
    3. FlashReadCacheReservation: Maximum 100%
      1. Not applicable on AFD
    4. ObjectSpaceReservation: Maximum 100%
    5. IOPLimitPerObject: 2147483647
  6. VMDK Size
    1. VSAN 6.0: 62TB ( Component size: 255GB)
  7. Design Considerations
    1. Consider enabling vSphere HA
    2. Ensure there are enough devices in capacity layer to accommodate a desired stripe width requirement
    3. Ensure there are enough hosts / fault domains to support FTT
    4. Consider the component count (a rough estimate is sketched after this list)
    5. Keep in mind that VMDKs, even 62TB VMDKs, will initially be thinly provisioned by default, so be prepared for future growth in capacity.
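
A rough component-count estimate for a single VMDK object (a sketch, not an official formula; in particular the witness count varies with the layout, so it is shown as a lower bound of 1):

```python
import math

# Sketch: estimate components consumed by one VMDK object.

MAX_COMPONENT_GB = 255  # largest component size, per the limits above

def components_per_vmdk(vmdk_gb: float, stripe_width: int, ftt: int) -> int:
    # Each replica is split into at least stripe_width components, and also
    # into enough components to respect the 255GB maximum component size.
    per_replica = max(stripe_width, math.ceil(vmdk_gb / MAX_COMPONENT_GB))
    replicas = ftt + 1
    witnesses = 1  # lower bound; vSAN may add more witnesses for quorum
    return per_replica * replicas + witnesses

# 2 TB VMDK, stripe width 2, FTT=1 -> max(2, ceil(2048/255)=9) = 9 per replica
print(components_per_vmdk(2048, 2, 1))  # 19 components (9*2 + 1 witness)
```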

Network Design Considerations

  1. Network Interconnect
    1. Hybrid:
      1. 1 Gbps: dedicated to VSAN traffic
      2. 10 Gbps: if shared with other traffic types, use NIOC to guarantee bandwidth for vSAN traffic
    2. All flash:
      1. 10 Gbps: if shared with other traffic types, use NIOC to guarantee bandwidth for vSAN traffic
    3. Other considerations
      1. Replication and Communication traffic between ESXi hosts
      2. # of Replicas per VM
      3. I/O intensive applications running in VM
  2. NIC teaming: Recommended Active/Standby
  3. Jumbo Frames
    1. If Jumbo frames are already enabled in the network infrastructure: Recommended
    2. If not enabled: Not recommended (the operational costs outweigh the limited CPU and performance benefits)
  4. Multicast / Unicast – depends on VSAN version
    1. Prior to VSAN 6.6: Multicast is required.
    2. VSAN 6.6 and later: VSAN traffic uses unicast
  5. vSAN Network Design Guide

Storage Design Considerations

  1. Flash Devices
    1. Client Cache
      1. Relevant in both AFD and hybrid configurations
      2. Accelerates read performance.
      3. Leverages DRAM local to the VM. The amount of RAM allocated is 0.4% of host memory, up to 1GB per host.
      4. Complementary to CBRC. CBRC is limited to the read-only replica; Client Cache enables caching of the read-only replica and of VMDKs as well.
      5. CBRC allows common cached blocks to be served to virtual desktops in microseconds instead of milliseconds.
    2. Read Cache
      1. Relevant on Hybrid Configurations
      2. VSAN divides up the caching of data blocks evenly between the replica copies.
      3. Read operation.
        1. If the block being read is not in the first replica's cache, the directory service is consulted to find whether the block is in another host's cache
        2. If found, the data is retrieved from there. If not found (a read miss), it is fetched from magnetic disk.
    3. Write Cache
      1. Relevant on both all flash and Hybrid configuration.
      2. Once a write is initiated by the application running inside of the Guest OS, the write is duplicated to the write cache on the hosts which contain replica copies of the storage objects.
  2. Flash Endurance Considerations
    1. Endurance specification to be used: Terabytes Written (TBW)
    2. For both cache and capacity devices
  3. Flash Cache Sizing
    1. Hybrid Configuration
      1. General Recommendation: 10% of the expected consumed storage capacity (for all VMs) before NFTT is considered.
        1. Note: If VM size is 100GB and expected usage is 20GB then 20GB is the expected consumed storage for the VM.
      2. If VM snapshots are used heavily, increase the cache:capacity ratio to 15%
      3. The objective is to keep the Active Working Set in cache as much as possible for best performance.
      4. FlashReadCacheReservation policy setting is only relevant on hybrid clusters
      5. Design for growth: Consider designing with a larger cache configuration that will allow for seamless future capacity growth
    2. All Flash Configurations
      1. Prior to 6.5: 10% of the expected consumed storage capacity
      2. From 6.5: Performance based. Link: https://blogs.vmware.com/virtualblocks/2017/01/18/designing-vsan-disk-groups-cache-ratio-revisited/
      3. Best practice: check the VCG and ensure that the flash devices
        1. are supported, and
        2. provide the endurance characteristics required for the vSAN design.
  4. Capacity Sizing Considerations
    1. Common Considerations for both Hybrid & All Flash
      1. Number of VMs
      2. Number of snapshots taken concurrently and Snapshot size
        1. Would snapshots capture VM Memory also? If yes, consider the space.
      3. Number of replica copies that will be created; NumberOfFailuresToTolerate
      4. Thin Provisioning Over Commitment ( Object Space Reservation)
    2. Consideration Specific to All Flash
      1. Endurance and Performance becomes a consideration for capacity layer in all flash configuration
  5. Capacity Sizing
    1. NumberOfFailuresToTolerate: number of replicas created = NFTT + 1.
    2. Formatting overhead: all disks in disk groups are formatted with the on-disk file system, which consumes some space
      1. V1: 750MB per disk
      2. V2: 1% of physical disk capacity
      3. V3: 1% of physical disk capacity + deduplication metadata.
    3. Checksum Overhead; 5 Bytes for every 4KB data
      1. Without deduplication: 0.12% of raw capacity
      2. With deduplication: 1.2%
    4. Recommended free capacity (slack space): 30%. Design to avoid running out of capacity; a sizing sketch combining these overheads appears after this list.
      1. Capacity for failure: VSAN attempts to rebuild the missing/failed components using the remaining capacity in the cluster, whether a capacity device or a cache device fails.
        1. Capacity device fails: components are rebuilt in the same disk group or a different disk group.
        2. Cache device fails: the entire disk group goes offline and its components are rebuilt elsewhere in the cluster.
      2. VSAN begins automatic rebalancing when a disk reaches the 80% full threshold.
    5. Negligible Capacity Overheads
      1. Component Overhead: Every component created consumes space for metadata
        1. VSAN 5.5 (on-disk format v1): 2MB per component
        2. VSAN 6.0 (on-disk format v2): 4MB per component
      2. Witness Overhead: A witness is created for every component. Witness consumes 2MB of space (for metadata) on vSAN Datastore.
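
Pulling the overheads in this section together, a sizing sketch (assumes on-disk format v2/v3 without deduplication; for real designs use VMware's sizing tools):

```python
# Sketch of the capacity-sizing arithmetic above.

def raw_capacity_needed_gb(consumed_gb: float, ftt: int,
                           fmt_overhead: float = 0.01,        # on-disk format: ~1%
                           checksum_overhead: float = 0.0012, # 5B per 4KB: ~0.12%
                           slack: float = 0.30) -> float:     # 30% free capacity
    replicated = consumed_gb * (ftt + 1)            # NFTT+1 replicas
    with_overheads = replicated * (1 + fmt_overhead + checksum_overhead)
    return with_overheads / (1 - slack)

# 10 TB of consumed VM data, FTT=1:
print(f"{raw_capacity_needed_gb(10_240, 1):,.0f} GB raw")  # ~29,585 GB
```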
  6. Scale Up Capacity
    1. Maintain the required cache:capacity ratio; provide a higher cache:capacity ratio initially
    2. New Disk Group: Scale up both Cache and Capacity
  7. Disk Groups
    1. Disk group assigns a cache device to provide cache for a group of capacity devices.
    2. If the desired cache:capacity ratio is high, multiple disk groups must be created, because there can be only one cache device per disk group.
    3. Large disk groups vs small disk groups
      1. Large disk groups -> lower cache:capacity ratio; lower cost
      2. Small disk groups -> higher cache:capacity ratio; higher cost
    4. Designing Disk group
      1. Disk group =~ storage failure domain (a cache device failure takes the whole group offline)
      2. Large disk groups vs small disk groups: multiple small disk groups are recommended (the ratio tradeoff is sketched below).
        1. Large disk groups: when there is a failure, rebuilding the components takes longer.
        2. Small disk groups: require more flash devices, IO controllers and disk slots.
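
To see the ratio tradeoff concretely, a sketch with hypothetical drive sizes (1.2TB capacity drives, 400GB cache drives; the numbers are examples, not recommendations):

```python
# Illustration of the disk-group tradeoff.

CAPACITY_DRIVE_GB = 1200
CACHE_DRIVE_GB = 400

def layout(num_groups: int, capacity_drives_total: int = 8) -> None:
    cap = capacity_drives_total * CAPACITY_DRIVE_GB
    cache = num_groups * CACHE_DRIVE_GB   # exactly one cache device per disk group
    print(f"{num_groups} group(s): cache:capacity = {cache / cap:.1%}, "
          f"cache devices needed = {num_groups}")

layout(1)  # one large group: lower ratio, cheaper, bigger failure domain
layout(2)  # two small groups: higher ratio, more flash, smaller failure domain
```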
  8. Drive Capacity, Component Size and VMDK Size
    1. The maximum component size on VSAN is 255GB.
    2. A large VMDK object may be split into multiple components across multiple disks to accommodate large VMDK size. However when vSAN splits an object in this way, multiple components may reside on the same physical disk, a configuration that is not allowed when NumberOfDiskStripesPerObject is specified in the policy.
    3. Although vSAN might have the aggregate space available in the cluster to accommodate a large VMDK object, it depends on where that space is available. For example, in a 3-node cluster with 200TB of free space, one could conceivably believe this should accommodate a 62TB VMDK with NumberOfFailuresToTolerate=1 (2 x 62TB = 124TB). However, if one host has 100TB free and hosts two and three have 50TB free each, the cluster will not be able to accommodate the request.
  9. PCIe Flash devices vs SSDs vs NVMe
    1. Performance – Bandwidth
      1. SATA: 6 Gbps (most SSDs use the SATA interface)
      2. PCIe 3.x: 32 Gbps
    2. Performance – IOPS
    3. Cost
    4. Capacity
      1. SSD: largest 4000GB
      2. PCIe flash: 6400GB
    5. Consideration: does the workload require PCIe-class performance, or would SSDs be sufficient?
  10. Magnetic Disks
    1. Factors to Consider
      1. Capacity
      2. Stripe Width
      3. FTT
    2. Supported Types
      1. SATA (capacity-centric environments where performance is not a priority)
      2. NL-SAS
      3. SAS
    3. Capacity
      1. SATA : 4TB
      2. SAS: 1.2 TB
    4. Performance: RPM
      1. SAS – 15K RPM, NLSAS – 7200 RPM & SATA – 5400 RPM, 7200 RPM
      2. Cache friendly workloads are less sensitive to disk performance than cache unfriendly workloads
      3. Good practice: be conservative. (Application performance profiles may change over time; 10K RPM drives are a generally accepted choice.)
    5. Number of disks: many smaller magnetic disks will often give better performance than fewer larger ones.
    6. Uniform disk model across all nodes in cluster. Do not mix drive models/types.
  11. Storage I/O Controller
    1. Ensure the components are in VCG
    2. Single Controller vs Multiple Controller:
      1. # of disks per host and the number of ports supported by a controller.
      2. Multiple IO controllers can reduce the failure domain. (blast radius)
    3. SAS Expanders:
      1. VMware has not extensively tested SAS expanders with vSAN, and thus does not encourage their use.
      2. SAS expanders have been tested in limited cases with Ready Nodes, on a case-by-case basis. Check the VCG.
    4. Queue Depth
      1. Minimum recommended controller queue depth: 256
      2. Recommended: the largest queue depth possible.
    5. RAID 0 vs Pass-through: Recommended Pass-through.
      1. Pass-through means that this controller can work in a mode that will present the magnetic disks directly to the ESXi host.
      2. RAID 0 implies that each of the magnetic disks will have to be configured as a RAID 0 volume before the ESXi host can see them.
      3. Recommended: pass-through. From an operations perspective, drives in RAID-0 mode typically take longer to install and replace than pass-through drives.
    6. Disable Cache on Controller, if possible.
    7. Advanced controlled features. Recommended to disable advanced features for acceleration in VSAN environment.

VM Storage Policy Design Considerations

  1. Storage Policy Design Decisions
    1. Number of Disk Stripes per Object / Stripe Width
      1. Defines minimum number of capacity devices across which each replica of a storage object is distributed.
      2. Does it improve VM performance? Yes and no; it depends on the application and on the devices
        1. Yes: if the VMs are I/O-sensitive and the capacity devices the data is distributed across are not busy
        2. No: if the capacity devices the data is distributed across are already busy
      3. Stripe width sizing considerations
        1. Capacity devices: Are there enough devices in various hosts across the cluster to accommodate stripe width?
        2. Host Component Limit: Would Stripe width require significant number of components and impact/consume host component count?
    2. Flash Read Cache Reservation(FRC) ( Relevant only in Hybrid Configurations)
      1. Recommendation: Default Value is 0%, don’t change unless a specific performance issue is observed.
      2. Sizing Considerations
        1. Flash Read Cache Reservation can easily exhaust the read cache, especially when thin provisioning is used (the reservation is based on the logical VMDK size, not the consumed size).
    3. Number of Failures to Tolerate (FTT)
      1. For “n” failures tolerated, “n+1” copies of the object are created and “2n+1” hosts contributing storage are required
      2. Limits :
        1. Default: 1.
        2. Maximum: 3 (if VMDK < 16TB); 1 (if VMDK > 16TB)
      3. Sizing Consideration
        1. Mirror copies consume space.
    4. Fault Tolerance Method: RAID1, RAID5/6 (Erasure Coding)
      1. Erasure coding
        1. Provides significant capacity savings but incurs additional I/O overhead (see the comparison sketch below)
        2. Available only in All Flash Configurations
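
A quick comparison of consumed capacity per protection method (the multipliers are the standard vSAN values: RAID-5 is 3 data + 1 parity, RAID-6 is 4 data + 2 parity):

```python
# Consumed capacity per protection method for a single object.

MULTIPLIERS = {
    "RAID-1, FTT=1": 2.0,    # 2 full copies
    "RAID-5, FTT=1": 4 / 3,  # erasure coding (1.33x), all-flash only
    "RAID-1, FTT=2": 3.0,    # 3 full copies
    "RAID-6, FTT=2": 1.5,    # erasure coding (1.5x), all-flash only
}

vmdk_gb = 100
for method, mult in MULTIPLIERS.items():
    print(f"{method}: {vmdk_gb * mult:.0f} GB consumed")
```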
    5. Force Provisioning
      1. Allows violation of FTT, Stripe Width and FRCR during initial deployment of a VM.
      2. Points to Consider
        1. Adding Resource to VSAN: Once additional resources become available in the cluster, vSAN may immediately consume these resources to try to satisfy the policy settings of virtual machines
        2. Data Migration: If an object is non-compliant then “Full Data Evacuation” of such an object behaves like “Ensure Accessibility”
    6. Object Space Reservation: Default 0% , Maximum 100%
    7. IOP Limit per Object
      1. Prevents noisy neighbors
      2. Creates artificial standards of service as part of a tiered service offering using the same pool of resources.
    8. Object Checksum
      1. Enabled by default. Carries overhead of small disk I/O, CPU and Memory
      2. Can be disabled using DisableObjectChecksum
  2. VM Namespace (VM Home Object) and Swap Considerations: they do not inherit all settings from the storage policy
    1. VM Name Space
      1. Number of Disk Stripes Per Object: 1
      2. Flash Read Cache Reservation: 0%
      3. Number of Failures To Tolerate: (inherited from policy)
      4. Force Provisioning: (inherited from policy)
      5. Object Space Reservation: 0% (thin)
    2. VM Swap Object: not visible in the UI; use RVC commands
      1. Number of Disk Stripes Per Object: 1 (i.e. no striping)
      2. Flash Read Cache Reservation: 0%
      3. Number of Failures To Tolerate: 1
      4. Force Provisioning: Enabled ( To disable use the setting SwapThickProvisionDisabled)
      5. Object Space Reservation: 100% (thick). A footprint sketch follows this list.
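
Because the swap object is thick (100% reserved) and mirrored with FTT=1 regardless of the VM's policy, its datastore footprint is easy to estimate. A sketch (the function name is illustrative):

```python
# Sketch: VSAN 6.0 swap-object footprint. The swap object is provisioned thick
# (ObjectSpaceReservation 100%) with FTT=1, so it consumes two full copies.

def swap_footprint_gb(vm_memory_gb: float, mem_reservation_gb: float = 0) -> float:
    swap_size = vm_memory_gb - mem_reservation_gb  # size of the .vswp file
    return swap_size * 2                           # FTT=1 -> 2 replicas, thick

print(swap_footprint_gb(16))     # 32.0 GB on the vsanDatastore
print(swap_footprint_gb(16, 8))  # 16.0 GB if half the memory is reserved
```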
  3. Snapshot Delta Disks
    1. Snapshot disks inherit the policy settings of VMDK
    2. Not visible in UI
  4. Snapshot Memory
    1. VSAN 5.5: Max 256 GB; Memory Snapshots are stored in VM Namespace and Maximum Namespace size is 256 GB
    2. VSAN 6.0 : No limits; Memory Snapshots are instantiated as objects
  5. Changing a VM Storage Policy Dynamically
    1. Changing policies dynamically may lead to a temporary increase in the amount of space consumed on the vSAN Datastore
      1. FTT Increased: New Replicas are created in addition to the existing Replicas.
      2. Stripe Width Increased: Existing Replicas cannot be used. Creates brand new replicas
  6. Provisioning with a policy that cannot be implemented
    1. vSAN does not consolidate current configurations to accommodate newly deployed virtual machines
    2. Example: vSAN will not move components around hosts or disks groups to allow for the provisioning of a new replica, even though this might free enough space to allow the new virtual machine to be provisioned.
  7. Provisioning with default policy: VMDK Thick provision / Thin Provision
    1. VSAN 5.5: if no policy is selected while provisioning, the default policy uses thick provisioning
    2. VSAN 6.0: a default storage policy exposing all vSAN capabilities (FTT=1, stripe width 1, ObjectSpaceReservation 0%, i.e. thin) is applied automatically

Host Design Considerations

  1. CPU
    1. Sockets/Host, Core/Socket & vCPU/Core
    2. # of VMs & vCPUs/VM
    3. CPU Overhead for VSAN: 10%
  2. Memory
    1. Desired Memory for VMs
    2. A minimum of 32GB is required per ESXi host for full VSAN functionality
  3. VSAN Host Storage
    1. VM Storage
      1. VMDKs : Storage required ( # of VMs, VMDKs size required for each VM)
      2. Memory Consumed by VMs ( .vswp )
      3. Snapshots
        1. # of Snapshots per VM
        2. how long they are maintained
        3. Estimated space consumption for each snapshot
    2. FTT
  4. Boot Device Considerations
    1. Supported Devices
      1. VSAN 5.5: USB & SD
      2. VSAN 6.0: USB, SD and SATADOM
    2. USB & SD: logs and traces reside in RAM
      1. Redirect logs to persistent storage (not the vsanDatastore)
      2. VMware does not recommend storing logs and traces on the vSAN datastore
    3. SATADOM: traces reside on the SATADOM device
      1. Use an SLC-class device for performance and endurance.
  5. Compute Only Hosts: Not Recommended; Use balanced configurations
  6. Maintenance Mode Considerations
    1. FTT: are enough hosts left to meet FTT?
    2. Stripe width: are enough capacity devices available on the remaining hosts to meet the stripe width?
    3. Capacity for data migration: is enough capacity available on the remaining hosts? (A quick check is sketched below.)
    4. Flash capacity: is enough flash available to meet flash read cache reservations? (Hybrid only)
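
A quick capacity check before evacuating a host (a sketch with hypothetical numbers; the 30% slack target comes from the sizing section above):

```python
# Sketch: can the remaining hosts absorb an evacuated host's data?

def can_evacuate(per_host_used_gb: list, host_raw_gb: float,
                 slack: float = 0.30) -> bool:
    used = sum(per_host_used_gb)                   # data that must live somewhere
    remaining_raw = host_raw_gb * (len(per_host_used_gb) - 1)
    return used <= remaining_raw * (1 - slack)     # stay within the slack target

print(can_evacuate([5000, 5000, 5000, 5000], host_raw_gb=10_000))  # True
print(can_evacuate([8000, 8000, 8000, 8000], host_raw_gb=10_000))  # False
```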
  7. Blade System Considerations: typically not enough disk slots to scale local storage capacity.
    1. Consider External Storage Enclosures : Ensure VCG
  8. Processor Power Management Considerations: Avoid Extreme power-saving modes. Select ‘balanced’ mode.

Cluster Design Considerations

  1. 2/3 Node Configurations can tolerate only one failure
    1. Recommended 4+ node clusters
  2. vSphere HA
    1. VSAN works with vSphere HA
      1. Host failure: HA restarts VMs
      2. Network Partition: HA understands VSAN objects and restart VM on a partition that still has access to a quorum of the VM components.
    2. Requirements
      1. HA must use the VSAN network for communication
      2. HA does not use the vsanDatastore for datastore heartbeating
      3. HA must be disabled before enabling VSAN; it may be re-enabled only after VSAN is configured.
    3. Additional capacity required to rebuild components
      1. VSAN does not interoperate with HA to ensure there is enough disk space available on remaining hosts in the cluster
  3. Fault Domains: Rack Availability
    1. No two copies/replicas of a virtual machine's data will be placed in the same fault domain
    2. Consider additional resource requirements to rebuild the components.
    3. Requirement: Uniformly configured hosts (Balanced Configurations)
      1. Having unbalanced domains might mean that vSAN consumes the majority of space in one domain that has low capacity, and leaves stranded capacity in the domain that has larger capacity
  4. Deduplication and Compression considerations
    1. Single feature ( For both Deduplication and Compression)
    2. When this feature is enabled, objects will not be deterministically assigned to a capacity device in a disk group, but will stripe across all disks in the disk group.

Determining if Workload is suitable for VSAN

  1. Cache-friendly applications
    1. If an application is not cache friendly, its performance depends on the capacity devices
  2. VMware View: use View Planner for vSAN sizing
  3. SDDC / VMware infrastructure: use VMware Infrastructure Planner

 
