VMware Cloud Foundation 3.5 Nested Deployment – Part 1


After attending a couple of sessions about VMware Cloud Foundation at VMworld, I wanted to try deploying it myself and see how it works. To prepare, I read a couple of documents, such as the Architecture and Deployment Guide and the Operations and Administration Guide, and started planning my first deployment.

I managed to get my hands on a couple of servers to use as a test lab. As the machines were quite powerful, but few in number, I decided to deploy everything in a nested environment.

This article describes how I installed vCF, what I learned, and the tweaks I had to use to get it configured. 

What is VMware Cloud Foundation (vCF)? 

 From https://docs.vmware.com/en/VMware-Cloud-Foundation/

VMware Cloud Foundation is an integrated software stack that bundles compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization (VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform that can be deployed on premises as a private cloud or run as a service within a public cloud. Cloud Foundation helps to break down the traditional administrative silos in data centers, merging compute, storage, network provisioning, and cloud management to facilitate end-to-end support for application deployment. 

In other words, vCF automatically deploys the well-known SDDC products (vSphere, vSAN, NSX and Log Insight) following the VMware Validated Design recommendations. In a second phase, additional products such as vRealize Automation and vRealize Orchestrator can be deployed via the SDDC Manager, which is deployed together with the SDDC stack.

The SDDC Manager manages the whole SDDC stack: it upgrades the components, extends the environment by deploying additional workload domains, and so on.

My setup 

The hardware of the lab I used included 3 physical DL380 Gen9 servers, with two 20-core CPUs and 512 GB RAM each. Every server had 24 SSD drives for vSAN and 4 x 10 Gb/s NICs. 

In total, this cluster had 120 physical cores and 1.5 TB of memory, which is sufficient to deploy the whole SDDC stack properly. The total of 90 TB of vSAN datastore was more than sufficient, both in capacity and in terms of I/O bandwidth.

The installation itself was a simple vSphere 6.7 deployment: a cluster of 3 nodes with HA and DRS enabled, vSAN configured for storage with FTT=1, and NSX-V for networking.

Note: The above-mentioned setup is what I used, but neither vSAN nor NSX is a requirement for the underlying infrastructure when installing vCF in a nested environment.

By default, vCF 3.5 requires 52 vCPUs, 116 GB of memory and a total of 6.9 TB of disk space, including a 30% free-space reservation:

Management Workload Domain Calculations

                      vCPU   vRAM (GB)   Storage (GB)
Total resources         52         116           5266
Total with 30% free      -           -           6846
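As a quick sanity check, the storage figure with headroom can be reproduced from the base requirement in the table, since the total is simply the base multiplied by 1.3:

```shell
# Sanity-check the sizing table: storage total with 30% free space
# is the base requirement (5266 GB) multiplied by 1.3.
base_gb=5266
total_gb=$(awk -v b="$base_gb" 'BEGIN { printf "%.0f", b * 1.3 }')
echo "Required datastore capacity: ${total_gb} GB"
# Prints: Required datastore capacity: 6846 GB
```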

Physical environment preparation  

Before I was able to start the deployment of vCF, I had to prepare the existing underlying environment. 

Enable fake SCSI reservations on the underlying vSAN configuration

My first nested ESXi deployment failed on my vSAN datastore. I found the solution to the problem on William Lam's blog (again… thanks for sharing, William).

On all the physical ESXi hosts, I entered the following command in the CLI:

esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
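With more than one physical host, the setting can be applied in a single pass over SSH. A minimal sketch, assuming hypothetical host names (esx01–esx03) and root SSH access; as written it only prints the commands:

```shell
# Physical ESXi hosts of the underlying lab cluster (hypothetical names).
hosts="esx01.lab.local esx02.lab.local esx03.lab.local"
# Advanced setting that enables fake SCSI reservations for vSAN.
cmd="esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1"

for host in $hosts; do
  # Print the command per host; drop the 'echo' to actually run it over SSH.
  echo ssh "root@${host}" "$cmd"
done
```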


The portgroup I intended to use had to be adapted to allow network traffic coming from multiple MAC addresses. 
As I used vSphere 6.7, I could use the new MAC Learning feature, as described by William Lam in this post.  
Note: If the underlying infrastructure runs a lower vSphere version, the security settings on the portgroup you want to use for the nested environment will have to be set to accept Promiscuous Mode, MAC Address Changes and Forged Transmits.

On a Windows machine with PowerCLI installed, I downloaded William's function and executed the following commands:

Get-MacLearn -DVPortgroupName @("vxw-dvs-31-virtualwire-7-sid-9001-LS_LAB_Network")

DVPortgroup            : vxw-dvs-31-virtualwire-7-sid-9001-LS_LAB_Network
MacLearning            : False
NewAllowPromiscuous    : 
NewForgedTransmits     : 
NewMacChanges          : 
Limit                  : 
LimitPolicy            : 
LegacyAllowPromiscuous : False
LegacyForgedTransmits  : False
LegacyMacChanges       : False

Set-MacLearn -DVPortgroupName @("vxw-dvs-31-virtualwire-7-sid-9001-LS_LAB_Network") -EnableMacLearn $true -EnablePromiscuous $false -EnableForgedTransmit $true -EnableMacChange $false
Enabling MAC Learning on DVPortgroup: vxw-dvs-31-virtualwire-7-sid-9001-LS_LAB_Network ...

Get-MacLearn -DVPortgroupName @("vxw-dvs-31-virtualwire-7-sid-9001-LS_LAB_Network")

DVPortgroup            : vxw-dvs-31-virtualwire-7-sid-9001-LS_LAB_Network
MacLearning            : True
NewAllowPromiscuous    : False
NewForgedTransmits     : True
NewMacChanges          : False
Limit                  : 4096
LimitPolicy            : DROP
LegacyAllowPromiscuous : False
LegacyForgedTransmits  : False
LegacyMacChanges       : False

Note: I would not recommend using any of these solutions on production servers, but in a lab environment they both work great. 


  • On the VXLAN I intended to use for the vCF lab setup, I deployed a new Windows Server VM to act as Active Directory, DNS and DHCP server. The following tasks were required for the setup preparation: 

    • Configuration of the AD Role and creation of a new domain for the VCF Lab
    • Creation of a new Domain Admin user
    • Configuration of DHCP and DNS: authorize the server for the DHCP role in the new AD domain, create a DHCP scope with reservations for the 4 nested ESXi hosts, and create forward and reverse DNS entries for the nested ESXi hosts, the Cloud Builder VM, vCenter, PSC, NSX Manager, vRealize Log Insight and SDDC Manager.
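The DNS part of this step boils down to one forward (A) and one reverse (PTR) record per component. A small sketch that prints the checklist of required forward entries; the domain name and subnet are hypothetical examples, not values from any guide:

```shell
# Hypothetical lab domain and subnet; adjust to your own environment.
domain="vcf.lab.local"
subnet="172.16.10"

# Each name below needs a forward (A) and a matching reverse (PTR) entry.
i=11
for name in esx01 esx02 esx03 esx04 cloudbuilder vcenter psc nsxmanager loginsight sddcmanager; do
  echo "${name}.${domain} -> ${subnet}.${i}"
  i=$((i + 1))
done
```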

Deploy nested ESXi hosts

I created 4 VMs to be used as nested ESXi hosts for the management workload domain. These VMs had the following specifications:

  • 10 vCPUs
  • 64 GB RAM
  • 4 Hard disks: 16 GB, 80 GB, 120 GB, 120 GB (thin provisioned)
  • 2 VMXNET3 NICs connected to the same Portgroup
  • Virtual hardware version 14
  • Guest OS: Other
  • Guest OS version: VMware ESXi 6.5 or later
  • Expose hardware assisted virtualization to the guest OS enabled
  • EFI firmware

When the VMs were created, I connected the vanilla ESXi installation ISO file to each of them, and deployed ESXi. I have never seen an ESXi installation finish as quickly as on this vSAN All-Flash datastore. 🙂

Once the hosts were up, with an IP address, host name and password defined, I had to perform a couple more tasks to get them ready for the vCF deployment. (I will not go through the details of configuring all this, as it is basic vSphere configuration.)

The following tasks are required on each of the hosts:

  • Enable SSH (either in DCUI or the host client in a web browser)
  • Open an SSH session to each host.
  • Configure NTP by editing /etc/ntp.conf:

	restrict default nomodify notrap nopeer noquery
	driftfile /etc/ntp.drift

	Then enable and restart the NTP service, and verify the time source:

	chkconfig ntpd on
	/etc/init.d/ntpd restart
	ntpq -p
  • Configure the hard drives for the nested vSAN datastore as SSD disks:
    • esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device mpx.vmhba0:C0:T1:L0 --option "enable_ssd"
    • esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device mpx.vmhba0:C0:T2:L0 --option "enable_ssd"
    • esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device mpx.vmhba0:C0:T3:L0 --option "enable_ssd"
    • reboot
    • re-login to SSH
    • esxcli storage core claiming unclaim --type=device --device mpx.vmhba0:C0:T1:L0
    • esxcli storage core claiming unclaim --type=device --device mpx.vmhba0:C0:T2:L0
    • esxcli storage core claiming unclaim --type=device --device mpx.vmhba0:C0:T3:L0
    • esxcli storage core claimrule load
    • esxcli storage core claimrule run
    • esxcli storage core claiming reclaim --device mpx.vmhba0:C0:T1:L0
    • esxcli storage core claiming reclaim --device mpx.vmhba0:C0:T2:L0
    • esxcli storage core claiming reclaim --device mpx.vmhba0:C0:T3:L0
    • esxcli storage core device list -d mpx.vmhba0:C0:T1:L0 | grep SSD
    • esxcli storage core device list -d mpx.vmhba0:C0:T2:L0 | grep SSD
    • esxcli storage core device list -d mpx.vmhba0:C0:T3:L0 | grep SSD
  • Ensure that the management IP address of each host is static. If this is not the case, the following command sets it correctly:
    • esxcli network ip interface ipv4 set -i vmk0 -I IPADDRESS -N SUBNETMASK -g GATEWAY -t static
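The repetitive per-device commands above can be collapsed into a loop. A sketch, assuming the three vSAN disks keep the mpx.vmhba0:C0:T1–T3 device names shown earlier; by default it only prints the esxcli commands instead of running them:

```shell
# Mark the three nested vSAN disks as SSDs, then unclaim and reclaim them.
# DRY_RUN=1 (default) prints the commands; set DRY_RUN=0 on the ESXi host
# itself to execute them for real.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

for t in 1 2 3; do
  dev="mpx.vmhba0:C0:T${t}:L0"
  run esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device "$dev" --option enable_ssd
done
# Reboot the host here, reconnect over SSH, then continue:
for t in 1 2 3; do
  run esxcli storage core claiming unclaim --type=device --device "mpx.vmhba0:C0:T${t}:L0"
done
run esxcli storage core claimrule load
run esxcli storage core claimrule run
for t in 1 2 3; do
  run esxcli storage core claiming reclaim --device "mpx.vmhba0:C0:T${t}:L0"
done
```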


This concludes the preparation of the underlying infrastructure for the VMware Cloud Foundation deployment. The next article will continue at this point with the preparation steps for the vCF deployment itself. Stay tuned 🙂


Other articles in this series:

VMware Cloud Foundation 3.5 Nested Deployment – Part 1
VMware Cloud Foundation 3.5 Nested Deployment – Part 2
