
Deploy to VMware vSphere

This guide provides you with the steps to deploy a PCG cluster to a VMware vSphere environment. Before you begin the installation, carefully review the Prerequisites section.

further guidance

Refer to our Deploy App Workloads with a PCG tutorial for detailed guidance on how to deploy app workloads with a PCG.

Prerequisites

info

If you are using a self-hosted Palette instance or Palette VerteX, and you deployed the instance to a VMware vSphere environment, then you already have all the required permissions and roles. Proceed to the installation steps in the Deploy PCG guide.

  • A Palette API key. Refer to the Create API Key page for guidance.

    warning

    The installation does not work with Single Sign-On (SSO) credentials. You must use an API key from a local tenant admin account in Palette to deploy the PCG. After the PCG is configured and functioning, this local account is no longer used to keep the PCG connected to Palette, so you can deactivate the account if desired.

  • Download and install the Palette CLI from the Downloads page. Refer to the Palette CLI Install guide to learn more.

  • You must provide the Palette CLI with an encryption passphrase to secure sensitive data. The passphrase must be 8 to 32 characters long and contain a capital letter, a lowercase letter, a digit, and a special character. Refer to the Palette CLI Encryption section for more information.
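Before running the installer, you can sanity-check a candidate passphrase against these rules. The snippet below is an illustrative sketch, not the CLI's actual validator; in particular, treating any non-alphanumeric character as "special" is an assumption.

```python
import re

def valid_passphrase(p: str) -> bool:
    """Check the stated passphrase rules: 8-32 characters with at least
    one capital letter, one lowercase letter, one digit, and one special
    character (assumed here to be any non-alphanumeric character)."""
    return (
        8 <= len(p) <= 32
        and re.search(r"[A-Z]", p) is not None
        and re.search(r"[a-z]", p) is not None
        and re.search(r"[0-9]", p) is not None
        and re.search(r"[^A-Za-z0-9]", p) is not None
    )

print(valid_passphrase("Sup3r-Secret!"))  # True: all four character classes present
print(valid_passphrase("tooweak"))        # False: too short, missing classes
```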

The following system requirements must be met to install a PCG in VMware vSphere:

  • PCG IP address requirements:

    • One IP address for a single-node PCG or three IP addresses for a three-node PCG. Refer to the PCG Sizing section for more information on sizing.
    • One IP address reserved for cluster repave operations.
    • One IP address for the Virtual IP (VIP).
    • DNS must be able to resolve the domain api.spectrocloud.com.
    • An NTP server must be reachable from the PCG.
  • A PCG requires the following minimum resources:

    • CPU: 4
    • Memory: 4 GiB
    • Storage: 60 GiB

    For production environments, we recommend using three nodes, each with 8 CPU, 8 GiB of memory, and 100 GiB of storage. Nodes can exhaust the 60 GiB storage with prolonged use. If you initially set up the gateway with one node, you can resize it at a later time.

  • An up-to-date Linux environment with an x86-64 architecture, a running Docker daemon, and network connectivity to Palette and the VMware vSphere endpoint. The Palette CLI must be installed and invoked on this system.
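As a quick sanity check, the static IP addresses you must reserve can be tallied from the bullets above: one per PCG node, one reserved for repave operations, and one for the VIP. The sketch below just encodes that arithmetic; the assumption that the repave and VIP addresses are in addition to the node addresses follows the list above.

```python
def pcg_ip_requirement(nodes: int) -> int:
    """Static IPs to reserve: one per PCG node, plus one reserved for
    cluster repave operations, plus one for the Virtual IP (VIP)."""
    if nodes not in (1, 3):
        raise ValueError("a PCG has either 1 or 3 nodes")
    return nodes + 1 + 1

print(pcg_ip_requirement(1))  # 3 addresses for a single-node PCG
print(pcg_ip_requirement(3))  # 5 addresses for a three-node PCG
```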

Before installing the PCG on VMware vSphere, review the following system requirements and permissions. The vSphere user account used to deploy the PCG must be granted the required roles and permissions on the appropriate vSphere objects.

Start by reviewing the required action items below:

  1. Create two custom vSphere roles. Check out the Create Required Roles section to create the required roles in vSphere.

  2. Review the vSphere Permissions section to ensure the created roles have the required vSphere privileges and permissions.

  3. Create node zones and regions for your Kubernetes clusters. Refer to the Zone Tagging section to confirm the required tags are created in vSphere for proper resource allocation across fault domains.

Create Required Roles

Palette requires two custom roles to be created in vSphere before the PCG installation. Refer to the Create a Custom Role guide if you need help creating a custom role in vSphere. The required custom roles are:

  • A root-level role with access to higher-level vSphere objects. This role is referred to as the Spectro root role. Check out the Root-Level Role Privileges table for the list of privileges required for the root-level role.

  • A role with the required privileges for deploying VMs. This role is referred to as the Spectro role. Review the Spectro Role Privileges table for the list of privileges required for the Spectro role.

The user account you use to deploy the PCG must have access to both roles. Each vSphere object required by Palette must have a Permission entry for the respective Spectro role. The following tables list the privileges required for each custom role.

info

For an in-depth explanation of vSphere authorization and permissions, check out the Understanding Authorization in vSphere resource.

vSphere Permissions


The VMware vSphere user account that deploys host clusters or private cloud gateways requires all the vSphere privileges listed in the following sections for specific vSphere objects.

Spectro Root Role Privileges

A Spectro root role must be created that contains each privilege in the following table.


info

The System.* privileges are added to all custom vSphere roles by default.

| Category | Privileges |
|---|---|
| CNS | Searchable |
| Datastore | Browse datastore |
| Host | Configuration: Storage partition configuration |
| Network | Assign network |
| Sessions | Validate session |
| Storage Views | View |
| System | Anonymous, Read, View |
| VM Storage Policies | View VM storage policies |
| vSphere Tagging | Create vSphere Tag, Edit vSphere Tag |
The raw API privilege IDs for the latest vSphere version are:
  • Cns.Searchable
  • Datastore.Browse
  • Host.Config.Storage
  • InventoryService.Tagging.CreateTag
  • InventoryService.Tagging.EditTag
  • Network.Assign
  • Sessions.ValidateSession
  • StorageProfile.View
  • StorageViews.View
  • System.Anonymous
  • System.Read
  • System.View

Spectro Root Role Assignments

The privileges associated with the Spectro root role must be granted via role assignments on specific vSphere objects for either the user or a group containing the user. Review the required role assignments to ensure that your user has all required privileges on all required objects.

info

Propagation refers to the inheritance of permissions from a parent vSphere object to a child object. If a permission is propagated to a child object, the child object inherits the permission from the parent object.

| vSphere Object | Propagation | Role | Condition |
|---|---|---|---|
| vCenter Root | No | Spectro root role | |
| Target Datacenter | No | Spectro root role | |
| Target Cluster | No | Spectro root role | |
| Distributed Switch | No | Spectro root role | If the Target Network is a Distributed Port Group |

Spectro Role Privileges

A Spectro role must be created that contains each privilege in the following table.


| Category | Privileges |
|---|---|
| CNS | Searchable |
| Datastore | Allocate space, Browse datastore, Low level file operations, Remove file, Update virtual machine files, Update virtual machine metadata |
| Folder | Create folder, Delete folder, Move folder, Rename folder |
| Host | Local Operations: Reconfigure virtual machine |
| Network | Assign network |
| Resource | Apply recommendation, Assign virtual machine to resource pool, Migrate powered off virtual machine, Migrate powered on virtual machine, Query vMotion |
| Sessions | Validate session |
| Storage Views | View |
| System | Anonymous, Read, View |
| Tasks | Create task, Update task |
| vApp | Import, View OVF environment, vApp application configuration, vApp instance configuration |
| VM Storage Policies | View VM storage policies |
| vSAN | Cluster: ShallowRekey |
| vSphere Tagging | Assign or Unassign vSphere Tag, Create vSphere Tag, Delete vSphere Tag, Edit vSphere Tag |

The following table lists Spectro role privileges for VMs by category. All privileges are for the vSphere object, Virtual Machines.

| Category | Privileges |
|---|---|
| Change Configuration | Acquire disk lease, Add existing disk, Add new disk, Add or remove device, Advanced configuration, Change CPU count, Change memory, Change settings, Change swapfile placement, Change resource, Configure host USB device, Configure raw device, Configure managedBy, Display connection settings, Extend virtual disk, Modify device settings, Query fault tolerance compatibility, Query unowned files, Reload from path, Remove disk, Rename, Reset guest information, Set annotation, Toggle disk change tracking, Toggle fork parent, Upgrade virtual machine compatibility |
| Edit Inventory | Create from existing, Create new, Move, Register, Remove, Unregister |
| Guest Operations | Guest operation alias modification, Guest operation alias query, Guest operation modifications, Guest operation program execution, Guest operation queries |
| Interaction | Console interaction, Power on, Power off |
| Provisioning | Allow disk access, Allow file access, Allow read-only disk access, Allow virtual machine download, Allow virtual machine files upload, Clone template, Clone virtual machine, Create template from virtual machine, Customize guest, Deploy template, Mark as template, Mark as virtual machine, Modify customization specification, Promote disks, Read customization specifications |
| Service Configuration | Allow notifications, Allow polling of global event notifications, Manage service configurations, Modify service configuration, Query service configurations, Read service configuration |
| Snapshot Management | Create snapshot, Remove snapshot, Rename snapshot, Revert to snapshot |
| vSphere Replication | Configure replication, Manage replication, Monitor replication |
The raw API privilege IDs for the latest vSphere version are:
  • Cns.Searchable
  • Datastore.AllocateSpace
  • Datastore.Browse
  • Datastore.DeleteFile
  • Datastore.FileManagement
  • Datastore.UpdateVirtualMachineFiles
  • Datastore.UpdateVirtualMachineMetadata
  • Folder.Create
  • Folder.Delete
  • Folder.Move
  • Folder.Rename
  • Host.Local.ReconfigVM
  • InventoryService.Tagging.AttachTag
  • InventoryService.Tagging.CreateTag
  • InventoryService.Tagging.DeleteTag
  • InventoryService.Tagging.EditTag
  • Network.Assign
  • Resource.ApplyRecommendation
  • Resource.AssignVMToPool
  • Resource.ColdMigrate
  • Resource.HotMigrate
  • Resource.QueryVMotion
  • Sessions.ValidateSession
  • StorageProfile.View
  • StorageViews.View
  • System.Anonymous
  • System.Read
  • System.View
  • Task.Create
  • Task.Update
  • VApp.ApplicationConfig
  • VApp.ExtractOvfEnvironment
  • VApp.Import
  • VApp.InstanceConfig
  • VirtualMachine.Config.AddExistingDisk
  • VirtualMachine.Config.AddNewDisk
  • VirtualMachine.Config.AddRemoveDevice
  • VirtualMachine.Config.AdvancedConfig
  • VirtualMachine.Config.Annotation
  • VirtualMachine.Config.CPUCount
  • VirtualMachine.Config.ChangeTracking
  • VirtualMachine.Config.DiskExtend
  • VirtualMachine.Config.DiskLease
  • VirtualMachine.Config.EditDevice
  • VirtualMachine.Config.HostUSBDevice
  • VirtualMachine.Config.ManagedBy
  • VirtualMachine.Config.Memory
  • VirtualMachine.Config.MksControl
  • VirtualMachine.Config.QueryFTCompatibility
  • VirtualMachine.Config.QueryUnownedFiles
  • VirtualMachine.Config.RawDevice
  • VirtualMachine.Config.ReloadFromPath
  • VirtualMachine.Config.RemoveDisk
  • VirtualMachine.Config.Rename
  • VirtualMachine.Config.ResetGuestInfo
  • VirtualMachine.Config.Resource
  • VirtualMachine.Config.Settings
  • VirtualMachine.Config.SwapPlacement
  • VirtualMachine.Config.ToggleForkParent
  • VirtualMachine.Config.UpgradeVirtualHardware
  • VirtualMachine.GuestOperations.Execute
  • VirtualMachine.GuestOperations.Modify
  • VirtualMachine.GuestOperations.ModifyAliases
  • VirtualMachine.GuestOperations.Query
  • VirtualMachine.GuestOperations.QueryAliases
  • VirtualMachine.Hbr.ConfigureReplication
  • VirtualMachine.Hbr.MonitorReplication
  • VirtualMachine.Hbr.ReplicaManagement
  • VirtualMachine.Interact.ConsoleInteract
  • VirtualMachine.Interact.PowerOff
  • VirtualMachine.Interact.PowerOn
  • VirtualMachine.Inventory.Create
  • VirtualMachine.Inventory.CreateFromExisting
  • VirtualMachine.Inventory.Delete
  • VirtualMachine.Inventory.Move
  • VirtualMachine.Inventory.Register
  • VirtualMachine.Inventory.Unregister
  • VirtualMachine.Namespace.Event
  • VirtualMachine.Namespace.EventNotify
  • VirtualMachine.Namespace.Management
  • VirtualMachine.Namespace.ModifyContent
  • VirtualMachine.Namespace.Query
  • VirtualMachine.Namespace.ReadContent
  • VirtualMachine.Provisioning.Clone
  • VirtualMachine.Provisioning.CloneTemplate
  • VirtualMachine.Provisioning.CreateTemplateFromVM
  • VirtualMachine.Provisioning.Customize
  • VirtualMachine.Provisioning.DeployTemplate
  • VirtualMachine.Provisioning.DiskRandomAccess
  • VirtualMachine.Provisioning.DiskRandomRead
  • VirtualMachine.Provisioning.FileRandomAccess
  • VirtualMachine.Provisioning.GetVmFiles
  • VirtualMachine.Provisioning.MarkAsTemplate
  • VirtualMachine.Provisioning.MarkAsVM
  • VirtualMachine.Provisioning.ModifyCustSpecs
  • VirtualMachine.Provisioning.PromoteDisks
  • VirtualMachine.Provisioning.PutVmFiles
  • VirtualMachine.Provisioning.ReadCustSpecs
  • VirtualMachine.State.CreateSnapshot
  • VirtualMachine.State.RemoveSnapshot
  • VirtualMachine.State.RenameSnapshot
  • VirtualMachine.State.RevertToSnapshot
  • Vsan.Cluster.ShallowRekey
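When auditing an existing role, it can help to diff its granted privileges against the list above. The sketch below assumes you have exported the role's privileges to a set, for example from the output of the govc CLI's role.ls command; the sample sets here are illustrative, not real exports.

```python
# Illustrative audit: report required privileges missing from a role.
# Replace both sets with real data, e.g. parsed from "govc role.ls <role>".
required = {
    "Cns.Searchable",
    "Datastore.AllocateSpace",
    "Datastore.Browse",
    "Network.Assign",
    "Sessions.ValidateSession",
    # ...continue with the full privilege list above...
}

granted = {
    "Cns.Searchable",
    "Datastore.Browse",
    "Network.Assign",
}

missing = sorted(required - granted)
for privilege in missing:
    print(f"missing: {privilege}")
```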
info

The System.* privileges are added to all custom vSphere roles by default.

Spectro Role Assignments

The privileges associated with the Spectro role must be granted via role assignments on specific vSphere objects for either the user or a group containing the user. Review the required role assignments to ensure that your user has all required privileges on all required objects.

| vSphere Object | Propagation | Role | Condition |
|---|---|---|---|
| Target Network | Yes | Spectro role | |
| Target Cluster | No | Spectro role | Required if using a cluster's default Resources resource pool. |
| Target Resource Pool | Yes | Spectro role | Required if using a non-default resource pool. |
| All ESXi hosts within the Target Cluster | No | Spectro role | |
| Target Datastore | Yes | Spectro role | |
| spectro-templates Folder | Yes | Spectro role | Must be manually created in advance, assigned permissions, and populated with Spectro Cloud VM Templates. |
| Target VM Folder | Yes | Spectro role | For air-gapped installs, it must be manually created in advance and permissions assigned. For connected installs, it is created automatically. |

Zone Tagging

You can use tags to create node zones and regions for your Kubernetes clusters. The node zones and regions can be used to dynamically place Kubernetes workloads and achieve higher availability. Kubernetes nodes inherit the zone and region tags as Labels. Kubernetes workloads can use the node labels to ensure that the workloads are deployed to the correct zone and region.

The following is an example of node labels that are discovered and inherited from vSphere tags. The tag values are applied to Kubernetes nodes in vSphere.

| Label | Value |
|---|---|
| topology.kubernetes.io/region | usdc |
| topology.kubernetes.io/zone | zone3 |
| failure-domain.beta.kubernetes.io/region | usdc |
| failure-domain.beta.kubernetes.io/zone | zone3 |
info

To learn more about node zones and regions, refer to the Node Zones/Regions Topology section of the Cloud Provider Interface documentation.
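As an illustration of how workloads consume these labels, a pod could be pinned to the region and zone above with a nodeSelector. The manifest below is a hypothetical example; the pod name and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-app            # hypothetical name
spec:
  nodeSelector:
    topology.kubernetes.io/region: usdc
    topology.kubernetes.io/zone: zone3
  containers:
    - name: app
      image: nginx:1.25            # placeholder image
```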

Zone tagging is required to install Palette and is helpful for Kubernetes workloads deployed in vSphere clusters through Palette that have persistent storage needs. Use vSphere Tag Categories and Tags on data centers and compute clusters to create distinct zones in your environment and assign them to the corresponding vSphere objects.

The zone tags you assign to your vSphere objects, such as a data center and clusters, are applied to the Kubernetes nodes you deploy through Palette into your vSphere environment. Kubernetes clusters deployed to other infrastructure providers, such as public cloud, may have other native mechanisms for auto discovery of zones.

For example, assume a vCenter environment contains three compute clusters: cluster-1, cluster-2, and cluster-3. To support this environment, you create the tag categories k8s-region and k8s-zone. The k8s-region tag is assigned to the data center, and the k8s-zone tag is assigned to the compute clusters.

The following table lists the tag values for the data center and compute clusters.

| vSphere Object | Assigned Name | Tag Category | Tag Value |
|---|---|---|---|
| Datacenter | dc-1 | k8s-region | region1 |
| Cluster | cluster-1 | k8s-zone | az1 |
| Cluster | cluster-2 | k8s-zone | az2 |
| Cluster | cluster-3 | k8s-zone | az3 |

Create a tag category and tag values for each data center and cluster in your environment. Use the tag categories to create zones. Use a name that is meaningful and that complies with the tag requirements listed in the following section.

Tag Requirements

The following requirements apply to tags:

  • A valid tag must consist of alphanumeric characters.

  • The tag must start and end with alphanumeric characters.

  • The regex used for tag validation is (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?.
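You can pre-validate candidate tag values against this regex before creating them in vSphere. A minimal Python sketch follows; the function name is ours for illustration, not part of any Palette tooling.

```python
import re

# The validation pattern quoted above, applied to the whole value.
TAG_PATTERN = re.compile(r"(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?")

def valid_tag(value: str) -> bool:
    return TAG_PATTERN.fullmatch(value) is not None

print(valid_tag("az1"))          # True: alphanumeric start and end
print(valid_tag("k8s_zone.a1"))  # True: dots and underscores allowed inside
print(valid_tag("-az1"))         # False: must start with an alphanumeric character
print(valid_tag("az1-"))         # False: must end with an alphanumeric character
```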

Deploy PCG

  1. On your Linux host with the Palette CLI installed, open a terminal session.

  2. Create a Palette CLI encryption passphrase and set it as an environment variable. Replace <palette-cli-encryption-passphrase> with your passphrase.

    export PALETTE_ENCRYPTION_PASSWORD=<palette-cli-encryption-passphrase>
  3. Issue the following command to authenticate your Palette CLI installation with Palette. When prompted, enter the required information. Refer to the table below for information about each parameter.

    palette login
    | Parameter | Description |
    |---|---|
    | Spectro Cloud Console | Enter the Palette endpoint URL. When using the Palette SaaS service, enter https://console.spectrocloud.com. When using a self-hosted instance of Palette, enter the URL for that instance. |
    | Allow Insecure Connection | Bypass x509 server Certificate Authority (CA) verification. Enter y if you are using a self-hosted Palette or Palette VerteX instance with self-signed TLS certificates and need to provide a file path to the instance CA. Otherwise, enter n. |
    | Spectro Cloud API Key | Enter your Palette API key. Refer to the Create API Key guide for more information. |
    | Spectro Cloud Organization | Select your Palette organization name. |
    | Spectro Cloud Project | Select the project you want to register your VMware vSphere account in. |
    | Acknowledge | Accept the login banner message. Login banner messages are only displayed if the tenant admin enabled a login banner. |
    info

    The CloudAccount.apiKey and Mgmt.apiKey values in the pcg.yaml file are encrypted and cannot be manually updated. To change these values, use the palette pcg install --update-passwords command. Refer to the PCG command reference page for more information.

  4. Once you have authenticated your Palette CLI installation, start the PCG installer by issuing the following command. Refer to the table below for information about each parameter.

    palette pcg install
    | Parameter | Description |
    |---|---|
    | Management Plane Type | Select Palette or VerteX. |
    | Enable Ubuntu Pro (required for production) | Enter y if you want to use Ubuntu Pro and provide an Ubuntu Pro token. Otherwise, enter n. |
    | Select an image registry type | For a non-airgap installation, choose Default to pull images from public image registries. This requires an internet connection. For airgapped installations, select Custom and point to your airgap support VM or a custom internal registry that contains the required images. |
    | Share PCG Cloud Account across platform Projects | Enter y if you want the cloud account associated with the PCG to be available from all projects within your organization. Enter n if you want the cloud account to only be available at the tenant admin scope. |
    | Cloud Type | Select VMware vSphere. |
    | Private Cloud Gateway Name | Enter a custom name for the PCG. |
  5. If you want to configure your PCG to use a proxy network, complete the following fields, as appropriate.

    info

    By default, proxy environment variables (HTTPS_PROXY, HTTP_PROXY, and NO_PROXY) configured during PCG installation are propagated to all PCG cluster nodes, as well as to the nodes of all tenant workload clusters deployed with the PCG. However, proxy CA certificates are only propagated to PCG cluster nodes; they are not propagated to the nodes of tenant workload clusters.

    | Parameter | Description |
    |---|---|
    | HTTPS Proxy | Leave this blank unless you are using an HTTPS proxy. This setting will be propagated to all PCG nodes in the cluster, as well as all tenant clusters using the PCG. Example: https://USERNAME:PASSWORD@PROXYIP:PROXYPORT. |
    | HTTP Proxy | Leave this blank unless you are using an HTTP proxy. This setting will be propagated to all PCG nodes in the cluster, as well as all tenant clusters using the PCG. Example: http://USERNAME:PASSWORD@PROXYIP:PROXYPORT. |
    | No Proxy | Provide a list of local network CIDR addresses, hostnames, and domain names that should be excluded from proxying. This setting will be propagated to all PCG nodes to bypass the proxy server, as well as all tenant clusters using the PCG. Example for a self-hosted environment: my.company.com,10.10.0.0/16. |
    | Proxy CA Certificate Filepath | (Optional) Provide the file path of a CA certificate on the installer host. If provided, this CA certificate will be copied to each PCG node when deploying the PCG cluster, and the provided path will be used on the PCG cluster nodes. Example: /usr/local/share/ca-certificates/ca.crt. |

    Note that proxy CA certificates are not automatically propagated to tenant clusters using the PCG; these certificates must be added at either the tenant level or cluster profile level in the OS layer.
    Configure Proxy CA Certificate for Workload Clusters

    If you are configuring proxy CA certificates for your PCG, they must also be added to workload clusters at the tenant level or cluster profile level in the OS layer.

    • If configured at the tenant level, all workload clusters provisioned from the tenant, with the exception of managed Kubernetes clusters (EKS, AKS, and GKE) and Edge clusters, will have the CA certificate injected into their cluster nodes.

    • If configured at the cluster profile level, only workload clusters deployed using the cluster profile will be injected with the CA certificate.


    To configure your proxy CA certificate at the tenant level, take the following steps. This propagates the certificate to all workload cluster nodes provisioned from the tenant, with the exception of managed Kubernetes clusters (EKS, AKS, and GKE) and Edge clusters.

    1. Log in to Palette as a tenant admin.

    2. From the left main menu, select Tenant Settings.

    3. From the Tenant Settings Menu, below Platform, select Certificates.

    4. Select Add A New Certificate.

    5. In the Add Certificate dialog, enter the Certificate Name and Certificate value.

    6. Confirm your changes.

  6. Enter the following network details.

    | Parameter | Description |
    |---|---|
    | Pod CIDR | Enter the CIDR pool that will be used to assign IP addresses to pods in the PCG cluster. The pod IP addresses must be unique and not overlap with any machine IPs in the environment. |
    | Service IP Range | Enter the IP address range that will be used to assign IP addresses to services in the PCG cluster. The service IP addresses must be unique and not overlap with any machine IPs in the environment. |
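A quick way to check these ranges before installation is to verify that they do not overlap each other or your node network. The sketch below uses Python's standard ipaddress module; the CIDR values are placeholders, not recommendations.

```python
import ipaddress

# Placeholder ranges: substitute your actual Pod CIDR, Service IP
# range, and the network your vSphere machines live on.
pod_cidr = ipaddress.ip_network("192.168.0.0/16")
service_cidr = ipaddress.ip_network("10.96.0.0/12")
node_network = ipaddress.ip_network("10.10.0.0/16")

for a, b in [(pod_cidr, service_cidr),
             (pod_cidr, node_network),
             (service_cidr, node_network)]:
    if a.overlaps(b):
        print(f"overlap: {a} and {b}")
    else:
        print(f"ok: {a} and {b} are disjoint")
```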
  7. If you selected Custom for the image registry type, you are prompted to provide the following information.

    | Parameter | Description |
    |---|---|
    | Registry Name | Assign a name to the custom registry. |
    | Registry Endpoint | Enter the endpoint or IP address for the custom registry. Example: https://palette.example.com or https://10.10.1.0. |
    | Registry Base Content Path | Enter the base content path for the custom registry. Example: spectro-images. |
    | Configure Registry Mirror | Customize the default mirror registry settings. Your system default text editor, such as Vi, will open and allow you to make any desired changes. When finished, save and exit the file. |
    | Allow Insecure Connection (Bypass x509 Verification) | Bypass x509 CA verification. Enter n if using a custom registry with self-signed SSL certificates. Otherwise, enter y. If you enter y, you receive a follow-up prompt asking you to provide the file path to the CA certificate. |
    | Registry CA certificate Filepath | (Optional) Enter the CA certificate for the custom registry. Provide the file path of the CA certificate on the installer host. Example: /usr/local/share/ca-certificates/ca.crt. |
    | Registry Username | Enter the username for the custom registry. |
    | Password | Enter the password for the custom registry. |
  8. Next, fill out the VMware resource configurations.

    | Parameter | Description |
    |---|---|
    | Datacenter | Enter the vSphere data center to target when deploying the PCG cluster. |
    | Folder | Enter the folder to target when deploying the PCG cluster. |
    | Network | Enter the port group to connect the PCG cluster to. |
    | Resource Pool | Enter the resource pool to target when deploying the PCG cluster. |
    | Cluster | Enter the compute cluster to use for the PCG deployment. |
    | Select specific Datastore or use a VM Storage Policy | Enter the datastore or VM Storage Policy to apply to the PCG cluster. |
    | Datastore | Enter the datastore to use for the PCG deployment. |
    | Add another Fault Domain | Specify any fault domains you would like to use. |
    | NTP Servers | Specify the IP address of any Network Time Protocol (NTP) servers the PCG cluster can reference. We recommend you specify at least one NTP server. |
    | SSH Public Key | Provide the public OpenSSH key for the PCG cluster. Use this key when establishing an SSH connection with the PCG cluster. Your system default text editor, such as Vi, will open and prompt you to enter the SSH key. Save and exit the file when finished. |
    | Number of Nodes | Enter the number of nodes that will make up the cluster. Available options are 1 or 3. We recommend three nodes for a High Availability (HA) cluster in a production environment. |
  9. Specify the IP pool configuration. You have the option to select static placement or use Dynamic Host Configuration Protocol (DHCP). With static placement, an IP pool is created, and the PCG VMs are assigned IP addresses from the selected pool. With DHCP, PCG VMs are assigned IP addresses by the DHCP server. Review the following tables to learn more about each parameter.

    warning

    If you select Static Placement, you must create a PCG IPAM pool before deploying clusters. Refer to the Create and Manage IPAM Node Pools guide for more information.

    Static Placement Configuration

    | Parameter | Description |
    |---|---|
    | IP Start range | Enter the first address in the PCG IP pool range. |
    | IP End range | Enter the last address in the PCG IP pool range. |
    | Network Prefix | Enter the network prefix for the IP pool range. Valid values are network CIDR subnet masks from the range 0 - 32. Example: 18. |
    | Gateway IP Address | Enter the IP address of the IP gateway. |
    | Name servers | Enter a comma-separated list of DNS name server IP addresses. |
    | Name server search suffixes (optional) | Enter a comma-separated list of DNS search domains. |

    DHCP Placement Configuration

    | Parameter | Description |
    |---|---|
    | Search domains | Enter a comma-separated list of DNS search domains. |
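For static placement, you can verify up front that the pool is large enough (see the PCG IP address requirements above) and that the start and end addresses fall inside the stated network prefix. All values below are illustrative placeholders.

```python
import ipaddress

start = ipaddress.ip_address("10.10.100.10")
end = ipaddress.ip_address("10.10.100.14")
network = ipaddress.ip_network("10.10.0.0/16")  # network prefix 16

# Inclusive address count: 5 covers a three-node PCG,
# one repave address, and the VIP.
pool_size = int(end) - int(start) + 1
print(pool_size)                             # 5

# Both ends of the pool must sit inside the network.
print(start in network and end in network)   # True
```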
  10. Specify the cluster boot configuration.

    | Parameter | Description |
    |---|---|
    | Patch OS on boot | Indicate whether to patch the OS of the PCG hosts on the first boot. |
    | Reboot nodes once OS patch is applied | Indicate whether to reboot PCG nodes after OS patches are complete. This applies only if Patch OS on boot is enabled. |
  11. Enter the vSphere Machine configuration for the Private Cloud Gateway. We recommend M or greater for production workloads.

    | Size | Description |
    |---|---|
    | S | 4 CPUs, 4 GB of memory, and 60 GB of storage |
    | M | 8 CPUs, 8 GB of memory, and 100 GB of storage |
    | L | 16 CPUs, 16 GB of memory, and 120 GB of storage |
    | Custom | Specify a custom configuration. If you select Custom, you are prompted to enter the number of CPUs, memory, and storage to allocate to the PCG VM. Refer to the Custom Machine Configuration table for more information. |

    Custom Machine Configuration

    | Parameter | Description |
    |---|---|
    | CPU | The number of CPUs in the VM. |
    | Memory | The amount of memory to allocate to the VM. |
    | Storage | The amount of storage to allocate to the VM. |
  12. Specify the node affinity configuration.

    | Parameter | Description |
    |---|---|
    | Node Affinity | Enter y to schedule all Palette pods on the control plane node. |
  13. A new PCG configuration file is generated, and its location is displayed on the console.

    Example output
    ==== PCG config saved ====
    Location: /home/demo/.palette/pcg/pcg-20230706150945/pcg.yaml

    The Palette CLI begins provisioning a PCG cluster in your VMware vSphere environment. Take the following steps to monitor the progress of the PCG deployment.

    1. Log in to Palette as a tenant admin.

    2. From the left main menu, select Tenant Settings.

    3. From the Tenant Settings Menu, below Infrastructure, select Private Cloud Gateways.

    4. Select the PCG cluster being deployed. Use the Events tab to monitor the deployment progress of your PCG cluster.

    If you encounter issues during the installation, refer to our PCG Troubleshooting guide. For additional assistance, reach out to our Customer Support team.

    warning

    You cannot modify a deployed PCG cluster. If you need to make changes to your PCG cluster, you must delete the existing PCG cluster and redeploy it with your updated configurations. For this reason, we recommend you save your PCG configuration file for future use. Use the Palette CLI --config-only flag to save the PCG configuration file without deploying the PCG cluster. Refer to our Generate a Configuration File guide.

  14. To avoid potential vulnerabilities, once your PCG cluster is deployed, remove the kind images that were installed in the environment where you initiated the installation.

    Issue the following command to list all instances of kind that exist in the environment.

    docker images
    Example output
    REPOSITORY     TAG        IMAGE ID       CREATED        SIZE
    kindest/node   v1.26.13   131ad18222cc   5 months ago   910MB

    Then, use the following command template to remove all instances of kind. Replace <tag> with your kind image tag.

    docker image rm kindest/node:<tag>

    Consider the following example for reference.

    Example command
    docker image rm kindest/node:v1.26.13
    Example output
    Untagged: kindest/node:v1.26.13
    Untagged: kindest/node@sha256:15ae92d507b7d4aec6e8920d358fc63d3b980493db191d7327541fbaaed1f789
    Deleted: sha256:131ad18222ccb05561b73e86bb09ac3cd6475bb6c36a7f14501067cba2eec785
    Deleted: sha256:85a1a4dfc468cfeca99e359b74231e47aedb007a206d0e2cae2f8290e7290cfd

Validate

Once installed, the PCG registers itself with Palette. To verify the PCG is registered, take the following steps.

  1. Log in to Palette as a tenant admin.

  2. From the left main menu, select Tenant Settings.

  3. From the Tenant Settings Menu, below Infrastructure, select Private Cloud Gateways.

  4. Verify your PCG cluster is displayed and that it has a green check mark for its Health.

  5. Next, from the Tenant Settings Menu, below Infrastructure, select Cloud Accounts.

  6. Verify a new VMware vSphere cloud account is displayed.

Next Steps

After you have successfully deployed the PCG into your VMware vSphere environment, you can begin deploying clusters to it. If you selected Static Placement, make sure you define an IP Address Management (IPAM) node pool that Kubernetes clusters deployed in vSphere can use. To learn more about creating and defining node pools, refer to the Create and Manage IPAM Node Pools guide.