Container Service Extension 4.0 has been released with a number of important enhancements and additional use cases, including Cluster API, lifecycle management through a user interface, GPU support for Kubernetes clusters, and integration with VMware Cloud Director as infrastructure. With its feature-rich user interface, customers can perform operations such as creation, scaling, and upgrading of Tanzu Kubernetes clusters. However, some customers may want automation support for these same operations.
This blog post is intended for customers who want to automate the provisioning of Tanzu Kubernetes clusters on the VMware Cloud Director tenant portal using the VMware Cloud Director API. Although the VCD API is supported, this post is necessary because the Cluster API is used to create and manage TKG clusters on VCD, and the payload required to perform operations on TKG clusters requires some work to produce in the Cluster API-generated form. The post outlines the step-by-step process for generating the correct payload for customers using their VCD infrastructure.
Version Support:
This API guide is applicable to Tanzu Kubernetes clusters created by CSE 4.0 and CSE 4.0.1.
The existing prerequisites for customers to create TKG clusters in their organizations also apply to the automation flow. These prerequisites are summarized here and can be found in the official documentation for onboarding Provider and Tenant Admin users. The following sections provide an overview of the requirements for both cloud provider administrators and tenant admin users.
Cloud Provider Admin Steps
The steps to onboard customers are demonstrated in this video and documented here. Once the customer organization and its users are onboarded, they can use the next section to call the APIs directly, or consume them to build automated cluster operations.
As a quick summary, the following steps are expected to be performed by the cloud provider to onboard and prepare the customer:
- Review the Interoperability Matrix to support Container Service Extension 4.0 and 4.0.1
- Allow the necessary communication for the CSE server
- Start the CSE server and onboard the customer organization (Reference Demo and Official Documentation)
Customer Org Admin Steps
Once the cloud provider has onboarded the customer onto Container Service Extension, the organization administrator must create and assign users with the capability to create and manage TKG clusters in the customer organization. This documentation outlines the procedure for creating a user with the "Kubernetes Cluster Author" role within the tenant organization.
It is then assumed that the user "acmekco" has been granted the necessary resources and access within the customer organization to execute Kubernetes cluster operations.
Generate the ‘capiyaml’ payload
- Obtain the VCD infrastructure and Kubernetes cluster details
This operation requires the following information from the VCD tenant portal. The right column describes the example values used as reference in this blog post.
| Input | Example value for this blog |
|---|---|
| VCD_SITE | VCD address (https://vcd-01a.local) |
| VCD_ORGANIZATION | Customer organization name (ACME) |
| VCD_ORGANIZATION_VDC | Customer OVDC name (ACME_VDC_T) |
| VCD_ORGANIZATION_VDC_NETWORK | Network name in the customer org (172.16.2.0) |
| VCD_CATALOG | CSE shared catalog name (cse) |
| VCD_TEMPLATE_NAME | Kubernetes and TKG version of the cluster (Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1) |
| VCD_CONTROL_PLANE_SIZING_POLICY | Sizing policy of control plane VMs (TKG small) |
| VCD_CONTROL_PLANE_STORAGE_PROFILE | Storage profile for the control plane of the cluster (Capacity) |
| VCD_CONTROL_PLANE_PLACEMENT_POLICY | Optional – leave empty if not using |
| VCD_WORKER_SIZING_POLICY | Sizing policy of worker node VMs (TKG small) |
| VCD_WORKER_PLACEMENT_POLICY | Optional – leave empty if not using |
| VCD_WORKER_STORAGE_PROFILE | Storage profile for worker nodes of the cluster (Capacity) |
| CONTROL_PLANE_MACHINE_COUNT | 1 |
| WORKER_MACHINE_COUNT | 1 |
| VCD_REFRESH_TOKEN_B64 | "MHB1d0tXSllVb2twU2tGRjExNllCNGZnVWZqTm5UZ2U=" See the VMware documentation to generate a token before transforming it to Base64 |
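These inputs are consumed by clusterctl as environment variables. A minimal sketch of collecting them (the values are the example values from this post — substitute your own; `encode_refresh_token` is an illustrative helper, not part of any VMware tooling):

```python
# Sketch: exporting the clusterctl inputs from the table above.
# All values are the example values used in this blog post.
import base64
import os

vcd_inputs = {
    "VCD_SITE": "https://vcd-01a.local",
    "VCD_ORGANIZATION": "ACME",
    "VCD_ORGANIZATION_VDC": "ACME_VDC_T",
    "VCD_ORGANIZATION_VDC_NETWORK": "172.16.2.0",
    "VCD_CATALOG": "cse",
    "VCD_TEMPLATE_NAME": "Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1",
    "VCD_CONTROL_PLANE_SIZING_POLICY": "TKG small",
    "VCD_CONTROL_PLANE_STORAGE_PROFILE": "Capacity",
    "VCD_WORKER_SIZING_POLICY": "TKG small",
    "VCD_WORKER_STORAGE_PROFILE": "Capacity",
    "CONTROL_PLANE_MACHINE_COUNT": "1",
    "WORKER_MACHINE_COUNT": "1",
}

def encode_refresh_token(plain_token: str) -> str:
    """VCD_REFRESH_TOKEN_B64 must be the Base64 form of the VCD API token."""
    return base64.b64encode(plain_token.encode()).decode()

# Export for clusterctl, which reads these variables from the environment.
os.environ.update(vcd_inputs)
os.environ["VCD_REFRESH_TOKEN_B64"] = encode_refresh_token("my-plain-refresh-token")
```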
- Install the required tools to generate the capiyaml. The user can use any operating system or a virtual machine (Linux, Mac, or Windows) to generate the payload.
- Once the tenant user has collected all the information, the user must install the following components on the end user's machine: Clusterctl 1.1.3, Kind (0.17.0), and Docker (20.10.21). The next step requires the information collected above, but not access to the VCD infrastructure, to generate the capiyaml payload.
- Copy the TKG CRS files locally. In case the TKG version is missing from the folder, make sure you have templates created for the desired TKG versions. The following table provides the supported list of etcd, CoreDNS, TKG, and TKr versions for the CSE 4.0 and CSE 4.0.1 releases. Alternatively, use this script to fetch the same values from Tanzu Kubernetes Grid resources.
| Kubernetes Version | Etcd ImageTag | CoreDNS ImageTag | Full Unique Version | OVA | TKG Product Version | TKr Version |
|---|---|---|---|---|---|---|
| v1.22.9+vmware.1 | v3.5.4_vmware.2 | v1.8.4_vmware.9 | v1.22.9+vmware.1-tkg.1 | ubuntu-2004-kube-v1.22.9+vmware.1-tkg.1-2182cbabee08edf480ee9bc5866d6933.ova | 1.5.4 | v1.22.9---vmware.1-tkg.1 |
| v1.21.11+vmware.1 | v3.4.13_vmware.27 | v1.8.0_vmware.13 | v1.21.11+vmware.1-tkg.2 | ubuntu-2004-kube-v1.21.11+vmware.1-tkg.2-d788dbbb335710c0a0d1a28670057896.ova | 1.5.4 | v1.21.11---vmware.1-tkg.3 |
| v1.20.15+vmware.1 | v3.4.13_vmware.23 | v1.7.0_vmware.15 | v1.20.15+vmware.1-tkg.2 | ubuntu-2004-kube-v1.20.15+vmware.1-tkg.2-839faf7d1fa7fa356be22b72170ce1a8.ova | 1.5.4 | v1.20.15---vmware.1-tkg.2 |
```shell
mkdir ~/infrastructure-vcd/
cd ~/infrastructure-vcd
mkdir v1.0.0
cd v1.0.0
```

```
v1.0.0% ls -lrta
total 280
drwxr-xr-x   3 bhatts  staff     96 Jan 30 16:41 ..
drwxr-xr-x   6 bhatts  staff    192 Jan 30 16:42 crs
-rw-r--r--   1 bhatts  staff   9073 Jan 30 16:56 cluster-template-v1.20.8-crs.yaml
-rw-r--r--   1 bhatts  staff   9099 Jan 30 16:56 cluster-template-v1.20.8.yaml
-rw-r--r--   1 bhatts  staff   9085 Jan 30 16:57 cluster-template-v1.21.8-crs.yaml
-rw-r--r--   1 bhatts  staff   9023 Jan 30 16:57 cluster-template-v1.21.8.yaml
-rw-r--r--   1 bhatts  staff   9081 Jan 30 16:57 cluster-template-v1.22.9-crs.yaml
-rw-r--r--   1 bhatts  staff   9019 Jan 30 16:57 cluster-template-v1.22.9.yaml
-rw-r--r--   1 bhatts  staff   9469 Jan 30 16:57 cluster-template.yaml
-rw-r--r--   1 bhatts  staff  45546 Jan 30 16:58 infrastructure-components.yaml
-rw-r--r--   1 bhatts  staff    165 Jan 30 16:58 metadata.yaml
-rw-r--r--   1 bhatts  staff   3355 Jan 30 18:53 clusterctl.yaml
drwxr-xr-x  13 bhatts  staff    416 Jan 30 18:53 .

crs % ls -lrta
total 0
drwxr-xr-x   6 bhatts  staff  192 Jan 30 16:42 .
drwxr-xr-x   4 bhatts  staff  128 Jan 30 16:42 tanzu
drwxr-xr-x   4 bhatts  staff  128 Jan 30 16:51 cni
drwxr-xr-x   4 bhatts  staff  128 Jan 30 16:54 cpi
drwxr-xr-x   6 bhatts  staff  192 Jan 30 16:55 csi
drwxr-xr-x  13 bhatts  staff  416 Jan 30 18:53 ..
```
- Copy `~/infrastructure-vcd/v1.0.0/clusterctl.yaml` to `~/.cluster-api/clusterctl.yaml`.
- The `clusterctl` command uses `clusterctl.yaml` from `~/.cluster-api/clusterctl.yaml` to create the capiyaml payload. Update the infrastructure details from the first step of this document.
- Update `providers.url` in `~/.cluster-api/clusterctl.yaml` to `~/infrastructure-vcd/v1.0.0/infrastructure-components.yaml`:

```yaml
providers:
  - name: "vcd"
    url: "~/infrastructure-vcd/v1.0.0/infrastructure-components.yaml"
    type: "InfrastructureProvider"
```
At this point, we need a kind cluster in which to initialize clusterctl and generate the payload. In this step, create the kind cluster and initialize clusterctl as follows:

```shell
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
EOF

# Create a local cluster on Mac // this can be performed similarly on the operating system of your choice
kind create cluster --config kind-cluster-with-extramounts.yaml
kubectl cluster-info --context kind-kind
kubectl config set-context kind-kind
kubectl get po -A -owide
clusterctl init --core cluster-api:v1.1.3 -b kubeadm:v1.1.3 -c kubeadm:v1.1.3
```
Update the below TKG labels and annotations in the `kind: Cluster` object.

OLD metadata:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    ccm: external
    cni: antrea
    csi: external
  name: api5
  namespace: default
```

New metadata:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster-role.tkg.tanzu.vmware.com/management: ""
    tanzuKubernetesRelease: v1.21.8---vmware.1-tkg.2
    tkg.tanzu.vmware.com/cluster-name: api5
  annotations:
    osInfo: ubuntu,20.04,amd64
    TKGVERSION: v1.4.3
  name: api5
  namespace: api5-ns
```
- At this point, the capiyaml is ready to be consumed by the VCD APIs to perform various operations. For verification, make sure the cluster name and namespace values are consistent. Copy the content of the capiyaml to generate a JSON string using a tool similar to the one here.
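The JSON-string conversion is plain JSON escaping, so a small script can do the same job as the online tools. A sketch (the helper name `build_capiyaml_field` is illustrative):

```python
# Sketch: turning the generated multi-line capiyaml into the JSON-escaped
# string that the VCD API expects in the "capiYaml" field.
import json

def build_capiyaml_field(capiyaml_text: str) -> str:
    """Return a JSON fragment with the capiyaml embedded as a single escaped string.

    json.dumps escapes newlines and quotes exactly as the API requires.
    """
    return json.dumps({"capiYaml": capiyaml_text})

fragment = build_capiyaml_field("apiVersion: cluster.x-k8s.io/v1beta1\nkind: Cluster\n")
```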
The following section describes all supported API operations for Tanzu Kubernetes clusters on VMware Cloud Director:
List Clusters
List all clusters in the customer organization. For the CSE 4.0 release, the CAPVCD version is 1.
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1
```
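A minimal sketch of this call using only the Python standard library. The token and address are placeholders, `list_clusters_request` is an illustrative helper that only builds the request, and the `Accept` header version (37.0) matches the examples later in this post; the RDE version is parameterized as recommended in the API usage section below:

```python
# Sketch: building the List Clusters request. Pass the Request object to
# urllib.request.urlopen() to actually perform the call.
import urllib.request

def list_clusters_request(vcd: str, token: str, rde_version: str = "1") -> urllib.request.Request:
    """Build a GET request for all capvcdCluster entities of the given RDE version."""
    url = f"https://{vcd}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/{rde_version}"
    return urllib.request.Request(url, headers={
        "Accept": "application/json;version=37.0",
        "Authorization": f"Bearer {token}",
    })
```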
Cluster Info
Filter clusters by name:
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername
```
Get cluster by ID:

```
GET https://{{vcd}}/cloudapi/1.0.0/entities/id
```
Get the kubeconfig of the cluster:

```
GET https://{{vcd}}/cloudapi/1.0.0/entities/id
```

The kubeconfig can be found in the response at: entity.status.capvcd.private.kubeconfig
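A sketch of digging that path out of a parsed GET response (the nested keys follow the path above; the exact casing of the final key can vary by CAPVCD RDE version, so verify against a real response):

```python
# Sketch: extracting the kubeconfig from the GET-by-ID response body at
# entity.status.capvcd.private.kubeconfig. A dummy response is shown.
def extract_kubeconfig(response_body: dict) -> str:
    """Walk the documented path to the embedded kubeconfig string."""
    return response_body["entity"]["status"]["capvcd"]["private"]["kubeconfig"]

dummy = {"entity": {"status": {"capvcd": {"private": {"kubeconfig": "apiVersion: v1\nkind: Config\n"}}}}}
```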
Create a new Cluster

Note: the `capiYaml` value is sent as a single JSON-escaped string; it is shown unescaped below for readability.

```
POST https://{{vcd}}/cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:1.1.0

{
  "entityType": "urn:vcloud:type:vmware:capvcdCluster:1.1.0",
  "name": "demo",
  "externalId": null,
  "entity": {
    "kind": "CAPVCDCluster",
    "spec": {
      "vcdKe": {
        "isVCDKECluster": true,
        "markForDelete": false,
        "forceDelete": false,
        "autoRepairOnErrors": true
      },
      "capiYaml": "<the capiyaml below, JSON-escaped>"
    },
    "apiVersion": "capvcd.vmware.com/v1.1"
  }
}
```

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster-role.tkg.tanzu.vmware.com/management: ""
    tanzuKubernetesRelease: v1.22.9---vmware.1-tkg.2
    tkg.tanzu.vmware.com/cluster-name: api4
  name: api4
  namespace: api4-ns
  annotations:
    osInfo: ubuntu,20.04,amd64
    TKGVERSION: v1.5.4
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 100.96.0.0/11
    serviceDomain: cluster.local
    services:
      cidrBlocks:
        - 100.64.0.0/13
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: api4-control-plane
    namespace: api4-ns
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VCDCluster
    name: api4
    namespace: api4-ns
---
apiVersion: v1
data:
  password: ""
  refreshToken: WU4zdWY3b21FM1k1SFBXVVp6SERTZXZvREFSUXQzTlE=
  username: dG9ueQ==
kind: Secret
metadata:
  name: capi-user-credentials
  namespace: api4-ns
type: Opaque
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VCDCluster
metadata:
  name: api4
  namespace: api4-ns
spec:
  loadBalancerConfigSpec:
    vipSubnet: ""
  org: stark
  ovdc: vmware-cloud
  ovdcNetwork: private-snat
  site: https://vcd.tanzu.lab
  useAsManagementCluster: false
  userContext:
    secretRef:
      name: capi-user-credentials
      namespace: api4-ns
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VCDMachineTemplate
metadata:
  name: api4-control-plane
  namespace: api4-ns
spec:
  template:
    spec:
      catalog: CSE-Templates
      diskSize: 20Gi
      enableNvidiaGPU: false
      placementPolicy: null
      sizingPolicy: TKG small
      storageProfile: lab-shared-storage
      template: Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: api4-control-plane
  namespace: api4-ns
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        certSANs:
          - localhost
          - 127.0.0.1
      controllerManager:
        extraArgs:
          enable-hostpath-provisioner: "true"
      dns:
        imageRepository: projects.registry.vmware.com/tkg
        imageTag: v1.8.4_vmware.9
      etcd:
        local:
          imageRepository: projects.registry.vmware.com/tkg
          imageTag: v3.5.4_vmware.2
      imageRepository: projects.registry.vmware.com/tkg
    initConfiguration:
      nodeRegistration:
        criSocket: /run/containerd/containerd.sock
        kubeletExtraArgs:
          cloud-provider: external
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    joinConfiguration:
      nodeRegistration:
        criSocket: /run/containerd/containerd.sock
        kubeletExtraArgs:
          cloud-provider: external
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    users:
      - name: root
        sshAuthorizedKeys:
          - ""
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: VCDMachineTemplate
      name: api4-control-plane
      namespace: api4-ns
  replicas: 1
  version: v1.22.9+vmware.1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VCDMachineTemplate
metadata:
  name: api4-md-0
  namespace: api4-ns
spec:
  template:
    spec:
      catalog: CSE-Templates
      diskSize: 20Gi
      enableNvidiaGPU: false
      placementPolicy: null
      sizingPolicy: TKG small
      storageProfile: lab-shared-storage
      template: Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: api4-md-0
  namespace: api4-ns
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          criSocket: /run/containerd/containerd.sock
          kubeletExtraArgs:
            cloud-provider: external
            eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
      users:
        - name: root
          sshAuthorizedKeys:
            - ""
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: api4-md-0
  namespace: api4-ns
spec:
  clusterName: api4
  replicas: 1
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: api4-md-0
          namespace: api4-ns
      clusterName: api4
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VCDMachineTemplate
        name: api4-md-0
        namespace: api4-ns
      version: v1.22.9+vmware.1
```
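The body above can be assembled programmatically. A sketch (helper names are illustrative; the capiyaml argument is the payload generated earlier, and the RDE version is parameterized as recommended in the API usage section below):

```python
# Sketch: assembling the Create Cluster POST body for
# POST /cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:<version>
import json

def create_cluster_body(name: str, capiyaml: str, rde_version: str = "1.1.0") -> str:
    """Return the JSON request body with the capiyaml embedded as an escaped string."""
    return json.dumps({
        "entityType": f"urn:vcloud:type:vmware:capvcdCluster:{rde_version}",
        "name": name,
        "externalId": None,
        "entity": {
            "kind": "CAPVCDCluster",
            "spec": {
                "vcdKe": {
                    "isVCDKECluster": True,
                    "markForDelete": False,
                    "forceDelete": False,
                    "autoRepairOnErrors": True,
                },
                # json.dumps escapes the multi-line YAML into the single
                # JSON string the API expects.
                "capiYaml": capiyaml,
            },
            "apiVersion": "capvcd.vmware.com/v1.1",
        },
    }, indent=2)
```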
Resize a Cluster
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername
```
- Fetch the cluster ID (`"id": "urn:vcloud:entity:vmware:capvcdCluster:<ID>"`) from the above API call's output.
- Copy the entire output of the API response.
- Note down the eTag value from the API response header.
- Modify the "capiyaml" with the following values:
  - To resize control plane VMs, modify `kubeadmcontrolplane.spec.replicas` with the desired number of control plane VMs. Note that only odd numbers of control plane nodes are supported.
  - To resize worker VMs, modify `MachineDeployment.spec.replicas` with the desired number of worker VMs.
- When performing the `PUT` API call, make sure to include the fetched eTag value as the If-Match header.
```
PUT https://{{vcd}}/cloudapi/1.0.0/entities/{cluster-id from the GET API response}
headers:
  Accept: application/json;version=37.0
  Authorization: Bearer {token}
  If-Match: {eTag value from previous GET call}
BODY: Copy the entire body from the previous GET call, and modify the capiyaml values as described in the Modify step above.
```
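A sketch of the resize edit and PUT construction with the standard library (network calls are omitted; `set_replicas` is a crude text substitution that touches every `replicas:` line — a YAML parser that targets only the `KubeadmControlPlane` or a specific `MachineDeployment` document is the safer choice):

```python
# Sketch: edit the replica count in the capiyaml, then build the PUT
# request carrying the eTag from the earlier GET as If-Match.
import re
import urllib.request

def set_replicas(capiyaml: str, count: int) -> str:
    """Replace every 'replicas: N' line in the capiyaml with the new count."""
    return re.sub(r"replicas: \d+", f"replicas: {count}", capiyaml)

def resize_request(vcd: str, cluster_id: str, etag: str, token: str,
                   body: bytes) -> urllib.request.Request:
    """Build the PUT request; body is the full GET response with capiyaml modified."""
    return urllib.request.Request(
        f"https://{vcd}/cloudapi/1.0.0/entities/{cluster_id}",
        data=body,
        method="PUT",
        headers={
            "Accept": "application/json;version=37.0",
            "Authorization": f"Bearer {token}",
            "If-Match": etag,          # VCD rejects the update without a current eTag
            "Content-Type": "application/json",
        },
    )
```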
Upgrade a Cluster
To upgrade a cluster, the provider admin needs to publish the desired Tanzu Kubernetes templates to the customer organization in the catalog used by Container Service Extension.
Collect the GET API response for the cluster to be upgraded as follows:
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername
```
- Fetch the cluster ID (`"id": "urn:vcloud:entity:vmware:capvcdCluster:<ID>"`) from the above API call's output.
- Copy the entire output of the API response.
- Note down the eTag value from the API response header.
- The customer user performing the cluster upgrade will require access to the information in Table 3. Modify the following values to match the target TKG version. The following table shows an upgrade for TKG version 1.5.4, from v1.20.15+vmware.1 to v1.22.9+vmware.1.
| Field | Old Value | New Value |
|---|---|---|
| Control Plane Version | | |
| VCDMachineTemplate.spec.template.spec.template | Ubuntu 20.04 and Kubernetes v1.20.15+vmware.1 | Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1 |
| KubeadmControlPlane.spec.version | v1.20.15+vmware.1 | v1.22.9+vmware.1 |
| KubeadmControlPlane.spec.kubeadmConfigSpec.dns | imageTag: v1.7.0_vmware.15 | imageTag: v1.8.4_vmware.9 |
| KubeadmControlPlane.spec.kubeadmConfigSpec.etcd | imageTag: v3.4.13_vmware.23 | imageTag: v3.5.4_vmware.2 |
| KubeadmControlPlane.spec.kubeadmConfigSpec.imageRepository | imageRepository: projects.registry.vmware.com/tkg | imageRepository: projects.registry.vmware.com/tkg |
| Worker Node Version | | |
| VCDMachineTemplate.spec.template.spec.template | Ubuntu 20.04 and Kubernetes v1.20.15+vmware.1 | Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1 |
| MachineDeployment.spec.version | v1.20.15+vmware.1 | v1.22.9+vmware.1 |
- When performing the `PUT` API call, make sure to include the fetched eTag value as the If-Match header.

```
PUT https://{{vcd}}/cloudapi/1.0.0/entities/{cluster-id from the GET}
headers:
  Accept: application/json;version=37.0
  Authorization: Bearer <token>
  If-Match: <eTag value from previous GET call>
BODY: Copy the entire body from the previous GET call, and modify the capiyaml values as described in the step above.
```
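The substitutions in the table can be applied mechanically before the PUT. A sketch using the example old/new pairs from the table (the helper is illustrative — build the mapping from the version table for your own source and target versions):

```python
# Sketch: apply the upgrade substitutions from the table to the capiyaml.
# The longer template string is listed first so it is replaced before the
# bare version string it contains.
UPGRADE_SUBSTITUTIONS = {
    "Ubuntu 20.04 and Kubernetes v1.20.15+vmware.1": "Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1",
    "v1.20.15+vmware.1": "v1.22.9+vmware.1",      # KubeadmControlPlane/MachineDeployment version
    "v1.7.0_vmware.15": "v1.8.4_vmware.9",        # CoreDNS imageTag
    "v3.4.13_vmware.23": "v3.5.4_vmware.2",       # etcd imageTag
}

def apply_upgrade(capiyaml: str) -> str:
    """Return the capiyaml with every old version string replaced by its new value."""
    for old, new in UPGRADE_SUBSTITUTIONS.items():
        capiyaml = capiyaml.replace(old, new)
    return capiyaml
```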
Delete a Cluster
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername
```
- Fetch the cluster ID (`"id": "urn:vcloud:entity:vmware:capvcdCluster:<ID>"`) from the above API call's output.
- Copy the entire output of the API response.
- Note down the eTag value from the API response header.
- Add or modify the following fields under entity.spec.vcdKe to delete or forcefully delete the cluster:
  - "markForDelete": true --> set the value to true to delete the cluster
  - "forceDelete": true --> set this value to true for forceful deletion of a cluster
```
PUT https://{{vcd}}/cloudapi/1.0.0/entities/{cluster-id from the GET API response}

{
  "entityType": "urn:vcloud:type:vmware:capvcdCluster:1.1.0",
  "name": "demo",
  "externalId": null,
  "entity": {
    "kind": "CAPVCDCluster",
    "spec": {
      "vcdKe": {
        "isVCDKECluster": true,
        "markForDelete": true,     <-- add or modify this field to delete the cluster
        "forceDelete": false,      <-- add or modify this field to force-delete the cluster
        "autoRepairOnErrors": true
      },
      "capiYaml": "<Your capiYaml payload generated from Step 5>"
    },
    ...
    # Other payload from the GET API response
    ...
  },
  "org": {
    "name": "acme",
    "id": "urn:vcloud:org:cd11f6fd-67ba-40e5-853f-c17861120184"
  }
}
```
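Toggling the delete flags in a fetched entity is a two-field edit. A sketch (the helper name is illustrative; the edited body is then sent back with the PUT call shown above):

```python
# Sketch: flip the delete flags under entity.spec.vcdKe in the entity
# returned by the GET call, before PUTting it back.
def mark_for_delete(cluster_entity: dict, force: bool = False) -> dict:
    """Set markForDelete (and optionally forceDelete) on a fetched cluster entity."""
    vcdke = cluster_entity["entity"]["spec"]["vcdKe"]
    vcdke["markForDelete"] = True
    vcdke["forceDelete"] = force
    return cluster_entity
```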
Recommendations for API usage in automation
- DO NOT hardcode API URLs with RDE versions. ALWAYS parameterize RDE versions. For example:

```
POST https://{{vcd}}/cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:1.1.0
```

Make sure to declare `1.1.0` as a variable. This will ensure easy API client upgrades to future versions of CSE.
- Ensure the API client code ignores any unknown/extra properties while unmarshaling the API response.
```
# For example, the capvcdCluster 1.1.0 API payload looks like this:
{
  status: {
    kubernetesVersion: 1.20.8,
    nodePools: {}
  }
}

# In the future, a subsequent version of capvcdCluster (1.2.0) may add more properties ("add-ons") to the payload.
# The old API client code must ensure it does not break on seeing newer properties in future payloads.
{
  status: {
    kubernetesVersion: 1.20.8,
    nodePools: {},
    add-ons: {}  // new property in a future version
  }
}
```
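In Python, one defensive pattern is to copy only the fields the client knows about and ignore the rest. A sketch (the `"add-ons"` property is the hypothetical future field from the example above):

```python
# Sketch: tolerant unmarshaling -- keep only the known status fields so a
# newer capvcdCluster version with extra properties cannot break the client.
def parse_status(status: dict) -> dict:
    """Extract the known capvcdCluster 1.1.0 status fields, ignoring unknown ones."""
    known = ("kubernetesVersion", "nodePools")
    return {key: status[key] for key in known if key in status}
```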
Summary
To summarize, we looked at CRUD operations for Tanzu Kubernetes clusters on the VMware Cloud Director platform using VMware Cloud Director supported APIs. Please feel free to check out the following additional resources for Container Service Extension: