
KubeStellar Quickstart Setup

This Quick Start outlines step 1, shows a concrete example of steps 2–7 in the Installation and Usage outline, and forwards you to one example of the remaining steps. In this example you will create three new kind clusters to serve as your KubeFlex hosting cluster and two WECs.

  1. Install software prerequisites
  2. Cleanup from previous runs
  3. Create the KubeFlex hosting cluster and KubeStellar core components
  4. Create and register two WECs
  5. Use KubeStellar to distribute a Deployment object to the two WECs

Install software prerequisites

The following command will check for the prerequisites that you will need for this quickstart. See the prerequisites doc for more details.

bash <(curl ...) kflex ocm helm kubectl docker kind

This quickstart uses kind to create three Kubernetes clusters on your machine. Note that kind does not support three or more concurrent clusters unless you raise some limits as described in this kind "known issue": Pod errors due to “too many open files”.
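If you prefer not to pipe a remote script into bash, a minimal manual check along the same lines can be sketched as follows. This is an assumption-laden sketch: it only verifies that each binary is on your PATH (using `clusteradm`, the OCM CLI invoked later in this quickstart) and does not check tool versions.

```shell
# Report which of the given CLI tools are missing from PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "all prerequisites found"
}

# Report what is absent, but keep going so you can see the full list.
check_tools kflex clusteradm helm kubectl docker kind || true
```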

Cleanup from previous runs

If you have run this quickstart or any related recipe previously then you will first want to remove any related debris. The following commands tear down the state established by this quickstart.

kind delete cluster --name kubeflex
kind delete cluster --name cluster1
kind delete cluster --name cluster2
kubectl config delete-context kind-kubeflex
kubectl config delete-context cluster1
kubectl config delete-context cluster2
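When scripting this cleanup, note that the commands above fail if a cluster or context is already gone. A small wrapper (a sketch; `try` is a hypothetical helper, not part of kind or kubectl) makes the teardown tolerant of absent state:

```shell
# Run a command; on failure (e.g. the target no longer exists), note it and move on.
try() { "$@" || echo "skipped: $*"; }

try kind delete cluster --name kubeflex
try kind delete cluster --name cluster1
try kind delete cluster --name cluster2
try kubectl config delete-context kind-kubeflex
try kubectl config delete-context cluster1
try kubectl config delete-context cluster2
```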

Set the Version appropriately as an environment variable

Set the KUBESTELLAR_VERSION environment variable to the KubeStellar release you intend to use; the Helm command below reads it.

Create a kind cluster to host KubeFlex

For convenience, the following command creates a new local kind cluster that satisfies the requirements for KubeStellar setup and that can host the quickstart workload:

bash <(curl -s ...) --name kubeflex --port 9443

Use Core Helm chart to initialize KubeFlex and create ITS and WDS

helm upgrade --install ks-core oci://... \
    --version $KUBESTELLAR_VERSION \
    --set-json='ITSes=[{"name":"its1"}]'

Create and register two workload execution clusters

The following steps show how to create two new kind clusters and register them with the hub as described in the official open cluster management docs.


  1. Execute the following commands to create two kind clusters, named cluster1 and cluster2, and register them with the OCM hub. These clusters will serve as workload clusters. If you have previously executed these commands, you might already have contexts named cluster1 and cluster2. If so, you can remove these contexts using the commands kubectl config delete-context cluster1 and kubectl config delete-context cluster2.

    : set flags to "" if you have installed KubeStellar on an OpenShift cluster
    clusters=(cluster1 cluster2);
    for cluster in "${clusters[@]}"; do
       kind create cluster --name ${cluster}
       kubectl config rename-context kind-${cluster} ${cluster}
       clusteradm --context its1 get token | grep '^clusteradm join' | sed "s/<cluster_name>/${cluster}/" | awk '{print $0 " --context '${cluster}' --singleton '${flags}'"}' | sh
    done

    The clusteradm command grabs a token from the hub (the its1 context) and constructs the join command that registers the new cluster as a managed cluster on the OCM hub.

  2. Repeatedly issue the command:

    kubectl --context its1 get csr

    until you see that the certificate signing requests (CSRs) for both cluster1 and cluster2 exist. Note that each CSR's condition is supposed to be Pending until you approve it in the next step.

  3. Once the CSRs are created, approve them to complete the cluster registration with the commands:

    clusteradm --context its1 accept --clusters cluster1
    clusteradm --context its1 accept --clusters cluster2
  4. Check that the new clusters are in the OCM inventory and label them:

    kubectl --context its1 get managedclusters
    kubectl --context its1 label managedcluster cluster1 location-group=edge name=cluster1
    kubectl --context its1 label managedcluster cluster2 location-group=edge name=cluster2
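The grep/sed/awk pipeline in step 1 above is easiest to understand offline. The sketch below feeds it a fabricated token line (the token and server values are made up for illustration) and prints the resulting join command instead of piping it to `sh`:

```shell
# A fabricated stand-in for the output of `clusteradm --context its1 get token`.
sample='clusteradm join --hub-token FAKE.TOKEN --hub-apiserver https://example.invalid:6443 --cluster-name <cluster_name>'

cluster=cluster1
flags=""
# Same transformation as in step 1, but echoed rather than executed:
# keep the join line, substitute the cluster name, append per-cluster options.
echo "$sample" \
  | grep '^clusteradm join' \
  | sed "s/<cluster_name>/${cluster}/" \
  | awk '{print $0 " --context '${cluster}' --singleton '${flags}'"}'
```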

Exercise KubeStellar

Proceed to Scenario 1 (multi-cluster workload deployment with kubectl) in the example scenarios after defining the shell variables that characterize the setup done above. Following are settings for those variables; their meanings are defined at the start of the example scenarios document.