

Getting Started with KubeStellar#

This page shows one concrete example of steps 2–7 from the full Installation and Usage outline. This example produces a simple single-host system suitable for kicking the tires, using kind to create three new clusters to serve as your KubeFlex hosting cluster and two WECs. This page concludes by forwarding you to one example of the remaining steps.

  1. Setup
    1. Install software prerequisites
    2. Cleanup from previous runs
    3. Create the KubeFlex hosting cluster and KubeStellar core components
    4. Create and register two WECs
  2. Exercise KubeStellar

Setup#

This is one way to produce a very simple system, suitable for study but not production usage. For general setup information, see the full story.

Install software prerequisites#

The following command will check for the prerequisites that you will need for the later steps. See the prerequisites doc for more details.

bash <(curl https://raw.githubusercontent.com/kubestellar/kubestellar/v0.24.0/hack/check_pre_req.sh) kflex ocm helm kubectl docker kind

This setup recipe uses kind to create three Kubernetes clusters on your machine. Note that kind does not support three or more concurrent clusters unless you raise some limits as described in this kind "known issue": Pod errors due to “too many open files”.
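
If you hit that issue on Linux, the fix described on that kind page is to raise the inotify limits. The following is a sketch using the values suggested there; adjust the values and the persistence mechanism to your distribution.

# Raise the inotify limits for the current boot (values from the kind known-issue page)
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
# To persist across reboots, add these settings to /etc/sysctl.conf or a file under /etc/sysctl.d/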

Cleanup from previous runs#

If you have run this recipe or any related recipe previously then you will first want to remove any related debris. The following commands tear down the state established by this recipe.

kind delete cluster --name kubeflex
kind delete cluster --name cluster1
kind delete cluster --name cluster2
kubectl config delete-context kind-kubeflex
kubectl config delete-context cluster1
kubectl config delete-context cluster2
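
If previous runs of this recipe also defined kubeconfig contexts named its1 and wds1, you can remove those too (skip this if the contexts do not exist):

kubectl config delete-context its1
kubectl config delete-context wds1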

Set the KubeStellar version as an environment variable#

export KUBESTELLAR_VERSION=0.24.0

Create a kind cluster to host KubeFlex#

For convenience, a new local kind cluster that satisfies the requirements for serving as the KubeFlex hosting cluster can be created with the following command:

bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubestellar/v0.24.0/scripts/create-kind-cluster-with-SSL-passthrough.sh) --name kubeflex --port 9443
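
Assuming the script completed without errors, a quick sanity check of the new hosting cluster might look like the following. The kind-kubeflex context name comes from kind; the ingress-nginx namespace is an assumption about where the script installs its nginx ingress controller.

kubectl --context kind-kubeflex get nodes                  # the kind node(s) should be Ready
kubectl --context kind-kubeflex get pods -n ingress-nginx  # assumes the nginx ingress controller lives there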

Use Core Helm chart to initialize KubeFlex and create ITS and WDS#

helm upgrade --install ks-core oci://ghcr.io/kubestellar/kubestellar/core-chart \
    --version $KUBESTELLAR_VERSION \
    --set-json='ITSes=[{"name":"its1"}]' \
    --set-json='WDSes=[{"name":"wds1"}]'

That command will print some notes about how to get kubeconfig "contexts" named "its1" and "wds1" defined. Do that, because those contexts are used in the following steps.
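
The chart's printed notes are authoritative; as a rough sketch, assuming the kflex CLI is installed and your current kubeconfig context is the kind-kubeflex hosting cluster, defining those contexts typically amounts to something like:

kflex ctx its1                             # define and switch to a context for the ITS
kflex ctx wds1                             # define and switch to a context for the WDS
kubectl config use-context kind-kubeflex   # return to the hosting cluster context

You can also wait for the two control planes to become ready before proceeding; this assumes the plural of the KubeFlex ControlPlane CRD is controlplanes:

kubectl --context kind-kubeflex get controlplanes   # wait until its1 and wds1 show as ready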

Create and register two workload execution clusters#

The following steps show how to create two new kind clusters and register them with the hub as described in the official open cluster management docs.

Note that kind does not support three or more concurrent clusters unless you raise some limits as described in this kind "known issue": Pod errors due to “too many open files”.

  1. Execute the following commands to create two kind clusters, named cluster1 and cluster2, and register them with the OCM hub. These clusters will serve as workload clusters. If you have previously executed these commands, you might already have contexts named cluster1 and cluster2. If so, you can remove these contexts using the commands kubectl config delete-context cluster1 and kubectl config delete-context cluster2.

    # Set flags to "" if you have installed KubeStellar on an OpenShift cluster
    flags="--force-internal-endpoint-lookup"
    clusters=(cluster1 cluster2);
    for cluster in "${clusters[@]}"; do
       kind create cluster --name ${cluster}
       kubectl config rename-context kind-${cluster} ${cluster}
       clusteradm --context its1 get token | grep '^clusteradm join' | sed "s/<cluster_name>/${cluster}/" | awk '{print $0 " --context '${cluster}' --singleton '${flags}'"}' | sh
    done
    

    The clusteradm command fetches a token from the hub (the its1 context) and constructs the join command that registers the new cluster as a managed cluster on the OCM hub; see the sketch of the resulting command after this list.

  2. Repeatedly issue the command:

    kubectl --context its1 get csr
    

    until you see that the certificate signing requests (CSRs) for both cluster1 and cluster2 exist. Note that the condition of each CSR is supposed to be Pending until you approve it in the next step.

  3. Once the CSRs are created, approve them to complete the cluster registration with the following commands:

    clusteradm --context its1 accept --clusters cluster1
    clusteradm --context its1 accept --clusters cluster2
    
  4. Check that the new clusters are in the OCM inventory and label them:

    kubectl --context its1 get managedclusters
    kubectl --context its1 label managedcluster cluster1 location-group=edge name=cluster1
    kubectl --context its1 label managedcluster cluster2 location-group=edge name=cluster2
    
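For reference, the command that the pipeline in step 1 ends up running looks roughly like the following (shown here for cluster1). The token and hub API server address are placeholders that come from the output of clusteradm get token, and the exact set of flags in that output may vary with the clusteradm version.

clusteradm join --hub-token <token> --hub-apiserver <its1-api-server-URL> \
    --cluster-name cluster1 --context cluster1 --singleton --force-internal-endpoint-lookup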

Exercise KubeStellar#

Proceed to Scenario 1 (multi-cluster workload deployment with kubectl) in the example scenarios after defining the shell variables that characterize the setup done above. Following are the settings for those variables; their meanings are defined at the start of the example scenarios document.

host_context=kind-kubeflex
its_cp=its1
its_context=its1
wds_cp=wds1
wds_context=wds1
wec1_name=cluster1
wec2_name=cluster2
wec1_context=$wec1_name
wec2_context=$wec2_name
label_query_both=location-group=edge
label_query_one=name=cluster1