# Getting Started with KubeStellar
This page shows one concrete example of steps 2–7 from the full Installation and Usage outline. This example produces a simple single-host system suitable for kicking the tires, using kind to create three new clusters: one to serve as your KubeFlex hosting cluster and two to serve as WECs. This page concludes by forwarding you to one example of the remaining steps.
- Setup
- Install software prerequisites
- Cleanup from previous runs
- Create the KubeFlex hosting cluster and KubeStellar core components
- Create and register two WECs
- Exercise KubeStellar
- Troubleshooting
## Setup
This is one way to produce a very simple system, suitable for study but not production usage. For general setup information, see the full story.
### Note for Windows users
For some users on WSL, use of the setup procedure on this page and/or the demo environment creation script may require running as the user `root` in Linux. There is a known issue about this.
## Install software prerequisites
The following command will check for the prerequisites that you will need for the later steps. See the prerequisites doc for more details.
```shell
bash <(curl https://raw.githubusercontent.com/kubestellar/kubestellar/v0.25.0-rc.2/hack/check_pre_req.sh) kflex ocm helm kubectl docker kind
```
This setup recipe uses kind to create three Kubernetes clusters on your machine. Note that kind does not support three or more concurrent clusters unless you raise some limits, as described in the kind known issue “Pod errors due to ‘too many open files’”.
## Cleanup from previous runs
If you have run this recipe or any related recipe previously then you will first want to remove any related debris. The following commands tear down the state established by this recipe.
```shell
kind delete cluster --name kubeflex
kind delete cluster --name cluster1
kind delete cluster --name cluster2
kubectl config delete-context cluster1
kubectl config delete-context cluster2
```
## Create the KubeFlex hosting cluster and KubeStellar core components

### Set the Version appropriately as an environment variable

```shell
# This is the KubeStellar version used throughout this page
# (it also appears in the script URLs above and below).
kubestellar_version=0.25.0-rc.2
```
### Create a kind cluster to host KubeFlex

For convenience, a new local kind cluster that satisfies the requirements for playing the role of KubeFlex hosting cluster can be created with the following command:

```shell
bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubestellar/v0.25.0-rc.2/scripts/create-kind-cluster-with-SSL-passthrough.sh) --name kubeflex --port 9443
```
### Use Core Helm chart to initialize KubeFlex and create ITS and WDS

```shell
helm upgrade --install ks-core oci://ghcr.io/kubestellar/kubestellar/core-chart \
    --version $kubestellar_version \
    --set-json='ITSes=[{"name":"its1"}]' \
    --set-json='WDSes=[{"name":"wds1"}]' \
    --set-json='verbosity.default=5' # so we can debug your problem reports
```
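For readers who prefer a values file, the `--set-json` flags above correspond to roughly this YAML (a sketch, assuming the chart's standard values layout), which could instead be passed to `helm upgrade --install` with `-f values.yaml`:

```yaml
ITSes:
  - name: its1
WDSes:
  - name: wds1
verbosity:
  default: 5   # raised verbosity, so problem reports can be debugged
```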
That command will print some notes about how to get kubeconfig "contexts" named "its1" and "wds1" defined. Do that, because those contexts are used in the steps that follow.
```shell
kubectl config use-context kind-kubeflex # this is here only to remind you; it will already be the current context if you are following this recipe exactly
kflex ctx --set-current-for-hosting # make sure the KubeFlex CLI's hidden state is right for what the Helm chart just did
kflex ctx --overwrite-existing-context wds1
kflex ctx --overwrite-existing-context its1
```
## Create and register two workload execution clusters
The following steps show how to create two new kind clusters and register them with the hub, as described in the official open cluster management docs.
Note that kind does not support three or more concurrent clusters unless you raise some limits, as described in the kind known issue “Pod errors due to ‘too many open files’”.
1. Execute the following commands to create two kind clusters, named `cluster1` and `cluster2`, and register them with the OCM hub. These clusters will serve as workload clusters. If you have previously executed these commands, you might already have contexts named `cluster1` and `cluster2`; if so, you can remove them with the commands `kubectl config delete-context cluster1` and `kubectl config delete-context cluster2`.

    ```shell
    # set flags to "" if you have installed KubeStellar on an OpenShift cluster
    flags="--force-internal-endpoint-lookup"
    clusters=(cluster1 cluster2)
    for cluster in "${clusters[@]}"; do
        kind create cluster --name ${cluster}
        kubectl config rename-context kind-${cluster} ${cluster}
        clusteradm --context its1 get token | grep '^clusteradm join' | sed "s/<cluster_name>/${cluster}/" | awk '{print $0 " --context '${cluster}' --singleton '${flags}'"}' | sh
    done
    ```

    The `clusteradm` command grabs a token from the hub (the `its1` context) and constructs the command to register the new cluster as a managed cluster on the OCM hub.

2. Repeatedly issue the command:

    ```shell
    kubectl --context its1 get csr
    ```

    until you see that the certificate signing requests (CSR) for both cluster1 and cluster2 exist. Note that the CSRs' condition is supposed to be `Pending` until you approve them in the next step.

3. Once the CSRs are created, approve them to complete the cluster registration with the command:

    ```shell
    clusteradm --context its1 accept --clusters cluster1,cluster2
    ```

4. Check that the new clusters are in the OCM inventory and label them:

    ```shell
    kubectl --context its1 get managedclusters
    kubectl --context its1 label managedcluster cluster1 location-group=edge name=cluster1
    kubectl --context its1 label managedcluster cluster2 location-group=edge name=cluster2
    ```
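To see what the registration pipeline above actually builds, here is a dry run on a made-up token line. The token value and apiserver address are fabricated for illustration; only the text transformation (the `grep | sed | awk` stages, minus the final `| sh`) is the real one from the loop above:

```shell
# Fabricated sample of the line that `clusteradm --context its1 get token` emits
sample='clusteradm join --hub-token ABC123 --hub-apiserver https://its1.example:6443 --cluster-name <cluster_name>'

cluster=cluster1
flags="--force-internal-endpoint-lookup"

# Same transformation as the registration loop, but captured instead of executed
join_cmd=$(echo "$sample" \
  | grep '^clusteradm join' \
  | sed "s/<cluster_name>/${cluster}/" \
  | awk '{print $0 " --context '${cluster}' --singleton '${flags}'"}')

echo "$join_cmd"
# → clusteradm join --hub-token ABC123 --hub-apiserver https://its1.example:6443 --cluster-name cluster1 --context cluster1 --singleton --force-internal-endpoint-lookup
```

Inspecting the constructed command this way, before piping it to `sh`, is a handy sanity check when registration misbehaves.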
## Exercise KubeStellar
Proceed to Scenario 1 (multi-cluster workload deployment with kubectl) in the example scenarios after defining the shell variables that characterize the setup done above. Following are settings for those variables, whose meanings are defined at the start of the example scenarios document.
```shell
host_context=kind-kubeflex
its_cp=its1
its_context=its1
wds_cp=wds1
wds_context=wds1
wec1_name=cluster1
wec2_name=cluster2
wec1_context=$wec1_name
wec2_context=$wec2_name
label_query_both=location-group=edge
label_query_one=name=cluster1
```
## Troubleshooting
In the event something goes wrong, check out the troubleshooting page to see if someone else has experienced the same thing.