
Detailed QuickStart


Demo Video#

Watch this video to see a step-by-step demo of KubeStellar running and then follow the instructions below to get your own KubeStellar started quickly.

Estimated time to complete this example:

~4 minutes (after installing prerequisites)

Setup Instructions#

Table of contents:

  1. Check Required Packages
  2. Install and run kcp and KubeStellar
  3. Example deployment of Apache HTTP Server workload into two local kind clusters
    1. Stand up two kind clusters: florin and guilder
    2. Onboarding the clusters
    3. Create and deploy the Apache Server workload into florin and guilder clusters
  4. Teardown the environment
  5. Next Steps

This guide is intended to show how to (1) quickly bring up a KubeStellar environment with its dependencies from a binary release and then (2) run through a simple example usage.

1. Check Required Packages#

Required Packages for running and using KubeStellar:

You will need the following tools to deploy and use KubeStellar. Select the tab for your environment for suggested commands to install them; a quick sanity-check sketch follows the list below.

  • curl (omitted from most OS-specific instructions)

  • jq

  • yq

  • kubectl (version range expected: 1.23-1.25)

  • helm (required when deploying as workload)
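Once these are installed, you can confirm that each tool is present before proceeding. This is a minimal sketch, assuming a POSIX shell:

# Check that each required tool is installed and on the PATH
for tool in curl jq yq kubectl helm; do
  command -v "$tool" >/dev/null || echo "MISSING: $tool"
done
kubectl version --client   # expect a client version in the 1.23-1.25 range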

If you intend to build KubeStellar from source you will also need:

  • go (version >=1.19 required; 1.19 recommended) - go releases: https://go.dev/dl

macOS:

jq - https://stedolan.github.io/jq/download/
brew install jq
yq - https://github.com/mikefarah/yq#install
brew install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
brew install kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
brew install helm
go (only required if you build KubeStellar from source)

  1. Download the package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.

  2. Open the package file you downloaded and follow the prompts to install Go. The package installs the Go distribution to /usr/local/go. The package should put the /usr/local/go/bin directory in your PATH environment variable. You may need to restart any open Terminal sessions for the change to take effect.

  3. Verify that you've installed Go by opening a command prompt and typing the following command: $ go version. Then confirm that the command prints the desired installed version of Go.

Ubuntu:

jq - https://stedolan.github.io/jq/download/
sudo apt-get install jq
yq - https://github.com/mikefarah/yq#install
sudo snap install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
go (only required if you build KubeStellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

Debian:

jq - https://stedolan.github.io/jq/download/
sudo apt-get install jq
yq - https://github.com/mikefarah/yq#install
sudo apt-get install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
go (only required if you build KubeStellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

Fedora / RHEL / CentOS:

jq - https://stedolan.github.io/jq/download/
yum -y install jq
yq - https://github.com/mikefarah/yq#install
# easiest to install with snap
snap install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
# for ARM64 / aarch64
[ $(uname -m) = aarch64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
dnf install helm
go (only required if you build KubeStellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

Windows:

Chocolatey - https://chocolatey.org/install#individual
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
curl
choco install curl -y
jq - https://stedolan.github.io/jq/download/
choco install jq -y
yq - https://github.com/mikefarah/yq#install
choco install yq -y
kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/ (version range expected: 1.23-1.25)
curl.exe -LO "https://dl.k8s.io/release/v1.25.0/bin/windows/amd64/kubectl.exe"
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
choco install kubernetes-helm
go (only required if you build KubeStellar from source)
visit https://go.dev/doc/install for latest instructions

  1. Download the go 1.19 MSI package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.

  2. Open the MSI file you downloaded and follow the prompts to install Go.

    By default, the installer will install Go to Program Files or Program Files (x86). You can change the location as needed. After installing, you will need to close and reopen any open command prompts so that changes to the environment made by the installer are reflected at the command prompt.

  3. Verify that you've installed Go:

    1. In Windows, click the Start menu.

    2. In the menu's search box, type cmd, then press the Enter key.

    3. In the Command Prompt window that appears, type the following command: $ go version

    4. Confirm that the command prints the installed version of Go.

How to install pre-requisites for a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution

(Tested on an Intel Core i7-9850H CPU @ 2.60 GHz with 32 GB RAM, a 64-bit operating system, x64-based processor, running Windows 11 Enterprise)

1. If you're using a VPN, turn it off

2. Install Ubuntu into WSL

2.0 If WSL is not yet installed, open an administrator PowerShell window and run the following
wsl --install
2.1 reboot your system

2.2 In a Windows command terminal, run the following to list all the Linux distributions that are available online
wsl -l -o
2.3 Select a Linux distribution and install it into WSL, using the name exactly as listed by the previous command
wsl --install -d Ubuntu-22.04
You will see something like:
Installing, this may take a few minutes...
Please create a default UNIX user account. The username does not need to match your Windows username.
For more information visit: https://aka.ms/wslusers
Enter new UNIX username:

2.4 Enter your new username and password at the prompts, and you will eventually see something like:
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.10.102.1-microsoft-standard-WSL2 x86_64)

2.5 Click on the Windows "Start" icon and type the name of your distribution into the search box. Your new Linux distribution should appear as a local "App". You can pin it to the Windows task bar or to Start for future convenience.
Start a VM using your distribution by clicking on the App.

3. Install pre-requisites into your new VM
3.1 update and apply apt-get packages
sudo apt-get update
sudo apt-get upgrade

3.2 Install golang
wget https://golang.org/dl/go1.19.linux-amd64.tar.gz
sudo tar -zxvf go1.19.linux-amd64.tar.gz -C /usr/local
echo export GOROOT=/usr/local/go | sudo tee -a /etc/profile
echo export PATH="$PATH:/usr/local/go/bin" | sudo tee -a /etc/profile
source /etc/profile
go version

3.3 Install ko (but don't do the ko set action step)
go install github.com/google/ko@latest

3.4 Install gcc
Either run this:
sudo apt install build-essential
or this:
sudo apt-get update
sudo apt install gcc
gcc --version

3.5 Install make (if you installed build-essential this may already be installed)
sudo apt install make

3.6 Install jq
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y jq
jq --version

3.7 install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
3.8 install helm (required when deploying as workload)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Required Packages for the example usage:

You will need the following tools for the example usage of KubeStellar in this quickstart. Select the tab for your environment for suggested commands to install them.

macOS:

docker - https://docs.docker.com/engine/install/
brew install --cask docker
open -a Docker
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
brew install kind

Ubuntu:

docker - https://docs.docker.com/engine/install/
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Enable rootless usage of Docker (requires relogin) - https://docs.docker.com/engine/security/rootless/
sudo apt-get install -y dbus-user-session # *** Relogin after this
sudo apt-get install -y uidmap
dockerd-rootless-setuptool.sh install
systemctl --user restart docker.service
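Before moving on, you can smoke-test the rootless Docker installation; this sketch assumes network access to pull the hello-world image:

docker context ls
docker run --rm hello-world   # should print "Hello from Docker!"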
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-$(dpkg --print-architecture) && chmod +x ./kind && sudo mv ./kind /usr/local/bin

Debian:

docker - https://docs.docker.com/engine/install/
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Enable rootless usage of Docker (requires relogin) - https://docs.docker.com/engine/security/rootless/
sudo apt-get install -y dbus-user-session # *** Relogin after this
sudo apt-get install -y fuse-overlayfs
sudo apt-get install -y slirp4netns
dockerd-rootless-setuptool.sh install
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-$(dpkg --print-architecture) && chmod +x ./kind && sudo mv ./kind /usr/local/bin

Fedora / RHEL / CentOS:

docker - https://docs.docker.com/engine/install/
yum -y install epel-release && yum -y install docker && systemctl enable --now docker && systemctl status docker
Enable rootless usage of Docker by following the instructions at https://docs.docker.com/engine/security/rootless/
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-arm64 
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind

Windows:

docker - https://docs.docker.com/engine/install/
choco install docker -y
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.14.0/kind-windows-amd64

How to install docker and kind into a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution

1.0 Start a VM terminal by clicking on the App you configured using the instructions in the General pre-requisites described above.

2.0 Install docker
The installation instructions from docker are not sufficient to get docker working with WSL

2.1 Follow instructions here to install docker https://docs.docker.com/engine/install/ubuntu/

Here are some additional steps you will need to take:

2.2 Ensure that /etc/wsl.conf is configured so that systemd will run on booting.
If /etc/wsl.conf does not contain [boot] systemd=true, then edit /etc/wsl.conf as follows:
sudo vi /etc/wsl.conf
Insert
[boot]
systemd=true
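Note: the systemd setting only takes effect after WSL restarts. One way to restart it (run from a Windows terminal, not inside the VM; this stops all running distributions, so save any work first):

wsl --shutdown

Then reopen your Ubuntu App.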

2.3 Edit /etc/sudoers: it is strongly recommended to not add directives directly to /etc/sudoers, but instead to put them in files in /etc/sudoers.d which are auto-included. So make/modify a new file via
sudo vi /etc/sudoers.d/docker
Insert
# Docker daemon specification
<your user account> ALL=(ALL) NOPASSWD: /usr/bin/dockerd

2.4 Add your user to the docker group
sudo usermod -aG docker $USER

2.5 If dockerd is already running, then stop it and restart it as follows (note: the new dockerd instance will be running in the foreground):
sudo systemctl stop docker
sudo dockerd &

2.5.1 If you encounter the iptables issue described at https://github.com/microsoft/WSL/issues/6655, the following commands will fix it:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo dockerd & 

3. You will now need to open new terminals to access the VM since dockerd is running in the foreground of this terminal

3.1 In your new terminal, install kind
wget -nv https://github.com/kubernetes-sigs/kind/releases/download/v0.17.0/kind-linux-$(dpkg --print-architecture) -O kind 
sudo install -m 0755 kind /usr/local/bin/kind 
rm kind 
kind version

2. Install and run kcp and KubeStellar#

KubeStellar works in the context of kcp, so to use KubeStellar you also need kcp.

KubeStellar works with release v0.11.0 of kcp.

We support two ways to deploy kcp and KubeStellar. The older way is to run them as bare processes. The newer way is to deploy them as workload in a Kubernetes (possibly OpenShift) cluster.

Deploy kcp and KubeStellar as bare processes#

The following commands will download the kcp and KubeStellar executables into subdirectories of your current working directory, deploy (i.e., start and configure) kcp and KubeStellar as bare processes, and configure your shell to use kcp and KubeStellar. If you want to suppress the deployment part, add --deploy false to the first command's flags (e.g., after the specification of the KubeStellar version); for the deployment-only part, once the executables have been fetched, see the documentation about the commands for bare process deployment.

bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubestellar/release-0.14/bootstrap/bootstrap-kubestellar.sh) --kubestellar-version v0.14.0
export PATH="$PATH:$(pwd)/kcp/bin:$(pwd)/kubestellar/bin"
export KUBECONFIG="$(pwd)/.kcp/admin.kubeconfig"

Check that KubeStellar is running.

First, check that controllers are running with the following command:

ps aux | grep -e mailbox-controller -e placement-translator -e kubestellar-where-resolver

which should yield something like:

user     1892  0.0  0.3 747644 29628 pts/1    Sl   10:51   0:00 mailbox-controller -v=2
user     1902  0.3  0.3 743652 27504 pts/1    Sl   10:51   0:02 kubestellar-where-resolver -v 2
user     1912  0.3  0.5 760428 41660 pts/1    Sl   10:51   0:02 placement-translator -v=2

Second, check that the TMC compute service provider workspace and the KubeStellar Edge Service Provider Workspace (espw) have been created with the following command:

kubectl ws tree

which should yield:

.
└── root
    ├── compute
    ├── espw
    ├── imw1
    └── wmw1

Deploy kcp and KubeStellar as Kubernetes workload#

This requires a KubeStellar release GREATER THAN v0.5.0.

This example uses a total of three kind clusters, which tends to run into a known issue that has a known work-around; take care of that before proceeding.

Before you can deploy kcp and KubeStellar as workload in a Kubernetes cluster, you need a Kubernetes cluster and it needs to have an Ingress controller installed. We use the term "hosting cluster" for the cluster that plays this role. In this quickstart, we make such a cluster with kind. Follow the developer directions for making a hosting cluster with kind; you need not worry about loading a locally built container image into that cluster.

This example uses the domain name "hostname.favorite.my" for the machine where you invoked kind create cluster. If you have not already done so then issue the following command, replacing a_good_IP_address_for_this_machine with an IPv4 address for your machine that can be reached from inside a container or VM (i.e., not 127.0.0.1).

sudo sh -c "echo a_good_IP_address_for_this_machine hostname.favorite.my >> /etc/hosts"
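If you are unsure which address to use, the following sketch (for Linux; on macOS, ifconfig shows the same information) lists candidate non-loopback IPv4 addresses:

ip -4 addr show scope global | grep inet
# or, on many distributions:
hostname -I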

The next command relies on kubectl already being configured to manipulate the hosting cluster, which is indeed the state that kind create cluster leaves it in.

The following commands will (a) download the kcp and KubeStellar executables into subdirectories of your current working directory and (b) deploy (i.e., start and configure) kcp and KubeStellar as workload in the hosting cluster. If you want to suppress the deployment part, add --deploy false to the first command's flags (e.g., after the specification of the KubeStellar version); for the deployment-only part, once the executables have been fetched, see the documentation for the commands about deployment into a Kubernetes cluster.

bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubestellar/release-0.14/bootstrap/bootstrap-kubestellar.sh) --kubestellar-version v0.14.0 --external-endpoint hostname.favorite.my:1119
export PATH="$PATH:$(pwd)/kcp/bin:$(pwd)/kubestellar/bin"

Using your original kubectl configuration that manipulates the hosting cluster, check that the KubeStellar Deployment has its intended one running Pod.

kubectl get deployments -n kubestellar

which should yield something like:

NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
kubestellar-server   1/1     1            1           2m42s

It may take some time for that Pod to reach Running state.
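Rather than polling by hand, you can block until the Deployment reports available; a sketch using the kubestellar namespace and Deployment name shown above:

kubectl wait --for=condition=Available deployment/kubestellar-server -n kubestellar --timeout=300s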

The bootstrap command above will print out instructions to set your KUBECONFIG environment variable to the pathname of a kubeconfig file that you can use as a user of kcp and KubeStellar. Do that now, for the benefit of the remaining commands in this example. It will look something like the following command.

export KUBECONFIG="$(pwd)/kubestellar.kubeconfig"

Check that the TMC compute service provider workspace and the KubeStellar Edge Service Provider Workspace (espw) have been created with the following command:

kubectl ws tree

which should yield:

.
└── root
    ├── compute
    ├── espw
    ├── imw1
    └── wmw1

3. Example deployment of Apache HTTP Server workload into two local kind clusters#

In this example you will create two edge clusters and define one workload that will be distributed from the center to those edge clusters. This example is similar to the one described more expansively on the website, but with some steps reorganized and combined and with the special workload and summarization aspirations removed.

a. Stand up two kind clusters: florin and guilder#

Create the first edge cluster:

kind create cluster --name florin --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8081
    hostPort: 8094
EOF

Note: if you already have a cluster named 'florin' from a previous exercise of KubeStellar, please delete it (kind delete cluster --name florin) and re-create it using the instructions above.

Create the second edge cluster:

kind create cluster --name guilder --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8081
    hostPort: 8096
  - containerPort: 8082
    hostPort: 8097
EOF

Note: if you already have a cluster named 'guilder' from a previous exercise of KubeStellar, please delete it (kind delete cluster --name guilder) and re-create it using the instructions above.
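Before continuing, you can optionally confirm that both clusters exist and respond:

kind get clusters                        # should list florin and guilder
kubectl --context kind-florin get nodes
kubectl --context kind-guilder get nodes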

b. Onboarding the clusters#

The above use of kind has knocked kcp's kubectl ws plugin off kilter, as the latter uses the local kubeconfig to store its state about the "current" and "previous" workspaces. Get it back on track with the following command.

kubectl config use-context root

KubeStellar will have created an Inventory Management Workspace (IMW) for the user to put inventory objects in, describing the user's clusters. The IMW that is automatically created for the user is at root:imw1.

Let's begin by onboarding the florin cluster:

kubectl ws root
kubectl kubestellar prep-for-cluster --imw root:imw1 florin env=prod

which should yield something like:

Current workspace is "root:imw1".
synctarget.edge.kubestellar.io/florin created
location.edge.kubestellar.io/florin created
synctarget.edge.kubestellar.io/florin labeled
location.edge.kubestellar.io/florin labeled
Current workspace is "root:imw1".
Current workspace is "root:espw".
Current workspace is "root".
Creating service account "kubestellar-syncer-florin-1yi5q9c4"
Creating cluster role "kubestellar-syncer-florin-1yi5q9c4" to give service account "kubestellar-syncer-florin-1yi5q9c4"

 1. write and sync access to the synctarget "kubestellar-syncer-florin-1yi5q9c4"
 2. write access to apiresourceimports.

Creating or updating cluster role binding "kubestellar-syncer-florin-1yi5q9c4" to bind service account "kubestellar-syncer-florin-1yi5q9c4" to cluster role "kubestellar-syncer-florin-1yi5q9c4".

Wrote WEC manifest to florin-syncer.yaml for namespace "kubestellar-syncer-florin-1yi5q9c4". Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl apply -f "florin-syncer.yaml"

to apply it. Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl get deployment -n "kubestellar-syncer-florin-1yi5q9c4" kubestellar-syncer-florin-1yi5q9c4

to verify the syncer pod is running.
Current workspace is "root:imw1".
Current workspace is "root".

An edge syncer manifest yaml file was created in your current directory: florin-syncer.yaml. The default for the output file is the name of the SyncTarget object with “-syncer.yaml” appended.

Now let's deploy the edge syncer to the florin edge cluster:

kubectl --context kind-florin apply -f florin-syncer.yaml

which should yield something like:

namespace/kubestellar-syncer-florin-1yi5q9c4 created
serviceaccount/kubestellar-syncer-florin-1yi5q9c4 created
secret/kubestellar-syncer-florin-1yi5q9c4-token created
clusterrole.rbac.authorization.k8s.io/kubestellar-syncer-florin-1yi5q9c4 created
clusterrolebinding.rbac.authorization.k8s.io/kubestellar-syncer-florin-1yi5q9c4 created
secret/kubestellar-syncer-florin-1yi5q9c4 created
deployment.apps/kubestellar-syncer-florin-1yi5q9c4 created

Optionally, check that the edge syncer pod is running:

kubectl --context kind-florin get pods -A

which should yield something like:

NAMESPACE                            NAME                                                  READY   STATUS    RESTARTS   AGE
kubestellar-syncer-florin-1yi5q9c4   kubestellar-syncer-florin-1yi5q9c4-77cb588c89-xc5qr   1/1     Running   0          12m
kube-system                          coredns-565d847f94-92f4k                              1/1     Running   0          58m
kube-system                          coredns-565d847f94-kgddm                              1/1     Running   0          58m
kube-system                          etcd-florin-control-plane                             1/1     Running   0          58m
kube-system                          kindnet-p8vkv                                         1/1     Running   0          58m
kube-system                          kube-apiserver-florin-control-plane                   1/1     Running   0          58m
kube-system                          kube-controller-manager-florin-control-plane          1/1     Running   0          58m
kube-system                          kube-proxy-jmxsg                                      1/1     Running   0          58m
kube-system                          kube-scheduler-florin-control-plane                   1/1     Running   0          58m
local-path-storage                   local-path-provisioner-684f458cdd-kw2xz               1/1     Running   0          58m

Now, let's onboard the guilder cluster:

kubectl ws root
kubectl kubestellar prep-for-cluster --imw root:imw1 guilder env=prod extended=yes

Apply the created edge syncer manifest:

kubectl --context kind-guilder apply -f guilder-syncer.yaml
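As with florin, you can optionally check that the guilder syncer pod reaches Running state (the generated namespace suffix will differ on your system):

kubectl --context kind-guilder get pods -A | grep kubestellar-syncer-guilder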

c. Create and deploy the Apache Server workload into florin and guilder clusters#

KubeStellar will have automatically created a Workload Management Workspace (WMW) for the user to store workload descriptions and KubeStellar Core control objects in. The automatically created WMW is at root:wmw1.

Create the EdgePlacement object for your workload. Its “where predicate” (the locationSelectors array) has one label selector that matches the Location objects (florin and guilder) created earlier, thus directing the workload to both edge clusters. The upsync field is only a demonstration of the syntax; it plays no functional role in this scenario.

In the root:wmw1 workspace create the following EdgePlacement object:

kubectl ws root:wmw1

kubectl apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: edge-placement-c
spec:
  locationSelectors:
  - matchLabels: {"env":"prod"}
  downsync:
  - apiGroup: ""
    resources: [ configmaps ]
    namespaces: [ commonstuff ]
    objectNames: [ "*" ]
  - apiGroup: apps
    resources: [ deployments ]
    namespaceSelectors:
    - matchLabels: {common: "yes"}
    objectNames: [ commond ]
  - apiGroup: apis.kcp.io
    resources: [ apibindings ]
    objectNames: [ "bind-kubernetes", "bind-apps" ]
  upsync:
  - apiGroup: "group1.test"
    resources: ["sprockets", "flanges"]
    namespaces: ["orbital"]
    names: ["george", "cosmo"]
  - apiGroup: "group2.test"
    resources: ["cogs"]
    names: ["william"]
EOF
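You can optionally confirm that the object was created in the WMW; this assumes the plural resource name edgeplacements for the edge.kubestellar.io group:

kubectl get edgeplacements
kubectl get edgeplacement edge-placement-c -o yaml   # inspect the full object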

Put the prescription of the HTTP server workload into the WMW. Note the namespace label matches the label in the namespaceSelector for the EdgePlacement (edge-placement-c) object created above.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: commonstuff
  labels: {common: "yes"}
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: commonstuff
  name: httpd-htdocs
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <body>
        This is a common web site.
      </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: commonstuff
  name: commond
spec:
  selector: {matchLabels: {app: common} }
  template:
    metadata:
      labels: {app: common}
    spec:
      containers:
      - name: httpd
        image: library/httpd:2.4
        ports:
        - name: http
          containerPort: 80
          hostPort: 8081
          protocol: TCP
        volumeMounts:
        - name: htdocs
          readOnly: true
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: htdocs
        configMap:
          name: httpd-htdocs
          optional: false
EOF

Now, let's check that the deployment was created in the florin edge cluster; it may take a few tens of seconds to appear:

kubectl --context kind-florin get deployments -A

which should yield something like:

NAMESPACE                            NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
commonstuff                          commond                              1/1     1            1           6m48s
kubestellar-syncer-florin-2upj1awn   kubestellar-syncer-florin-2upj1awn   1/1     1            1           16m
kube-system                          coredns                              2/2     2            2           28m
local-path-storage                   local-path-provisioner               1/1     1            1           28m

Also, let's check that the deployment was created in the guilder edge cluster:

kubectl --context kind-guilder get deployments -A

which should yield something like:

NAMESPACE                             NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
commonstuff                           commond                               1/1     1            1           7m54s
kubestellar-syncer-guilder-6tuay5d6   kubestellar-syncer-guilder-6tuay5d6   1/1     1            1           12m
kube-system                           coredns                               2/2     2            2           27m
local-path-storage                    local-path-provisioner                1/1     1            1           27m
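You can also spot-check that the ConfigMap was downsynced alongside the Deployment:

kubectl --context kind-florin get configmap httpd-htdocs -n commonstuff
kubectl --context kind-guilder get configmap httpd-htdocs -n commonstuff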

Lastly, let's check that the workload is working in both clusters:

For florin:

while [[ $(kubectl --context kind-florin get pod -l "app=common" -n commonstuff -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do sleep 5; done; curl http://localhost:8094

which should eventually yield:

<!DOCTYPE html>
<html>
  <body>
    This is a common web site.
  </body>
</html>

For guilder:

while [[ $(kubectl --context kind-guilder get pod -l "app=common" -n commonstuff -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do sleep 5; done; curl http://localhost:8096

which should eventually yield:

<!DOCTYPE html>
<html>
  <body>
    This is a common web site.
  </body>
</html>

Congratulations, you’ve just deployed a workload to two edge clusters using KubeStellar! To learn more about KubeStellar, please visit our User Guide.

4. Teardown the environment#

To remove the example usage, delete the IMW and WMW and the kind clusters by running the following commands:

rm florin-syncer.yaml guilder-syncer.yaml || true
kubectl ws root
kubectl delete workspace imw1
kubectl kubestellar remove wmw wmw1
kind delete cluster --name florin
kind delete cluster --name guilder
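Optionally, verify that the kind clusters are gone:

kind get clusters   # florin and guilder should no longer be listed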

Teardown of KubeStellar depends on which style of deployment was used.

Teardown bare processes#

The following command will stop whatever KubeStellar controllers are running.

kubestellar stop

Stop and uninstall KubeStellar and kcp with the following command:

remove-kubestellar

Teardown Kubernetes workload#

With kubectl configured to manipulate the hosting cluster, the following command will remove the workload that is kcp and KubeStellar.

helm delete kubestellar

5. Next Steps#

What you just did is a shortened version of the more detailed example on the website, with some steps reorganized and combined and with the special workload and summarization aspiration removed. You can continue from here, learning more details about what you did in the QuickStart, and adding on some more steps for the special workload.