Extended Example

docs-ecutable - example1

Required Packages for running and using KubeStellar:

You will need the following tools to deploy and use KubeStellar. Select the tab for your environment for suggested commands to install them.

  • curl (omitted from most OS-specific instructions)

  • jq

  • yq

  • kubectl (version range expected: 1.23-1.25)

  • helm (required when deploying as workload)

If you intend to build kubestellar from source you will also need

  • go (version >=1.19 required; 1.19 recommended) - go releases: https://go.dev/dl
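
Once the tools above are installed, you can confirm that each is on your PATH and check its version with commands like the following (illustrative; exact output varies by platform):

jq --version
yq --version
kubectl version --client
helm version
go version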

jq - https://stedolan.github.io/jq/download/
brew install jq
yq - https://github.com/mikefarah/yq#install
brew install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
brew install kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
brew install helm
go (only required if you build kubestellar from source)

  1. Download the package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.

  2. Open the package file you downloaded and follow the prompts to install Go. The package installs the Go distribution to /usr/local/go. The package should put the /usr/local/go/bin directory in your PATH environment variable. You may need to restart any open Terminal sessions for the change to take effect.

  3. Verify that you've installed Go by opening a command prompt and typing the following command: $ go version. Confirm that the command prints the desired installed version of Go.

jq - https://stedolan.github.io/jq/download/
sudo apt-get install jq
yq - https://github.com/mikefarah/yq#install
sudo snap install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
go (only required if you build kubestellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

jq - https://stedolan.github.io/jq/download/
sudo apt-get install jq
yq - https://github.com/mikefarah/yq#install
sudo apt-get install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
go (only required if you build kubestellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

jq - https://stedolan.github.io/jq/download/
yum -y install jq
yq - https://github.com/mikefarah/yq#install
# easiest to install with snap
snap install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
# for ARM64 / aarch64
[ $(uname -m) = aarch64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
dnf install helm
go (only required if you build kubestellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

Chocolatey - https://chocolatey.org/install#individual
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
curl
choco install curl -y
jq - https://stedolan.github.io/jq/download/
choco install jq -y
yq - https://github.com/mikefarah/yq#install
choco install yq -y
kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/ (version range expected: 1.23-1.25)
curl.exe -LO "https://dl.k8s.io/release/v1.27.2/bin/windows/amd64/kubectl.exe"    
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
choco install kubernetes-helm
go (only required if you build kubestellar from source)
visit https://go.dev/doc/install for latest instructions

  1. Download the go 1.19 MSI package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.

  2. Open the MSI file you downloaded and follow the prompts to install Go.

    By default, the installer will install Go to Program Files or Program Files (x86). You can change the location as needed. After installing, you will need to close and reopen any open command prompts so that changes to the environment made by the installer are reflected at the command prompt.

  3. Verify that you've installed Go:

    1. In Windows, click the Start menu.

    2. In the menu's search box, type cmd, then press the Enter key.

    3. In the Command Prompt window that appears, type the following command: $ go version

    4. Confirm that the command prints the installed version of Go.

How to install pre-requisites for a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution

(Tested on an Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz with 32GB RAM, a 64-bit operating system, and an x64-based processor, using Windows 11 Enterprise)

1. If you're using a VPN, turn it off

2. Install Ubuntu into WSL

2.0 If WSL is not yet installed, open an administrator PowerShell window and run the following:
wsl --install
2.1 Reboot your system.

2.2 In a Windows command terminal, run the following to list all the Linux distributions that are available online:
wsl -l -o
2.3 Select a Linux distribution and install it into WSL:
wsl --install -d Ubuntu-22.04
You will see something like:
Installing, this may take a few minutes...
Please create a default UNIX user account. The username does not need to match your Windows username.
For more information visit: https://aka.ms/wslusers
Enter new UNIX username:

2.4 Enter your new username and password at the prompts, and you will eventually see something like:
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.10.102.1-microsoft-standard-WSL2 x86_64)

2.5 Click on the Windows "Start" icon and type the name of your distribution into the search box. Your new Linux distribution should appear as a local "App". You can pin it to the Windows task bar or to Start for your future convenience.
Start a VM using your distribution by clicking on the App.

3. Install pre-requisites into your new VM
3.1 Update and upgrade apt-get packages
sudo apt-get update
sudo apt-get upgrade

3.2 Install golang
wget https://golang.org/dl/go1.19.linux-amd64.tar.gz
sudo tar -zxvf go1.19.linux-amd64.tar.gz -C /usr/local
echo 'export GOROOT=/usr/local/go' | sudo tee -a /etc/profile
echo 'export PATH=$PATH:/usr/local/go/bin' | sudo tee -a /etc/profile
source /etc/profile
go version

3.3 Install ko (but don't do ko set action step)
go install github.com/google/ko@latest

3.4 Install gcc
Either run this:
sudo apt install build-essential
or this:
sudo apt-get update
sudo apt install gcc
gcc --version

3.5 Install make (if you installed build-essential this may already be installed)
sudo apt install make

3.6 Install jq
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y jq
jq --version

3.7 install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
3.8 install helm (required when deploying as workload)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Required Packages for the example usage:

You will need the following tools for the example usage of KubeStellar in this quickstart example. Select the tab for your environment for suggested commands to install them.
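
As with the general prerequisites, once these are installed you can confirm them with quick checks such as (illustrative):

docker version
kind version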

docker - https://docs.docker.com/engine/install/
brew install docker
open -a Docker
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
brew install kind

docker - https://docs.docker.com/engine/install/
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Enable rootless usage of Docker (requires relogin) - https://docs.docker.com/engine/security/rootless/
sudo apt-get install -y dbus-user-session # *** Relogin after this
sudo apt-get install -y uidmap
dockerd-rootless-setuptool.sh install
systemctl --user restart docker.service
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-$(dpkg --print-architecture) && chmod +x ./kind && sudo mv ./kind /usr/local/bin

docker - https://docs.docker.com/engine/install/
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Enable rootless usage of Docker (requires relogin) - https://docs.docker.com/engine/security/rootless/
sudo apt-get install -y dbus-user-session # *** Relogin after this
sudo apt-get install -y fuse-overlayfs
sudo apt-get install -y slirp4netns
dockerd-rootless-setuptool.sh install
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-$(dpkg --print-architecture) && chmod +x ./kind && sudo mv ./kind /usr/local/bin

docker - https://docs.docker.com/engine/install/
yum -y install epel-release && yum -y install docker && systemctl enable --now docker && systemctl status docker
Enable rootless usage of Docker by following the instructions at https://docs.docker.com/engine/security/rootless/
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-arm64 
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind

docker - https://docs.docker.com/engine/install/
choco install docker -y
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.14.0/kind-windows-amd64

How to install docker and kind into a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution

1.0 Start a VM terminal by clicking on the App you configured using the instructions in the General pre-requisites described above.

2.0 Install docker
The installation instructions from Docker are not sufficient to get Docker working with WSL.

2.1 Follow the instructions at https://docs.docker.com/engine/install/ubuntu/ to install Docker.

Here are some additional steps you will need to take:

2.2 Ensure that /etc/wsl.conf is configured so that systemd will run on booting.
If /etc/wsl.conf does not contain [boot] systemd=true, then edit /etc/wsl.conf as follows:
sudo vi /etc/wsl.conf
Insert
[boot]
systemd=true
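
(This change typically does not take effect until the distribution is restarted; one way to do that is to run wsl --shutdown from a Windows terminal and then reopen the distribution's App.)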

2.3 Edit the sudo configuration: it is strongly recommended not to add directives directly to /etc/sudoers, but instead to put them in files in /etc/sudoers.d, which are auto-included. So make or modify a file there via
sudo vi /etc/sudoers.d/docker
Insert
# Docker daemon specification
<your user account> ALL=(ALL) NOPASSWD: /usr/bin/dockerd

2.4 Add your user to the docker group
sudo usermod -aG docker $USER

2.5 If dockerd is already running, then stop it and restart it as follows (note: the new dockerd instance will be running in the foreground):
sudo systemctl stop docker
sudo dockerd &

2.5.1 If you encounter the iptables issue described at https://github.com/microsoft/WSL/issues/6655, the following commands will fix it:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo dockerd & 

3. You will now need to open new terminals to access the VM, since dockerd is running in the foreground of this terminal.

3.1 In your new terminal, install kind
wget -nv https://github.com/kubernetes-sigs/kind/releases/download/v0.17.0/kind-linux-$(dpkg --print-architecture) -O kind 
sudo install -m 0755 kind /usr/local/bin/kind 
rm kind 
kind version

This document is 'docs-ecutable': you can 'run' this document on your local environment, just as we do in our testing.

git clone -b release-0.14 https://github.com/kubestellar/kubestellar
cd kubestellar
make MANIFEST="'docs/content/common-subs/pre-req.md','docs/content/Coding Milestones/PoC2023q1/example1.md'" docs-ecutable
# done? remove everything
make MANIFEST="docs/content/common-subs/remove-all.md" docs-ecutable
cd ..
rm -rf kubestellar

This doc shows a detailed example usage of the KubeStellar components.

This example involves two edge clusters and two workloads. One workload goes on both edge clusters and one workload goes on only one edge cluster. Nothing changes after the initial activity.

This example is presented in stages. The controllers involved are always maintaining relationships. This document focuses on changes as they appear in this example.

Stage 1#

(Figure: two kind clusters exist, named florin and guilder. The Inventory Management workspace contains two pairs of SyncTarget and Location objects. The Edge Service Provider workspace contains the PoC controllers; the mailbox controller reads the SyncTarget objects and creates two mailbox workspaces.)

Stage 1 creates the infrastructure and the edge service provider workspace (ESPW) and lets that react to the inventory. Then the KubeStellar syncers are deployed, in the edge clusters and configured to work with the corresponding mailbox workspaces. This stage has the following steps.

Create two kind clusters.#

This example uses two kind clusters as edge clusters. We will call them "florin" and "guilder".

This example uses extremely simple workloads, which use hostPort networking in Kubernetes. To make those ports easily reachable from your host, this example uses an explicit kind configuration for each edge cluster.

For the florin cluster, which will get only one workload, create a file named florin-config.yaml with the following contents. In a kind config file, containerPort is about the container that is also a host (a Kubernetes node), while the hostPort is about the host that hosts that container.

cat > florin-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8081
    hostPort: 8094
EOF

For the guilder cluster, which will get two workloads, create a file named guilder-config.yaml with the following contents. The workload that uses hostPort 8081 goes in both clusters, while the workload that uses hostPort 8082 goes only in the guilder cluster.

cat > guilder-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8081
    hostPort: 8096
  - containerPort: 8082
    hostPort: 8097
EOF

Finally, create the two clusters with the following two commands. Pay attention to $KUBECONFIG (or, if that is empty, ~/.kube/config): kind create will inject or replace the relevant "context" in your active kubeconfig.

kind create cluster --name florin --config florin-config.yaml
kind create cluster --name guilder --config guilder-config.yaml

Deploy kcp and KubeStellar#

You need kcp and KubeStellar and can deploy them in either of two ways: as bare processes on whatever host you are using to run this example, or as a workload in a Kubernetes cluster (an OpenShift cluster qualifies). Do one or the other, not both.

KubeStellar only works with release v0.11.0 of kcp. To downsync ServiceAccount objects you will need a patched version of that release, in order to get the denaturing of those objects as discussed in the design outline.

Deploy kcp and KubeStellar as bare processes#

Start kcp#

The following commands download the appropriate kcp server and kubectl plugins for your OS and ISA and put them on your $PATH.

rm -rf kcp
mkdir kcp
pushd kcp
(
  set -x
  case "$OSTYPE" in
      linux*)   os_type="linux" ;;
      darwin*)  os_type="darwin" ;;
      *)        echo "Unsupported operating system type: $OSTYPE" >&2
                false ;;
  esac
  case "$HOSTTYPE" in
      x86_64*)  arch_type="amd64" ;;
      aarch64*) arch_type="arm64" ;;
      arm64*)   arch_type="arm64" ;;
      *)        echo "Unsupported architecture type: $HOSTTYPE" >&2
                false ;;
  esac
  kcp_version=v0.11.0
  trap "rm kcp.tar.gz kcp-plugins.tar.gz" EXIT
  curl -SL -o kcp.tar.gz "https://github.com/kubestellar/kubestellar/releases/download/v0.12.0/kcp_0.11.0_${os_type}_${arch_type}.tar.gz"
  curl -SL -o kcp-plugins.tar.gz "https://github.com/kcp-dev/kcp/releases/download/${kcp_version}/kubectl-kcp-plugin_${kcp_version//v}_${os_type}_${arch_type}.tar.gz"
  tar -xzf kcp-plugins.tar.gz
  tar -xzf kcp.tar.gz
)
export PATH=$(pwd)/bin:$PATH

Running the kcp server creates a hidden subdirectory named .kcp to hold all sorts of state related to the server. If you have run it before and want to start over from scratch then you should rm -rf .kcp first.
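
For example, assuming your current directory is still the kcp directory created above (where the server will be run), a fresh start would begin with:

rm -rf .kcp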

Use the following commands to: (a) run the kcp server in a forked command, (b) update your KUBECONFIG environment variable to configure kubectl to use the kubeconfig produced by the kcp server, and (c) wait for the kcp server to get through some initialization. The choice of -v=3 for the kcp server makes it log a line for every HTTP request (among other things).

kcp start -v=3 &> /tmp/kcp.log &
export KUBECONFIG=$(pwd)/.kcp/admin.kubeconfig
popd
# wait until KCP is ready checking availability of ws resource
while ! kubectl ws tree &> /dev/null; do
  sleep 10
done

Note that you now care about two different kubeconfig files: the one that you were using earlier, which holds the contexts for your kind clusters, and the one that the kcp server creates. The remainder of this document assumes that your kind cluster contexts are in ~/.kube/config.
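
For example, while KUBECONFIG points at the kcp admin kubeconfig, you can still address a kind cluster for a single command by overriding it, as in this sketch (kind-florin is the context name that kind creates by default):

KUBECONFIG=~/.kube/config kubectl --context kind-florin get nodes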

Get KubeStellar#

You will need a local copy of KubeStellar. You can either use the pre-built archive (containing executables and config files) from a release or get any desired version from GitHub and build.

Use pre-built archive#

Fetch the archive for your operating system and instruction set architecture as follows, in which $kubestellar_version is your chosen release of KubeStellar (see the releases on GitHub) and $os_type and $arch_type are chosen according to the list of "assets" for your chosen release.

curl -SL -o kubestellar.tar.gz "https://github.com/kubestellar/kubestellar/releases/download/${kubestellar_version}/kubestellar_${kubestellar_version}_${os_type}_${arch_type}.tar.gz"
tar xzf kubestellar.tar.gz
export PATH=$PWD/bin:$PATH
Get from GitHub#

You can get the latest version from GitHub with the following command, which will get you the default branch (which is named "main"); add -b $branch to the git command in order to get a different branch.

git clone https://github.com/kubestellar/kubestellar
cd kubestellar

Use the following commands to build and add the executables to your $PATH.

make build
export PATH=$(pwd)/bin:$PATH

In the following exhibited command lines, the commands described as "KubeStellar commands" and the commands that start with kubectl kubestellar rely on the KubeStellar bin directory being on the $PATH. Alternatively you could invoke them with explicit pathnames. The kubectl plugin lines use fully specific executables (e.g., kubectl kubestellar prep-for-syncer corresponds to bin/kubectl-kubestellar-prep_for_syncer).
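
For example, the following two command lines would be equivalent ways of invoking the same plugin (shown only to illustrate the correspondence; this assumes your current directory is the one whose bin was added to $PATH above):

kubectl kubestellar prep-for-syncer --imw root:imw1 florin
bin/kubectl-kubestellar-prep_for_syncer --imw root:imw1 florin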

Initialize the KubeStellar platform as bare processes#

In this step KubeStellar creates and populates the Edge Service Provider Workspace (ESPW), which exports the KubeStellar API, and also augments the root:compute workspace from kcp TMC as needed here. That augmentation consists of adding authorization to update the relevant /status and /scale subresources (missing in kcp TMC) and extending the supported subset of the Kubernetes API for managing containerized workloads from the four resources built into kcp TMC (Deployment, Pod, Service, and Ingress) to the other ones that are meaningful in KubeStellar.

kubestellar init

Deploy kcp and KubeStellar as a workload in a Kubernetes cluster#

(This style of deployment requires release v0.6 or later of KubeStellar.)

You need a Kubernetes cluster; see the documentation for kubectl kubestellar deploy for more information.

You will need a domain name that, on each of your clients, resolves to an IP address that the client can use to open a TCP connection to the Ingress controller's listening socket.

You will need the kcp kubectl plugins. See the "Start kcp" section above for instructions on how to get all of the kcp executables.

You will need to get a build of KubeStellar. See above.

To do the deployment and prepare to use it you will be using the commands defined for that. These require your shell to be in a state where kubectl manipulates the hosting cluster (the Kubernetes cluster into which you want to deploy kcp and KubeStellar), either by virtue of having set your KUBECONFIG envar appropriately or putting the relevant contents in ~/.kube/config or by passing --kubeconfig explicitly on the following command lines.

Use the kubectl kubestellar deploy command to do the deployment.
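
A minimal sketch of such an invocation, assuming your hosting-cluster kubeconfig is in a hypothetical file named hosting.kubeconfig and relying on the --kubeconfig option mentioned above (see the kubectl kubestellar deploy documentation for the full set of options):

kubectl kubestellar deploy --kubeconfig hosting.kubeconfig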

Then use the kubectl kubestellar get-external-kubeconfig command to put into a file the kubeconfig that you will use as a user of kcp and KubeStellar. Do not overwrite the kubeconfig file for your hosting cluster. But do update your KUBECONFIG envar setting or remember to pass the new file with --kubeconfig on the command lines when using kcp or KubeStellar. For example, you might use the following commands to fetch and start using that kubeconfig file; the first assumes that you deployed the core into a Kubernetes namespace named "kubestellar".

kubectl kubestellar get-external-kubeconfig -n kubestellar -o kcs.kubeconfig
export KUBECONFIG=$(pwd)/kcs.kubeconfig

Note that you now care about two different kubeconfig files: the one that you were using earlier, which holds the contexts for your kind clusters, and the one that you just fetched and started using for working with the KubeStellar interface. The remainder of this document assumes that your kind cluster contexts are in ~/.kube/config.

Create SyncTarget and Location objects to represent the florin and guilder clusters#

Use the following two commands to put inventory objects in the IMW at root:imw1 that was automatically created during deployment of KubeStellar. They label both florin and guilder with env=prod, and also label guilder with extended=yes.

kubectl ws root:imw1
kubectl kubestellar ensure location florin  loc-name=florin  env=prod
kubectl kubestellar ensure location guilder loc-name=guilder env=prod extended=yes
echo "decribe the florin location object"
kubectl describe location.edge.kubestellar.io florin

Those two script invocations are equivalent to creating the following four objects plus the kcp APIBinding objects that import the definition of the KubeStellar API.

apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncTarget
metadata:
  name: florin
  labels:
    id: florin
    loc-name: florin
    env: prod
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Location
metadata:
  name: florin
  labels:
    loc-name: florin
    env: prod
spec:
  resource: {group: edge.kubestellar.io, version: v2alpha1, resource: synctargets}
  instanceSelector:
    matchLabels: {id: florin}
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncTarget
metadata:
  name: guilder
  labels:
    id: guilder
    loc-name: guilder
    env: prod
    extended: "yes"
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Location
metadata:
  name: guilder
  labels:
    loc-name: guilder
    env: prod
    extended: "yes"
spec:
  resource: {group: edge.kubestellar.io, version: v2alpha1, resource: synctargets}
  instanceSelector:
    matchLabels: {id: guilder}

That script also deletes the Location named default, which is not used in this PoC, if it shows up.
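
If you want to see which Location objects currently exist in this IMW, a quick check (while root:imw1 is the current workspace) is:

kubectl get locations.edge.kubestellar.io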

The mailbox controller#

The mailbox controller is one of the central controllers of KubeStellar. If you have deployed the KubeStellar core as a Kubernetes workload then this controller is already running in a pod in your hosting cluster. If instead you are running these controllers as bare processes then launch this controller as follows.

kubectl ws root:espw
mailbox-controller -v=2 &
sleep 10

This controller is in charge of maintaining the collection of mailbox workspaces, which are an implementation detail not intended for user consumption. You can use the following command to wait for the appearance of the mailbox workspaces implied by the florin and guilder SyncTarget objects that you made earlier.

kubectl ws root
while [ $(kubectl ws tree | grep "\-mb\-" | wc -l) -ne 2 ]; do
  sleep 10
done

If it is working correctly, lines like the following will appear in the controller's log (which is being written into your shell if you ran the controller as a bare process above; otherwise you can fetch it as directed).

...
I0721 17:37:10.186848  189094 main.go:206] "Found APIExport view" exportName="edge.kubestellar.io" serverURL="https://10.0.2.15:6443/services/apiexport/cseslli1ddit3sa5/edge.kubestellar.io"
...
I0721 19:17:21.906984  189094 controller.go:300] "Created APIBinding" worker=1 mbwsName="1d55jhazpo3d3va6-mb-551bebfd-b75e-47b1-b2e0-ff0a4cb7e006" mbwsCluster="32x6b03ixc49cj48" bindingName="bind-edge" resourceVersion="1247"
...
I0721 19:18:56.203057  189094 controller.go:300] "Created APIBinding" worker=0 mbwsName="1d55jhazpo3d3va6-mb-732cf72a-1ca9-4def-a5e7-78fd0e36e61c" mbwsCluster="q31lsrpgur3eg9qk" bindingName="bind-edge" resourceVersion="1329"
^C

You need a -v setting of 2 or numerically higher to get log messages about individual mailbox workspaces.
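
For example, had you launched the controller with a higher verbosity (sketched below with -v=4, an illustrative choice), it would generally log more detail about its processing of each mailbox workspace:

kubectl ws root:espw
mailbox-controller -v=4 &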

A mailbox workspace name is distinguished by the -mb- separator. You can get a listing of those mailbox workspaces as follows.

kubectl ws root
kubectl get Workspaces
NAME                                                       TYPE          REGION   PHASE   URL                                                     AGE
1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1   universal              Ready   https://192.168.58.123:6443/clusters/1najcltzt2nqax47   50s
1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c   universal              Ready   https://192.168.58.123:6443/clusters/1y7wll1dz806h3sb   50s
compute                                                    universal              Ready   https://172.20.144.39:6443/clusters/root:compute        6m8s
espw                                                       organization           Ready   https://172.20.144.39:6443/clusters/root:espw           2m4s
imw1                                                       organization           Ready   https://172.20.144.39:6443/clusters/root:imw1           1m9s

More usefully, using custom columns you can get a listing that shows the name of the associated SyncTarget.

kubectl get Workspace -o "custom-columns=NAME:.metadata.name,SYNCTARGET:.metadata.annotations['edge\.kubestellar\.io/sync-target-name'],CLUSTER:.spec.cluster"
NAME                                                       SYNCTARGET   CLUSTER
1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1   florin       1najcltzt2nqax47
1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c   guilder      1y7wll1dz806h3sb
compute                                                    <none>       mqnl7r5f56hswewy
espw                                                       <none>       2n88ugkhysjbxqp5
imw1                                                       <none>       4d2r9stcyy2qq5c1

Also: if you ever need to look up just one mailbox workspace by SyncTarget name, you could do it as follows.

GUILDER_WS=$(kubectl get Workspace -o json | jq -r '.items | .[] | .metadata | select(.annotations ["edge.kubestellar.io/sync-target-name"] == "guilder") | .name')
echo The guilder mailbox workspace name is $GUILDER_WS
The guilder mailbox workspace name is 1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c

FLORIN_WS=$(kubectl get Workspace -o json | jq -r '.items | .[] | .metadata | select(.annotations ["edge.kubestellar.io/sync-target-name"] == "florin") | .name')
echo The florin mailbox workspace name is $FLORIN_WS
The florin mailbox workspace name is 1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1

Connect guilder edge cluster with its mailbox workspace#

The following command will (a) create, in the mailbox workspace for guilder, an identity and authorizations for the edge syncer and (b) write a file containing YAML for deploying the syncer in the guilder cluster.

kubectl kubestellar prep-for-syncer --imw root:imw1 guilder
Current workspace is "root:imw1".
Current workspace is "root:espw".
Current workspace is "root:espw:1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c" (type root:universal).
Creating service account "kubestellar-syncer-guilder-wfeig2lv"
Creating cluster role "kubestellar-syncer-guilder-wfeig2lv" to give service account "kubestellar-syncer-guilder-wfeig2lv"

 1. write and sync access to the synctarget "kubestellar-syncer-guilder-wfeig2lv"
 2. write access to apiresourceimports.

Creating or updating cluster role binding "kubestellar-syncer-guilder-wfeig2lv" to bind service account "kubestellar-syncer-guilder-wfeig2lv" to cluster role "kubestellar-syncer-guilder-wfeig2lv".

Wrote WEC manifest to guilder-syncer.yaml for namespace "kubestellar-syncer-guilder-wfeig2lv". Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl apply -f "guilder-syncer.yaml"

to apply it. Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl get deployment -n "kubestellar-syncer-guilder-wfeig2lv" kubestellar-syncer-guilder-wfeig2lv

to verify the syncer pod is running.
Current workspace is "root:espw".

The file written was, as mentioned in the output, guilder-syncer.yaml. Next kubectl apply that to the guilder cluster. That will look something like the following; adjust as necessary to make kubectl manipulate your guilder cluster.

KUBECONFIG=~/.kube/config kubectl --context kind-guilder apply -f guilder-syncer.yaml
namespace/kubestellar-syncer-guilder-wfeig2lv created
serviceaccount/kubestellar-syncer-guilder-wfeig2lv created
secret/kubestellar-syncer-guilder-wfeig2lv-token created
clusterrole.rbac.authorization.k8s.io/kubestellar-syncer-guilder-wfeig2lv created
clusterrolebinding.rbac.authorization.k8s.io/kubestellar-syncer-guilder-wfeig2lv created
secret/kubestellar-syncer-guilder-wfeig2lv created
deployment.apps/kubestellar-syncer-guilder-wfeig2lv created

You might check that the syncer is running, as follows.

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get deploy -A
NAMESPACE                          NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
kubestellar-syncer-guilder-saaywsu5   kubestellar-syncer-guilder-saaywsu5   1/1     1            1           52s
kube-system                        coredns                            2/2     2            2           35m
local-path-storage                 local-path-provisioner             1/1     1            1           35m

Connect florin edge cluster with its mailbox workspace#

Do the analogous stuff for the florin cluster.

kubectl kubestellar prep-for-syncer --imw root:imw1 florin
Current workspace is "root:imw1".
Current workspace is "root:espw".
Current workspace is "root:espw:1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1" (type root:universal).
Creating service account "kubestellar-syncer-florin-32uaph9l"
Creating cluster role "kubestellar-syncer-florin-32uaph9l" to give service account "kubestellar-syncer-florin-32uaph9l"

 1. write and sync access to the synctarget "kubestellar-syncer-florin-32uaph9l"
 2. write access to apiresourceimports.

Creating or updating cluster role binding "kubestellar-syncer-florin-32uaph9l" to bind service account "kubestellar-syncer-florin-32uaph9l" to cluster role "kubestellar-syncer-florin-32uaph9l".

Wrote WEC manifest to florin-syncer.yaml for namespace "kubestellar-syncer-florin-32uaph9l". Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl apply -f "florin-syncer.yaml"

to apply it. Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl get deployment -n "kubestellar-syncer-florin-32uaph9l" kubestellar-syncer-florin-32uaph9l

to verify the syncer pod is running.
Current workspace is "root:espw".

And deploy the syncer in the florin cluster.

KUBECONFIG=~/.kube/config kubectl --context kind-florin apply -f florin-syncer.yaml 
namespace/kubestellar-syncer-florin-32uaph9l created
serviceaccount/kubestellar-syncer-florin-32uaph9l created
secret/kubestellar-syncer-florin-32uaph9l-token created
clusterrole.rbac.authorization.k8s.io/kubestellar-syncer-florin-32uaph9l created
clusterrolebinding.rbac.authorization.k8s.io/kubestellar-syncer-florin-32uaph9l created
secret/kubestellar-syncer-florin-32uaph9l created
deployment.apps/kubestellar-syncer-florin-32uaph9l created

Stage 2#

Placement and Where Resolving

Stage 2 creates two workloads, called "common" and "special", and lets the Where Resolver react. It has the following steps.

Create and populate the workload management workspace for the common workload#

One of the workloads is called "common", because it will go to both edge clusters. The other one is called "special".

In this example, each workload description goes in its own workload management workspace (WMW). Start by creating a WMW for the common workload, with the following commands.

kubectl ws root
kubectl kubestellar ensure wmw wmw-c

This is equivalent to creating that workspace and then entering it and creating the following two APIBinding objects.

apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: bind-espw
spec:
  reference:
    export:
      path: root:espw
      name: edge.kubestellar.io
---
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: bind-kube
spec:
  reference:
    export:
      path: "root:compute"
      name: kubernetes
sleep 15

Next, use kubectl to create the following workload objects in that workspace. The workload in this example is an Apache httpd server that serves up a very simple web page, conveyed via a Kubernetes ConfigMap that is mounted as a volume for the httpd pod.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: commonstuff
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: commonstuff
  name: httpd-htdocs
  annotations:
    edge.kubestellar.io/expand-parameters: "true"
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <body>
        This is a common web site.
        Running in %(loc-name).
      </body>
    </html>
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Customizer
metadata:
  namespace: commonstuff
  name: example-customizer
  annotations:
    edge.kubestellar.io/expand-parameters: "true"
replacements:
- path: "$.spec.template.spec.containers.0.env.0.value"
  value: '"env is %(env)"'
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  namespace: commonstuff
  name: commond
  annotations:
    edge.kubestellar.io/customizer: example-customizer
spec:
  selector: {matchLabels: {app: common} }
  template:
    metadata:
      labels: {app: common}
    spec:
      containers:
      - name: httpd
        env:
        - name: EXAMPLE_VAR
          value: example value
        image: library/httpd:2.4
        ports:
        - name: http
          containerPort: 80
          hostPort: 8081
          protocol: TCP
        volumeMounts:
        - name: htdocs
          readOnly: true
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: htdocs
        configMap:
          name: httpd-htdocs
          optional: false
EOF
sleep 10

Finally, use kubectl to create the following EdgePlacement object. Its "where predicate" (the locationSelectors array) has one label selector that matches both Location objects created earlier, thus directing the common workload to both edge clusters.

kubectl apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: edge-placement-c
spec:
  locationSelectors:
  - matchLabels: {"env":"prod"}
  downsync:
  - apiGroup: ""
    resources: [ configmaps ]
    namespaces: [ commonstuff ]
    objectNames: [ httpd-htdocs ]
  - apiGroup: apps
    resources: [ replicasets ]
    namespaces: [ commonstuff ]
  wantSingletonReportedState: true
  upsync:
  - apiGroup: "group1.test"
    resources: ["sprockets", "flanges"]
    namespaces: ["orbital"]
    names: ["george", "cosmo"]
  - apiGroup: "group2.test"
    resources: ["cogs"]
    names: ["william"]
EOF
sleep 10

Create and populate the workload management workspace for the special workload#

Use the following kubectl commands to create the WMW for the special workload.

kubectl ws root
kubectl kubestellar ensure wmw wmw-s

In this workload we will also demonstrate how to downsync objects whose kind is defined by a CustomResourceDefinition object. We will use the one from the Kubernetes documentation for CRDs, modified so that the resource it defines is in the category all. First, create the definition object with the following command.

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1
      # Each version can be enabled/disabled by Served flag.
      served: true
      # One and only one version must be marked as the storage version.
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ct
    categories:
    - all
EOF

Next, use the following command to wait for the apiserver to process that definition.

kubectl wait --for condition=Established crd crontabs.stable.example.com

Next, use kubectl to create the following workload objects in that workspace. The APIService object included here does not contribute to the httpd workload but is here to demonstrate that APIService objects can be downsynced.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: specialstuff
  labels: {special: "yes"}
  annotations: {just-for: fun}
---
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: my-new-cron-object
  namespace: specialstuff
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: specialstuff
  name: httpd-htdocs
  annotations:
    edge.kubestellar.io/expand-parameters: "true"
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <body>
        This is a special web site.
        Running in %(loc-name).
      </body>
    </html>
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Customizer
metadata:
  namespace: specialstuff
  name: example-customizer
  annotations:
    edge.kubestellar.io/expand-parameters: "true"
replacements:
- path: "$.spec.template.spec.containers.0.env.0.value"
  value: '"in %(env) env"'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: specialstuff
  name: speciald
  annotations:
    edge.kubestellar.io/customizer: example-customizer
spec:
  selector: {matchLabels: {app: special} }
  template:
    metadata:
      labels: {app: special}
    spec:
      containers:
      - name: httpd
        env:
        - name: EXAMPLE_VAR
          value: example value
        image: library/httpd:2.4
        ports:
        - name: http
          containerPort: 80
          hostPort: 8082
          protocol: TCP
        volumeMounts:
        - name: htdocs
          readOnly: true
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: htdocs
        configMap:
          name: httpd-htdocs
          optional: false
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1090.example.my
spec:
  group: example.my
  groupPriorityMinimum: 360
  service:
    name: my-service
    namespace: my-example
  version: v1090
  versionPriority: 42
EOF
sleep 10

Finally, use kubectl to create the following EdgePlacement object. Its "where predicate" (the locationSelectors array) has one label selector that matches only one of the Location objects created earlier, thus directing the special workload to just one edge cluster.

The "what predicate" explicitly includes the Namespace object named "specialstuff", which causes all of its desired state (including labels and annotations) to be downsynced. This contrasts with the common EdgePlacement, which does not explicitly mention the commonstuff namespace, relying on the implicit creation of namespaces as needed in the WECs.

kubectl apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: edge-placement-s
spec:
  locationSelectors:
  - matchLabels: {"env":"prod","extended":"yes"}
  downsync:
  - apiGroup: ""
    resources: [ configmaps ]
    namespaceSelectors:
    - matchLabels: {"special":"yes"}
  - apiGroup: apps
    resources: [ deployments ]
    namespaceSelectors:
    - matchLabels: {"special":"yes"}
    objectNames: [ speciald ]
  - apiGroup: apiregistration.k8s.io
    resources: [ apiservices ]
    objectNames: [ v1090.example.my ]
  - apiGroup: stable.example.com
    resources: [ crontabs ]
    namespaces: [ specialstuff ]
    objectNames: [ my-new-cron-object ]
  - apiGroup: ""
    resources: [ namespaces ]
    objectNames: [ specialstuff ]
  wantSingletonReportedState: true
  upsync:
  - apiGroup: "group1.test"
    resources: ["sprockets", "flanges"]
    namespaces: ["orbital"]
    names: ["george", "cosmo"]
  - apiGroup: "group3.test"
    resources: ["widgets"]
    names: ["*"]
EOF
sleep 10

Where Resolver#

In response to each EdgePlacement, the Where Resolver will create a corresponding SinglePlacementSlice object. These will indicate the following resolutions of the "where" predicates.

EdgePlacement      Resolved Where
edge-placement-c   florin, guilder
edge-placement-s   guilder

If you have deployed the KubeStellar core in a Kubernetes cluster then the where-resolver is running in a pod there. If instead you are running the core controllers as bare processes then you can use the following commands to launch the where-resolver; it requires the ESPW to be the current kcp workspace at start time.

kubectl ws root:espw
kubestellar-where-resolver &
sleep 10

The following commands wait until the where-resolver has done its job for the common and special EdgePlacement objects.

kubectl ws root:wmw-c
while ! kubectl get SinglePlacementSlice &> /dev/null; do
  sleep 10
done
kubectl ws root:wmw-s
while ! kubectl get SinglePlacementSlice &> /dev/null; do
  sleep 10
done

If things are working properly then you will see log lines like the following (among many others) in the where-resolver's log.

I0423 01:33:37.036752   11305 main.go:212] "Found APIExport view" exportName="edge.kubestellar.io" serverURL="https://192.168.58.123:6443/services/apiexport/7qkse309upzrv0fy/edge.kubestellar.io"
...
I0423 01:33:37.320859   11305 reconcile_on_location.go:192] "updated SinglePlacementSlice" controller="kubestellar-where-resolver" triggeringKind=Location key="apmziqj9p9fqlflm|florin" locationWorkspace="apmziqj9p9fqlflm" location="florin" workloadWorkspace="10l175x6ejfjag3e" singlePlacementSlice="edge-placement-c"
...
I0423 01:33:37.391772   11305 reconcile_on_location.go:192] "updated SinglePlacementSlice" controller="kubestellar-where-resolver" triggeringKind=Location key="apmziqj9p9fqlflm|guilder" locationWorkspace="apmziqj9p9fqlflm" location="guilder" workloadWorkspace="10l175x6ejfjag3e" singlePlacementSlice="edge-placement-c"

Check out a SinglePlacementSlice object as follows.

kubectl ws root:wmw-c
Current workspace is "root:wmw-c".

kubectl get SinglePlacementSlice -o yaml
apiVersion: v1
items:
- apiVersion: edge.kubestellar.io/v2alpha1
  destinations:
  - cluster: apmziqj9p9fqlflm
    locationName: florin
    syncTargetName: florin
    syncTargetUID: b8c64c64-070c-435b-b3bd-9c0f0c040a54
  - cluster: apmziqj9p9fqlflm
    locationName: guilder
    syncTargetName: guilder
    syncTargetUID: bf452e1f-45a0-4d5d-b35c-ef1ece2879ba
  kind: SinglePlacementSlice
  metadata:
    annotations:
      kcp.io/cluster: 10l175x6ejfjag3e
    creationTimestamp: "2023-04-23T05:33:37Z"
    generation: 4
    name: edge-placement-c
    ownerReferences:
    - apiVersion: edge.kubestellar.io/v2alpha1
      kind: EdgePlacement
      name: edge-placement-c
      uid: 199cfe1e-48d9-4351-af5c-e66c83bf50dd
    resourceVersion: "1316"
    uid: b5db1f9d-1aed-4a25-91da-26dfbb5d8879
kind: List
metadata:
  resourceVersion: ""

Also check out the SinglePlacementSlice objects in root:wmw-s. It should go similarly, but the destinations should include only the entry for guilder.
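
For example, mirroring the commands used above for root:wmw-c:

kubectl ws root:wmw-s
kubectl get SinglePlacementSlice -o yaml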

Stage 3#

Placement translation

In Stage 3, in response to the EdgePlacement and SinglePlacementSlice objects, the placement translator will copy the workload prescriptions into the mailbox workspaces and create SyncerConfig objects there.

If you have deployed the KubeStellar core as a workload in a Kubernetes cluster then the placement translator is running in a Pod there. If instead you are running the core controllers as bare processes then use the following commands to launch the placement translator; it requires the ESPW to be current at start time.

kubectl ws root:espw
placement-translator &
sleep 10

The following commands wait for the placement translator to get its job done for this example.

# wait until SyncerConfig, ReplicaSets and Deployments are ready
mbxws=($FLORIN_WS $GUILDER_WS)
for ii in "${mbxws[@]}"; do
  kubectl ws root:$ii
  # wait for SyncerConfig resource
  while ! kubectl get SyncerConfig the-one &> /dev/null; do
    sleep 10
  done
  echo "* SyncerConfig resource exists in mailbox $ii"
  # wait for ReplicaSet resource
  while ! kubectl get rs &> /dev/null; do
    sleep 10
  done
  echo "* ReplicaSet resource exists in mailbox $ii"
  # wait until ReplicaSet in mailbox
  while ! kubectl get rs -n commonstuff commond; do
    sleep 10
  done
  echo "* commonstuff ReplicaSet in mailbox $ii"
done
# check for deployment in guilder
while ! kubectl get deploy -A &> /dev/null; do
  sleep 10
done
echo "* Deployment resource exists"
while ! kubectl get deploy -n specialstuff speciald; do
  sleep 10
done
echo "* specialstuff Deployment in its mailbox"
# wait for crontab CRD to be established
while ! kubectl get crd crontabs.stable.example.com; do sleep 10; done
kubectl wait --for condition=Established crd crontabs.stable.example.com
echo "* CronTab CRD is established in its mailbox"
# wait for my-new-cron-object to be in its mailbox
while ! kubectl get ct -n specialstuff my-new-cron-object; do sleep 10; done
echo "* CronTab my-new-cron-object is in its mailbox"

You can check that the common workload's ReplicaSet objects got to their mailbox workspaces with the following command. It will list the two copies of that object, each with an annotation whose key is kcp.io/cluster and whose value is the kcp logicalcluster.Name of the mailbox workspace; those names appear in the "CLUSTER" column of the custom-columns listing near the end of the section above about the mailbox controller.

kubestellar-list-syncing-objects --api-group apps --api-kind ReplicaSet
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    edge.kubestellar.io/customizer: example-customizer
    kcp.io/cluster: 1y7wll1dz806h3sb
    ... (lots of other details) ...
  name: commond
  namespace: commonstuff
spec:
  ... (the customized spec) ...
status:
  ... (may be filled in by the time you look) ...

---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    edge.kubestellar.io/customizer: example-customizer
    kcp.io/cluster: 1najcltzt2nqax47
    ... (lots of other details) ...
  name: commond
  namespace: commonstuff
spec:
  ... (the customized spec) ...
status:
  ... (may be filled in by the time you look) ...

That display should show objects in two different mailbox workspaces; the following command checks that.

test $(kubestellar-list-syncing-objects --api-group apps --api-kind ReplicaSet | grep "^ *kcp.io/cluster: [0-9a-z]*$" | sort | uniq | wc -l) -ge 2

The various APIBinding and CustomResourceDefinition objects involved should also appear in the mailbox workspaces.

test $(kubestellar-list-syncing-objects --api-group apis.kcp.io --api-version v1alpha1 --api-kind APIBinding | grep -cw "name: bind-apps") -ge 2
kubestellar-list-syncing-objects --api-group apis.kcp.io --api-version v1alpha1 --api-kind APIBinding | grep -w "name: bind-kubernetes"
kubestellar-list-syncing-objects --api-group apiextensions.k8s.io --api-kind CustomResourceDefinition | fgrep -w "name: crontabs.stable.example.com"

The APIService of the special workload should also appear, along with some error messages about APIService not being known in the other mailbox workspaces.

kubestellar-list-syncing-objects --api-group apiregistration.k8s.io --api-kind APIService 2>&1 | grep -v "APIService.*the server could not find the requested resource" | fgrep -w "name: v1090.example.my"

The florin cluster gets only the common workload. Examine florin's SyncerConfig as follows, using the name of the mailbox workspace for florin that you stored in Stage 1 ($FLORIN_WS).

kubectl ws root:$FLORIN_WS
Current workspace is "root:1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1" (type root:universal).
kubectl get SyncerConfig the-one -o yaml
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncerConfig
metadata:
  annotations:
    kcp.io/cluster: 12299slctppnhjnn
  creationTimestamp: "2023-04-23T05:39:56Z"
  generation: 3
  name: the-one
  resourceVersion: "1323"
  uid: 8840fee6-37dc-407e-ad01-2ad59389d4ff
spec:
  namespaceScope: {}
  namespacedObjects:
  - apiVersion: v1
    group: ""
    objectsByNamespace:
    - names:
      - httpd-htdocs
      namespace: commonstuff
    resource: configmaps
  - apiVersion: v1
    group: apps
    objectsByNamespace:
    - names:
      - commond
      namespace: commonstuff
    resource: replicasets
  upsync:
  - apiGroup: group1.test
    names:
    - george
    - cosmo
    namespaces:
    - orbital
    resources:
    - sprockets
    - flanges
  - apiGroup: group2.test
    names:
    - william
    resources:
    - cogs
status: {}
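
If you would rather see a condensed view than the full YAML, a small yq query (assuming the mikefarah yq v4 listed in the prerequisites) can summarize which namespaced resources this SyncerConfig tells the syncer to downsync; this is an optional convenience, not a step in the example. For the SyncerConfig shown above it should print the following.

kubectl get SyncerConfig the-one -o yaml | yq '.spec.namespacedObjects[] | .resource + ": " + ([.objectsByNamespace[].names[]] | join(", "))'
configmaps: httpd-htdocs
replicasets: commond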

The guilder cluster gets both the common and special workloads. Examine guilder's SyncerConfig object and workloads as follows, using the mailbox workspace name that you stored in Stage 1.

kubectl ws root:$GUILDER_WS
Current workspace is "root:1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c" (type root:universal).

kubectl get SyncerConfig the-one -o yaml
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncerConfig
metadata:
  annotations:
    kcp.io/cluster: yk9a66vjms1pi8hu
  creationTimestamp: "2023-04-23T05:39:56Z"
  generation: 4
  name: the-one
  resourceVersion: "1325"
  uid: 3da056c7-0d5c-45a3-9d91-d04f04415f30
spec:
  clusterScope:
  - apiVersion: v1
    group: ""
    objects:
    - specialstuff
    resource: namespaces
  - apiVersion: v1
    group: apiextensions.k8s.io
    objects:
    - crontabs.stable.example.com
    resource: customresourcedefinitions
  - apiVersion: v1
    group: apiregistration.k8s.io
    objects:
    - v1090.example.my
    resource: apiservices
  namespaceScope: {}
  namespacedObjects:
  - apiVersion: v1
    group: apps
    objectsByNamespace:
    - names:
      - commond
      namespace: commonstuff
    resource: replicasets
  - apiVersion: v1
    group: stable.example.com
    objectsByNamespace:
    - names:
      - my-new-cron-object
      namespace: specialstuff
    resource: crontabs
  - apiVersion: v1
    group: apps
    objectsByNamespace:
    - names:
      - speciald
      namespace: specialstuff
    resource: deployments
  - apiVersion: v1
    group: ""
    objectsByNamespace:
    - names:
      - httpd-htdocs
      namespace: commonstuff
    - names:
      - httpd-htdocs
      namespace: specialstuff
    resource: configmaps
  upsync:
  - apiGroup: group3.test
    names:
    - '*'
    resources:
    - widgets
  - apiGroup: group1.test
    names:
    - george
    - cosmo
    namespaces:
    - orbital
    resources:
    - sprockets
    - flanges
  - apiGroup: group2.test
    names:
    - william
    resources:
    - cogs
status: {}
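
As a quick scripted check in the style of the earlier greps, you can confirm that the cluster-scoped part of guilder's SyncerConfig covers the special workload's CustomResourceDefinition and APIService.

kubectl get SyncerConfig the-one -o yaml | fgrep -w "crontabs.stable.example.com"
kubectl get SyncerConfig the-one -o yaml | fgrep -w "v1090.example.my"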

You can check for specific workload objects here with the following command.

kubectl get deployments,replicasets -A
NAMESPACE      NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
specialstuff   deployment.apps/speciald   0/0     1            0           12m

NAMESPACE     NAME                      DESIRED   CURRENT   READY   AGE
commonstuff   replicaset.apps/commond   0         1         1       7m4s
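
You can also confirm that the special workload's CronTab object is present in this mailbox workspace, just as the earlier wait for my-new-cron-object expected.

kubectl get crontabs -n specialstuff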

Stage 4

Syncer effects

In Stage 4, the edge syncer does its thing. Actually, it should have done so as soon as the relevant inputs became available in Stage 3. Now we examine what happened.

You can check that the workloads are running in the edge clusters as they should be.

The syncer does its thing between the florin cluster and its mailbox workspace. This is driven by the SyncerConfig object named the-one in that mailbox workspace.

The syncer does its thing between the guilder cluster and its mailbox workspace. This is driven by the SyncerConfig object named the-one in that mailbox workspace.
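
As an optional sanity check, you can confirm that each syncer is actually running in its edge cluster. The syncers were installed earlier from florin-syncer.yaml and guilder-syncer.yaml into a namespace named kubestellar-syncer-<cluster>-<suffix>; the suffix is generated at registration time and will differ in your environment (florin's shows up in the namespace listing below), so grep for the prefix rather than an exact name.

KUBECONFIG=~/.kube/config kubectl --context kind-florin get pods -A | grep kubestellar-syncer-florin
KUBECONFIG=~/.kube/config kubectl --context kind-guilder get pods -A | grep kubestellar-syncer-guilder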

Using the kubeconfig that kind modified, examine the florin cluster. Find just the commonstuff namespace and the commond ReplicaSet.

( KUBECONFIG=~/.kube/config
  let tries=1
  while ! kubectl --context kind-florin get ns commonstuff &> /dev/null; do
    if (( tries >= 30)); then
      echo 'The commonstuff namespace failed to appear in florin!' >&2
      exit 10
    fi
    let tries=tries+1
    sleep 10
  done
  kubectl --context kind-florin get ns
)
NAME                                 STATUS   AGE
commonstuff                          Active   6m51s
default                              Active   57m
kubestellar-syncer-florin-1t9zgidy   Active   17m
kube-node-lease                      Active   57m
kube-public                          Active   57m
kube-system                          Active   57m
local-path-storage                   Active   57m

sleep 15

KUBECONFIG=~/.kube/config kubectl --context kind-florin get deploy,rs -A | egrep 'NAME|stuff'
NAMESPACE                            NAME                                                 READY   UP-TO-DATE   AVAILABLE   AGE
NAMESPACE                            NAME                                                            DESIRED   CURRENT   READY   AGE
commonstuff                          replicaset.apps/commond                                         1         1         1       13m

Examine the guilder cluster. Find both workload namespaces, the Deployment, and both ReplicaSets.

sleep 15

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get ns | egrep NAME\|stuff
NAME                               STATUS   AGE
commonstuff                        Active   8m33s
specialstuff                       Active   8m33s

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get deploy,rs -A | egrep NAME\|stuff
NAMESPACE                             NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
specialstuff                          deployment.apps/speciald                              1/1     1            1           23m
NAMESPACE                             NAME                                                            DESIRED   CURRENT   READY   AGE
commonstuff                           replicaset.apps/commond                                         1         1         1       23m
specialstuff                          replicaset.apps/speciald-76cdbb69b5                             1         1         1       14s

Examine the APIService objects in the guilder cluster and find the one named v1090.example.my. It is broken because it refers to a Service object that we have not bothered to create.

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get apiservices | grep 1090
v1090.example.my                       my-example/my-service   False (ServiceNotFound)   2m39s
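
To see why it is unavailable in more detail, you can pull just the Available condition's message out of the object; this is only an optional inspection step, using a plain kubectl JSONPath query.

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get apiservice v1090.example.my -o jsonpath='{.status.conditions[?(@.type=="Available")].message}{"\n"}'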

See the crontab in the guilder cluster.

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get crontabs -n specialstuff
NAME                 AGE
my-new-cron-object   37m
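
If you want to confirm that the whole object arrived intact, not just that it exists, dump it and compare its spec with what you created in the special workload workspace.

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get crontabs -n specialstuff my-new-cron-object -o yaml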

Examining the common workload in the guilder cluster, for example, will show that the replacement-style customization happened.

sleep 15

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get rs -n commonstuff commond -o yaml
...
      containers:
      - env:
        - name: EXAMPLE_VAR
          value: env is prod
        image: library/httpd:2.4
        imagePullPolicy: IfNotPresent
        name: httpd
...
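
A more targeted check of the same customization, assuming the single-container layout shown above, extracts just the substituted value; it should print env is prod.

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get rs -n commonstuff commond -o jsonpath='{.spec.template.spec.containers[0].env[0].value}{"\n"}'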

Check that the common workload on the florin cluster is working.

let tries=1
while ! curl http://localhost:8094 &> /dev/null; do
  if (( tries >= 30 )); then
    echo 'The common workload failed to come up on florin!' >&2
    exit 10
  fi
  let tries=tries+1
  sleep 10
done
curl http://localhost:8094
<!DOCTYPE html>
<html>
  <body>
    This is a common web site.
    Running in florin.
  </body>
</html>

Check that the special workload on the guilder cluster is working.

let tries=1
while ! curl http://localhost:8097 &> /dev/null; do
  if (( tries >= 30 )); then
    echo 'The special workload failed to come up on guilder!' >&2
    exit 10
  fi
  let tries=tries+1
  sleep 10
done
curl http://localhost:8097
<!DOCTYPE html>
<html>
  <body>
    This is a special web site.
    Running in guilder.
  </body>
</html>

Check that the common workload on the guilder cluster is working.

let tries=1
while ! curl http://localhost:8096 &> /dev/null; do
  if (( tries >= 30 )); then
    echo 'The common workload failed to come up on guilder!' >&2
    exit 10
  fi
  let tries=tries+1
  sleep 10
done
curl http://localhost:8096
<!DOCTYPE html>
<html>
  <body>
    This is a common web site.
    Running in guilder.
  </body>
</html>

Stage 5

Singleton reported state return

The two EdgePlacement objects above assert that the expected number of executing copies of their matching workload objects is 1, and they request return of reported state to the workload management workspace when the number of executing copies is exactly 1.

For the common workload, that assertion is not correct: the number of executing copies should be 2. The assertion causes the actual number of executing copies to be reported. Check that the reported number is 2.

kubectl ws root:wmw-c
kubectl get rs -n commonstuff commond -o yaml | grep 'kubestellar.io/executing-count: "2"' || { kubectl get rs -n commonstuff commond -o yaml; false; }

For the special workload, the number of executing copies should be 1. Check that the reported number agrees.

kubectl ws root:wmw-s
kubectl get deploy -n specialstuff speciald -o yaml | grep 'kubestellar.io/executing-count: "1"' || { kubectl get deploy -n specialstuff speciald -o yaml; false; }
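
The executing-count annotation can take a little while to appear after the syncers report back. If either check above fails on the first try, a short retry loop in the style of the earlier waits, sketched here for the special workload, may help.

let tries=1
while ! kubectl get deploy -n specialstuff speciald -o yaml | grep -q 'kubestellar.io/executing-count: "1"'; do
  if (( tries >= 10 )); then
    echo 'The executing-count annotation did not reach 1 for speciald!' >&2
    exit 10
  fi
  let tries=tries+1
  sleep 10
done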

Look at the status section of the "speciald" Deployment and see that it has been filled in with the information from the guilder cluster.

kubectl get deploy -n specialstuff speciald -o yaml

The current status might not be there yet. The following command waits for a status that reports one "ready" special workload pod.

let count=1
while true; do
    rsyaml=$(kubectl get deploy -n specialstuff speciald -o yaml)
    if grep 'readyReplicas: 1' <<<"$rsyaml"
    then break
    fi
    echo ""
    echo "Got:"
    cat <<<"$rsyaml"
    if (( count > 5 )); then
        echo 'Giving up!' >&2
        exit 10
    fi
    sleep 15
    let count=count+1
done
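
If your kubectl supports JSONPath waits (added around v1.23, within this example's expected range), an equivalent but shorter wait is possible; this is just a sketch of an alternative, not what the example itself runs.

kubectl wait deploy/speciald -n specialstuff --for=jsonpath='{.status.readyReplicas}'=1 --timeout=120s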

Status Summarization (aspirational)

Summarization for special

The status summarizer, driven by the EdgePlacement and SinglePlacementSlice for the special workload, creates a status summary object in the specialstuff namespace in the special workload workspace holding a summary of the corresponding Deployment objects. In this case there is just one such object, in the mailbox workspace for the guilder cluster.

Summarization for common

The status summarizer, driven by the EdgePlacement and SinglePlacementSlice for the common workload, creates a status summary object in the commonstuff namespace in the common workload workspace holding a summary of the corresponding Deployment objects. Those are the commond Deployment objects in the two mailbox workspaces.

Teardown the environment

To remove the example usage, delete the IMW, the WMWs, and the kind clusters by running the following commands:

rm florin-syncer.yaml guilder-syncer.yaml || true
kubectl ws root
kubectl delete workspace imw1
kubectl delete workspace $FLORIN_WS
kubectl delete workspace $GUILDER_WS
kubectl kubestellar remove wmw wmw-c
kubectl kubestellar remove wmw wmw-s
kind delete cluster --name florin
kind delete cluster --name guilder

Teardown of KubeStellar depends on which style of deployment was used.

Teardown bare processes

The following command will stop whatever KubeStellar controllers are running.

kubestellar stop

Stop and uninstall KubeStellar and kcp with the following command:

remove-kubestellar

Teardown Kubernetes workload

With kubectl configured to manipulate the hosting cluster, the following command will remove the workload consisting of kcp and KubeStellar.

helm delete kubestellar
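
To confirm that the release is gone, you can list the remaining Helm releases; the release name kubestellar here matches the one deleted above and is assumed to be the one used at install time.

helm list -A | grep kubestellar || echo "kubestellar release removed"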