
KubeStellar Placement Translator

docs-ecutable - placement-translator   

Required Packages for running and using KubeStellar:

You will need the following tools to deploy and use KubeStellar. Suggested installation commands for several environments (Homebrew on macOS, apt-based Linux, yum/dnf-based Linux, Chocolatey on Windows, and WSL) are given below; use the set that matches your system.

  • curl (omitted from most OS-specific instructions)

  • jq

  • yq

  • kubectl (version range expected: 1.23-1.25)

  • helm (required when deploying as workload)

If you intend to build kubestellar from source you will also need

  • go (version 1.19 or later required; 1.19 recommended) - go releases: https://go.dev/dl
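
As a quick sanity check that the basic tools are installed and on your PATH, each of the following should print version information (exact output varies by tool and version; this is purely illustrative):

curl --version | head -1
jq --version
yq --version
kubectl version --client
helm version
go version   # only needed if you build kubestellar from source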

jq - https://stedolan.github.io/jq/download/
brew install jq
yq - https://github.com/mikefarah/yq#install
brew install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
brew install kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
brew install helm
go (only required if you build kubestellar from source)

  1. Download the package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.

  2. Open the package file you downloaded and follow the prompts to install Go. The package installs the Go distribution to /usr/local/go. The package should put the /usr/local/go/bin directory in your PATH environment variable. You may need to restart any open Terminal sessions for the change to take effect.

  3. Verify that you've installed Go by opening a command prompt and typing the following command: $ go version. Confirm that the command prints the installed version of Go.

jq - https://stedolan.github.io/jq/download/
sudo apt-get install jq
yq - https://github.com/mikefarah/yq#install
sudo snap install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
go (only required if you build kubestellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

jq - https://stedolan.github.io/jq/download/
sudo apt-get install jq
yq - https://github.com/mikefarah/yq#install
sudo apt-get install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
go (only required if you build kubestellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

jq - https://stedolan.github.io/jq/download/
yum -y install jq
yq - https://github.com/mikefarah/yq#install
# easiest to install with snap
snap install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
# for ARM64 / aarch64
[ $(uname -m) = aarch64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
dnf install helm
go (only required if you build kubestellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

Chocolatey - https://chocolatey.org/install#individual
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
curl
choco install curl -y
jq - https://stedolan.github.io/jq/download/
choco install jq -y
yq - https://github.com/mikefarah/yq#install
choco install yq -y
kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/ (version range expected: 1.23-1.25)
curl.exe -LO "https://dl.k8s.io/release/v1.27.2/bin/windows/amd64/kubectl.exe"    
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
choco install kubernetes-helm
go (only required if you build kubestellar from source)
visit https://go.dev/doc/install for latest instructions

  1. Download the Go 1.19 MSI package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.

  2. Open the MSI file you downloaded and follow the prompts to install Go.

    By default, the installer will install Go to Program Files or Program Files (x86). You can change the location as needed. After installing, you will need to close and reopen any open command prompts so that changes to the environment made by the installer are reflected at the command prompt.

  3. Verify that you've installed Go:

    1. In Windows, click the Start menu.

    2. In the menu's search box, type cmd, then press the Enter key.

    3. In the Command Prompt window that appears, type the following command: $ go version

    4. Confirm that the command prints the installed version of Go.

How to install pre-requisites for a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution

(Tested on an Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz with 32GB RAM, a 64-bit operating system, and an x64-based processor, running Windows 11 Enterprise.)

1. If you're using a VPN, turn it off

2. Install Ubuntu into WSL

2.0 If WSL is not yet installed, open a PowerShell administrator window and run the following:
wsl --install
2.1 Reboot your system.

2.2 In a Windows command terminal, run the following to list all the Linux distributions that are available online:
wsl -l -o
2.3 Select a Linux distribution and install it into WSL:
wsl --install -d Ubuntu-22.04
You will see something like:
Installing, this may take a few minutes...
Please create a default UNIX user account. The username does not need to match your Windows username.
For more information visit: https://aka.ms/wslusers
Enter new UNIX username:

2.4 Enter your new username and password at the prompts, and you will eventually see something like:
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.10.102.1-microsoft-standard-WSL2 x86_64)

2.5 Click on the Windows "Start" icon and type the name of your distribution into the search box. Your new Linux distribution should appear as a local "App". You can pin it to the Windows task bar or to Start for your future convenience.
Start a VM using your distribution by clicking on the App.

3. Install pre-requisites into your new VM
3.1 Update and upgrade the apt packages
sudo apt-get update
sudo apt-get upgrade

3.2 Install golang
wget https://golang.org/dl/go1.19.linux-amd64.tar.gz
sudo tar -zxvf go1.19.linux-amd64.tar.gz -C /usr/local
echo export GOROOT=/usr/local/go | sudo tee -a /etc/profile
echo 'export PATH=$PATH:/usr/local/go/bin' | sudo tee -a /etc/profile
source /etc/profile
go version

3.3 Install ko (but don't do ko set action step)
go install github.com/google/ko@latest

3.4 Install gcc
Either run this:
sudo apt install build-essential
or this:
sudo apt-get update
sudo apt install gcc
gcc --version

3.5 Install make (if you installed build-essential this may already be installed)
sudo apt install make

3.6 Install jq
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y jq
jq --version

3.7 install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
3.8 install helm (required when deploying as workload)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Required Packages for the example usage:

You will need the following tools for the example usage of KubeStellar in this quickstart. Suggested installation commands for several environments are given below; use the set that matches your system.

docker - https://docs.docker.com/engine/install/
brew install docker
open -a Docker
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
brew install kind

docker - https://docs.docker.com/engine/install/
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Enable rootless usage of Docker (requires relogin) - https://docs.docker.com/engine/security/rootless/
sudo apt-get install -y dbus-user-session # *** Relogin after this
sudo apt-get install -y uidmap
dockerd-rootless-setuptool.sh install
systemctl --user restart docker.service
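
As a quick check that rootless Docker is working (purely illustrative), docker info should mention rootless among its security options and a throwaway container should run:

docker info | grep -i rootless
docker run --rm hello-world
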
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-$(dpkg --print-architecture) && chmod +x ./kind && sudo mv ./kind /usr/local/bin

docker - https://docs.docker.com/engine/install/
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Enable rootless usage of Docker (requires relogin) - https://docs.docker.com/engine/security/rootless/
sudo apt-get install -y dbus-user-session # *** Relogin after this
sudo apt-get install -y fuse-overlayfs
sudo apt-get install -y slirp4netns
dockerd-rootless-setuptool.sh install
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-$(dpkg --print-architecture) && chmod +x ./kind && sudo mv ./kind /usr/local/bin

docker - https://docs.docker.com/engine/install/
yum -y install epel-release && yum -y install docker && systemctl enable --now docker && systemctl status docker
Enable rootless usage of Docker by following the instructions at https://docs.docker.com/engine/security/rootless/
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-arm64 
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind

docker - https://docs.docker.com/engine/install/
choco install docker -y
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.14.0/kind-windows-amd64

How to install docker and kind into a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution

1.0 Start a VM terminal by clicking on the App you configured using the instructions in the General pre-requisites described above.

2.0 Install docker
The installation instructions from Docker are not sufficient to get Docker working with WSL.

2.1 Follow the instructions at https://docs.docker.com/engine/install/ubuntu/ to install Docker.

Here are some additional steps you will need to take:

2.2 Ensure that /etc/wsl.conf is configured so that systemd will run on booting.
If /etc/wsl.conf does not contain [boot] systemd=true, then edit /etc/wsl.conf as follows:
sudo vi /etc/wsl.conf
Insert
[boot]
systemd=true

2.3 Edit /etc/sudoers: it is strongly recommended to not add directives directly to /etc/sudoers, but instead to put them in files in /etc/sudoers.d which are auto-included. So make/modify a new file via
sudo vi /etc/sudoers.d/docker
Insert
# Docker daemon specification
<your user account> ALL=(ALL) NOPASSWD: /usr/bin/dockerd

2.4 Add your user to the docker group
sudo usermod -aG docker $USER

2.5 If dockerd is already running, then stop it and restart it as follows (note: the new dockerd instance will be running in the foreground):
sudo systemctl stop docker
sudo dockerd &

2.5.1 If you encounter the iptables issue described at https://github.com/microsoft/WSL/issues/6655, the following commands will fix it:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo dockerd & 

3. You will now need to open new terminals to access the VM, since dockerd is running in the foreground of this one.

3.1 In your new terminal, install kind
wget -nv https://github.com/kubernetes-sigs/kind/releases/download/v0.17.0/kind-linux-$(dpkg --print-architecture) -O kind 
sudo install -m 0755 kind /usr/local/bin/kind 
rm kind 
kind version

This document is 'docs-ecutable' - you can 'run' this document, just like we do in our testing, on your local environment

git clone -b release-0.14 https://github.com/kubestellar/kubestellar
cd kubestellar
make MANIFEST="'docs/content/common-subs/pre-req.md','docs/content/Coding Milestones/PoC2023q1/placement-translator.md'" docs-ecutable
# done? remove everything
make MANIFEST="docs/content/common-subs/remove-all.md" docs-ecutable
cd ..
rm -rf kubestellar

The placement translator runs in the center and translates EMC placement problems into edge sync problems.

Status

The placement translator is a work in progress. It maintains SyncerConfig objects and downsynced objects in mailbox workspaces, albeit with limitations discussed in the next section.

Additional Design Details

The placement translator maintains one SyncerConfig object in each mailbox workspace. That object is named the-one. Other SyncerConfig objects may exist; the placement translator ignores them.
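
For example, after the placement translator has run you could look at that object in a mailbox workspace along these lines (a sketch; it assumes the mailbox workspace name is in $GUILDER_WS, as computed later in this document, and that the SyncerConfig resource is served there under the edge.kubestellar.io API group):

kubectl ws root:espw:$GUILDER_WS
kubectl get syncerconfigs.edge.kubestellar.io the-one -o yaml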

The placement translator responds to each resource discovery independently. This makes the behavior jaggy and the logging noisy. For example, it means that the SyncerConfig objects may be rewritten for each resource discovery. But eventually the right things happen.

The placement translator does not yet attempt the full prescribed technique for picking the API version to use when reading and writing. Currently it looks only at the preferred version reported in each workload management workspace, and only succeeds if they all agree.

One detail left vague in the design outline is what constitutes the "desired state" that propagates from center to edge. The easy obvious answer is the "spec" section of downsynced objects, but that answer ignores some issues. Following is the current full answer.

When creating a workload object in a mailbox workspace, the placement translator uses a copy of the object read from the workload management workspace but with the following changes.

  • The metadata.managedFields is emptied.
  • The metadata.resourceVersion is emptied.
  • The metadata.selfLink is emptied.
  • The metadata.uid is emptied.
  • The metadata.ownerReferences is emptied. (Doing better would require tracking UID mappings from WMW to MBWS.)
  • In metadata.labels, edge.kubestellar.io/projected=yes is added.
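
To make that cleaning concrete, here is a rough jq sketch of the same transformation applied to an object fetched from a WMW (purely illustrative; the real logic runs inside the placement translator, and the commonstuff ReplicaSet used here is created later in this document):

kubectl get replicaset commond -n commonstuff -o json | jq '
  .metadata.managedFields = []
  | .metadata.resourceVersion = ""
  | .metadata.selfLink = ""
  | .metadata.uid = ""
  | .metadata.ownerReferences = []
  | .metadata.labels["edge.kubestellar.io/projected"] = "yes"
'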

The placement translator does not react to changes to the workload objects in the mailbox workspace.

When downsyncing desired state and the placement translator finds the object already exists in the mailbox workspace, the placement translator does an HTTP PUT (Update in the k8s.io/client-go/dynamic package) using an object value --- called below the "destination" object --- constructed by reading the object from the MBWS and making the following changes.

  • For top-level sections in the source object other than apiVersion, kind, metadata, and status, the destination object gets the same contents for that section.
  • If the source object has some annotations then they are merged into the destination object's annotations as follows.
    • A destination annotation that has no corresponding annotation in the source is unchanged.
    • A destination annotation that has the same value as the corresponding annotation in the source is unchanged.
    • A "system" annotation is unchanged. The system annotations are those whose key (a) starts with kcp.io/ or with a prefix ending in .kcp.io/ and (b) does not start with edge.kubestellar.io/.
  • The source object's labels are merged into the destination object using the same rules as for annotations, and edge.kubestellar.io/projected is set to yes.
  • The remainder of the metadata is unchanged.

Among the objects --- other than Namespace objects --- that exist in a mailbox workspace and whose API GroupResource has been relevant to the placement translator since it started, those that carry the edge.kubestellar.io/projected=yes label but are not currently desired are deleted. The exclusion for Namespace objects is there because the placement translator does not take full ownership of them; rather it takes the position that there might be other parties that create Namespace objects or rely on their existence.
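
For example, while a mailbox workspace is the current workspace you can list projected objects of a few kinds by selecting on that label (illustrative only):

kubectl get configmaps,deployments,replicasets -A -l edge.kubestellar.io/projected=yes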

Usage

The placement translator needs two kube client configurations. One points to the edge service provider workspace and provides authority to (a) read the APIExport view of the edge API and (b) write into the mailbox workspaces. The other points to the kcp server base (i.e., does not identify a particular logical cluster nor *) and is authorized to read all clusters. In the kubeconfig created by kcp start that is satisfied by the context named system:admin.

The command line flags, beyond the basics, are as follows. For a string parameter, if no default is explicitly stated then the default is the empty string, which usually means "not specified here". For both kube client configurations, the usual rules apply: first consider command line parameters, then $KUBECONFIG, then ~/.kube/config.

      --allclusters-cluster string       The name of the kubeconfig cluster to use for access to all clusters
      --allclusters-context string       The name of the kubeconfig context to use for access to all clusters (default "system:admin")
      --allclusters-kubeconfig string    Path to the kubeconfig file to use for access to all clusters
      --allclusters-user string          The name of the kubeconfig user to use for access to all clusters

      --espw-cluster string              The name of the kubeconfig cluster to use for access to the edge service provider workspace
      --espw-context string              The name of the kubeconfig context to use for access to the edge service provider workspace
      --espw-kubeconfig string           Path to the kubeconfig file to use for access to the edge service provider workspace
      --espw-user string                 The name of the kubeconfig user to use for access to the edge service provider workspace

      --root-cluster string              The name of the kubeconfig cluster to use for access to root workspace
      --root-context string              The name of the kubeconfig context to use for access to root workspace (default "root")
      --root-kubeconfig string           Path to the kubeconfig file to use for access to root workspace
      --root-user string                 The name of the kubeconfig user to use for access to root workspace

      --server-bind-address ipport       The IP address with port at which to serve /metrics and /debug/pprof/ (default :10204)
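
For example, a minimal invocation that relies on the kubeconfig written by kcp start for both client configurations might look like the following (illustrative; the system:admin context is spelled out here only to show the flag, since it is already the default):

kubectl ws root:espw
placement-translator --allclusters-context system:admin -v=2 &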

Try It

The nascent placement translator can be exercised following the scenario in example1. You will need to run the where resolver and mailbox controller long enough for them to create what this scenario calls for, but they can be terminated after that.

In boxes-and-arrows terms: two kind clusters exist, named florin and guilder. The Inventory Management workspace contains two pairs of SyncTarget and Location objects. The Edge Service Provider workspace contains the PoC controllers; the mailbox controller reads the SyncTarget objects and creates two mailbox workspaces.

Stage 1 creates the infrastructure and the edge service provider workspace (ESPW) and lets that react to the inventory. Then the KubeStellar syncers are deployed, in the edge clusters and configured to work with the corresponding mailbox workspaces. This stage has the following steps.

Create two kind clusters.

This example uses two kind clusters as edge clusters. We will call them "florin" and "guilder".

This example uses extremely simple workloads, which use hostPort networking in Kubernetes. To make those ports easily reachable from your host, this example uses an explicit kind configuration for each edge cluster.

For the florin cluster, which will get only one workload, create a file named florin-config.yaml with the following contents. In a kind config file, containerPort is about the container that is also a host (a Kubernetes node), while the hostPort is about the host that hosts that container.

cat > florin-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8081
    hostPort: 8094
EOF

For the guilder cluster, which will get two workloads, create a file named guilder-config.yaml with the following contents. The workload that uses hostPort 8081 goes in both clusters, while the workload that uses hostPort 8082 goes only in the guilder cluster.

cat > guilder-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8081
    hostPort: 8096
  - containerPort: 8082
    hostPort: 8097
EOF

Finally, create the two clusters with the following two commands, paying attention to $KUBECONFIG and, if that's empty, ~/.kube/config: kind create will inject/replace the relevant "context" in your active kubeconfig.

kind create cluster --name florin --config florin-config.yaml
kind create cluster --name guilder --config guilder-config.yaml
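
As a quick check that both clusters are up and that kind added the expected contexts (kind-florin and kind-guilder), you might run the following.

kubectl --context kind-florin get nodes
kubectl --context kind-guilder get nodes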

Deploy kcp and KubeStellar

You need kcp and KubeStellar and can deploy them in either of two ways: as bare processes on whatever host you are using to run this example, or as workload in a Kubernetes cluster (an OpenShift cluster qualifies). Do one or the other, not both.

KubeStellar only works with release v0.11.0 of kcp. To downsync ServiceAccount objects you will need a patched version of that in order to get the denaturing of them as discussed in the design outline.

Deploy kcp and KubeStellar as bare processes

Start kcp

The following commands download the kcp server and kubectl-kcp plugins appropriate for your OS and instruction set architecture, unpack them, and put their bin directory on your $PATH.

rm -rf kcp
mkdir kcp
pushd kcp
(
  set -x
  case "$OSTYPE" in
      linux*)   os_type="linux" ;;
      darwin*)  os_type="darwin" ;;
      *)        echo "Unsupported operating system type: $OSTYPE" >&2
                false ;;
  esac
  case "$HOSTTYPE" in
      x86_64*)  arch_type="amd64" ;;
      aarch64*) arch_type="arm64" ;;
      arm64*)   arch_type="arm64" ;;
      *)        echo "Unsupported architecture type: $HOSTTYPE" >&2
                false ;;
  esac
  kcp_version=v0.11.0
  trap "rm kcp.tar.gz kcp-plugins.tar.gz" EXIT
  curl -SL -o kcp.tar.gz "https://github.com/kubestellar/kubestellar/releases/download/v0.12.0/kcp_0.11.0_${os_type}_${arch_type}.tar.gz"
  curl -SL -o kcp-plugins.tar.gz "https://github.com/kcp-dev/kcp/releases/download/${kcp_version}/kubectl-kcp-plugin_${kcp_version//v}_${os_type}_${arch_type}.tar.gz"
  tar -xzf kcp-plugins.tar.gz
  tar -xzf kcp.tar.gz
)
export PATH=$(pwd)/bin:$PATH

Running the kcp server creates a hidden subdirectory named .kcp to hold all sorts of state related to the server. If you have run it before and want to start over from scratch then you should rm -rf .kcp first.

Use the following commands to: (a) run the kcp server in a forked command, (b) update your KUBECONFIG environment variable to configure kubectl to use the kubeconfig produced by the kcp server, and (c) wait for the kcp server to get through some initialization. The choice of -v=3 for the kcp server makes it log a line for every HTTP request (among other things).

kcp start -v=3 &> /tmp/kcp.log &
export KUBECONFIG=$(pwd)/.kcp/admin.kubeconfig
popd
# wait until KCP is ready checking availability of ws resource
while ! kubectl ws tree &> /dev/null; do
  sleep 10
done

Note that you now care about two different kubeconfig files: the one that you were using earlier, which holds the contexts for your kind clusters, and the one that the kcp server creates. The remainder of this document assumes that your kind cluster contexts are in ~/.kube/config.
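
To keep the two kubeconfig files straight, you can list the contexts in each; for example:

KUBECONFIG=~/.kube/config kubectl config get-contexts
kubectl config get-contexts   # uses the kcp admin.kubeconfig exported above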

Get KubeStellar

You will need a local copy of KubeStellar. You can either use the pre-built archive (containing executables and config files) from a release or get any desired version from GitHub and build.

Use pre-built archive

Fetch the archive for your operating system and instruction set architecture as follows, in which $kubestellar_version is your chosen release of KubeStellar (see the releases on GitHub) and $os_type and $arch_type are chosen according to the list of "assets" for your chosen release.
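
For example, hypothetically picking the v0.14.0 release on a Linux amd64 machine, the variables might be set as follows (check the asset list of your chosen release for the exact strings):

export kubestellar_version=v0.14.0
export os_type=linux      # or darwin
export arch_type=amd64    # or arm64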

curl -SL -o kubestellar.tar.gz "https://github.com/kubestellar/kubestellar/releases/download/${kubestellar_version}/kubestellar_${kubestellar_version}_${os_type}_${arch_type}.tar.gz"
tar xzf kubestellar.tar.gz
export PATH=$PWD/bin:$PATH
Get from GitHub

You can get the latest version from GitHub with the following command, which will get you the default branch (which is named "main"); add -b $branch to the git command in order to get a different branch.

git clone https://github.com/kubestellar/kubestellar
cd kubestellar

Use the following commands to build and add the executables to your $PATH.

make build
export PATH=$(pwd)/bin:$PATH

In the following exhibited command lines, the commands described as "KubeStellar commands" and the commands that start with kubectl kubestellar rely on the KubeStellar bin directory being on the $PATH. Alternatively you could invoke them with explicit pathnames. The kubectl plugin lines use fully specific executables (e.g., kubectl kubestellar prep-for-syncer corresponds to bin/kubectl-kubestellar-prep_for_syncer).

Initialize the KubeStellar platform as bare processes

In this step KubeStellar creates and populates the Edge Service Provider Workspace (ESPW), which exports the KubeStellar API, and also augments the root:compute workspace from kcp TMC as needed here. That augmentation consists of adding authorization to update the relevant /status and /scale subresources (missing in kcp TMC) and extending the supported subset of the Kubernetes API for managing containerized workloads from the four resources built into kcp TMC (Deployment, Pod, Service, and Ingress) to the other ones that are meaningful in KubeStellar.

kubestellar init

Deploy kcp and KubeStellar as a workload in a Kubernetes cluster

(This style of deployment requires release v0.6 or later of KubeStellar.)

You need a Kubernetes cluster; see the documentation for kubectl kubestellar deploy for more information.

You will need a domain name that, on each of your clients, resolves to an IP address that the client can use to open a TCP connection to the Ingress controller's listening socket.

You will need the kcp kubectl plugins. See the "Start kcp" section above for instructions on how to get all of the kcp executables.

You will need to get a build of KubeStellar. See above.

To do the deployment and prepare to use it you will use the commands described below. These require your shell to be in a state where kubectl manipulates the hosting cluster (the Kubernetes cluster into which you want to deploy kcp and KubeStellar), either by virtue of having set your KUBECONFIG envar appropriately, or by putting the relevant contents in ~/.kube/config, or by passing --kubeconfig explicitly on the following command lines.

Use the kubectl kubestellar deploy command to do the deployment.

Then use the kubectl kubestellar get-external-kubeconfig command to put into a file the kubeconfig that you will use as a user of kcp and KubeStellar. Do not overwrite the kubeconfig file for your hosting cluster. But do update your KUBECONFIG envar setting or remember to pass the new file with --kubeconfig on the command lines when using kcp or KubeStellar. For example, you might use the following commands to fetch and start using that kubeconfig file; the first assumes that you deployed the core into a Kubernetes namespace named "kubestellar".

kubectl kubestellar get-external-kubeconfig -n kubestellar -o kcs.kubeconfig
export KUBECONFIG=$(pwd)/kcs.kubeconfig

Note that you now care about two different kubeconfig files: the one that you were using earlier, which holds the contexts for your kind clusters, and the one that you just fetched and started using for working with the KubeStellar interface. The remainder of this document assumes that your kind cluster contexts are in ~/.kube/config.

Create SyncTarget and Location objects to represent the florin and guilder clusters

Use the following two commands to put inventory objects in the IMW at root:imw1 that was automatically created during deployment of KubeStellar. They label both florin and guilder with env=prod, and also label guilder with extended=yes.

kubectl ws root:imw1
kubectl kubestellar ensure location florin  loc-name=florin  env=prod
kubectl kubestellar ensure location guilder loc-name=guilder env=prod extended=yes
echo "decribe the florin location object"
kubectl describe location.edge.kubestellar.io florin

Those two script invocations are equivalent to creating the following four objects plus the kcp APIBinding objects that import the definition of the KubeStellar API.

apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncTarget
metadata:
  name: florin
  labels:
    id: florin
    loc-name: florin
    env: prod
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Location
metadata:
  name: florin
  labels:
    loc-name: florin
    env: prod
spec:
  resource: {group: edge.kubestellar.io, version: v2alpha1, resource: synctargets}
  instanceSelector:
    matchLabels: {id: florin}
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncTarget
metadata:
  name: guilder
  labels:
    id: guilder
    loc-name: guilder
    env: prod
    extended: "yes"
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Location
metadata:
  name: guilder
  labels:
    loc-name: guilder
    env: prod
    extended: "yes"
spec:
  resource: {group: edge.kubestellar.io, version: v2alpha1, resource: synctargets}
  instanceSelector:
    matchLabels: {id: guilder}

That script also deletes the Location named default, which is not used in this PoC, if it shows up.
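
You can confirm the resulting inventory while root:imw1 is the current workspace; the following listing should show florin and guilder and no Location named default.

kubectl get synctargets.edge.kubestellar.io
kubectl get locations.edge.kubestellar.io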

Continue to follow the steps until the start of Stage 3 of the exercise.

The mailbox controller

The mailbox controller is one of the central controllers of KubeStellar. If you have deployed the KubeStellar core as Kubernetes workload then this controller is already running in a pod in your hosting cluster. If instead you are running these controllers as bare processes then launch this controller as follows.

kubectl ws root:espw
mailbox-controller -v=2 &
sleep 10

This controller is in charge of maintaining the collection of mailbox workspaces, which are an implementation detail not intended for user consumption. You can use the following command to wait for the appearance of the mailbox workspaces implied by the florin and guilder SyncTarget objects that you made earlier.

kubectl ws root
while [ $(kubectl ws tree | grep "\-mb\-" | wc -l) -ne 2 ]; do
  sleep 10
done

If it is working correctly, lines like the following will appear in the controller's log (which is being written into your shell if you ran the controller as a bare process above, otherwise you can fetch as directed).

...
I0721 17:37:10.186848  189094 main.go:206] "Found APIExport view" exportName="edge.kubestellar.io" serverURL="https://10.0.2.15:6443/services/apiexport/cseslli1ddit3sa5/edge.kubestellar.io"
...
I0721 19:17:21.906984  189094 controller.go:300] "Created APIBinding" worker=1 mbwsName="1d55jhazpo3d3va6-mb-551bebfd-b75e-47b1-b2e0-ff0a4cb7e006" mbwsCluster="32x6b03ixc49cj48" bindingName="bind-edge" resourceVersion="1247"
...
I0721 19:18:56.203057  189094 controller.go:300] "Created APIBinding" worker=0 mbwsName="1d55jhazpo3d3va6-mb-732cf72a-1ca9-4def-a5e7-78fd0e36e61c" mbwsCluster="q31lsrpgur3eg9qk" bindingName="bind-edge" resourceVersion="1329"
^C

You need a -v setting of 2 or numerically higher to get log messages about individual mailbox workspaces.

A mailbox workspace name is distinguished by its -mb- separator. You can get a listing of those mailbox workspaces as follows.

kubectl ws root
kubectl get Workspaces
NAME                                                       TYPE          REGION   PHASE   URL                                                     AGE
1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1   universal              Ready   https://192.168.58.123:6443/clusters/1najcltzt2nqax47   50s
1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c   universal              Ready   https://192.168.58.123:6443/clusters/1y7wll1dz806h3sb   50s
compute                                                    universal              Ready   https://172.20.144.39:6443/clusters/root:compute        6m8s
espw                                                       organization           Ready   https://172.20.144.39:6443/clusters/root:espw           2m4s
imw1                                                       organization           Ready   https://172.20.144.39:6443/clusters/root:imw1           1m9s

More usefully, using custom columns you can get a listing that shows the name of the associated SyncTarget.

kubectl get Workspace -o "custom-columns=NAME:.metadata.name,SYNCTARGET:.metadata.annotations['edge\.kubestellar\.io/sync-target-name'],CLUSTER:.spec.cluster"
NAME                                                       SYNCTARGET   CLUSTER
1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1   florin       1najcltzt2nqax47
1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c   guilder      1y7wll1dz806h3sb
compute                                                    <none>       mqnl7r5f56hswewy
espw                                                       <none>       2n88ugkhysjbxqp5
imw1                                                       <none>       4d2r9stcyy2qq5c1

Also: if you ever need to look up just one mailbox workspace by SyncTarget name, you could do it as follows.

GUILDER_WS=$(kubectl get Workspace -o json | jq -r '.items | .[] | .metadata | select(.annotations ["edge.kubestellar.io/sync-target-name"] == "guilder") | .name')
echo The guilder mailbox workspace name is $GUILDER_WS
The guilder mailbox workspace name is 1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c

FLORIN_WS=$(kubectl get Workspace -o json | jq -r '.items | .[] | .metadata | select(.annotations ["edge.kubestellar.io/sync-target-name"] == "florin") | .name')
echo The florin mailbox workspace name is $FLORIN_WS
The florin mailbox workspace name is 1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1

Connect guilder edge cluster with its mailbox workspace

The following command will (a) create, in the mailbox workspace for guilder, an identity and authorizations for the edge syncer and (b) write a file containing YAML for deploying the syncer in the guilder cluster.

kubectl kubestellar prep-for-syncer --imw root:imw1 guilder
Current workspace is "root:imw1".
Current workspace is "root:espw".
Current workspace is "root:espw:1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c" (type root:universal).
Creating service account "kubestellar-syncer-guilder-wfeig2lv"
Creating cluster role "kubestellar-syncer-guilder-wfeig2lv" to give service account "kubestellar-syncer-guilder-wfeig2lv"

 1. write and sync access to the synctarget "kubestellar-syncer-guilder-wfeig2lv"
 2. write access to apiresourceimports.

Creating or updating cluster role binding "kubestellar-syncer-guilder-wfeig2lv" to bind service account "kubestellar-syncer-guilder-wfeig2lv" to cluster role "kubestellar-syncer-guilder-wfeig2lv".

Wrote WEC manifest to guilder-syncer.yaml for namespace "kubestellar-syncer-guilder-wfeig2lv". Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl apply -f "guilder-syncer.yaml"

to apply it. Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl get deployment -n "kubestellar-syncer-guilder-wfeig2lv" kubestellar-syncer-guilder-wfeig2lv

to verify the syncer pod is running.
Current workspace is "root:espw".

The file written was, as mentioned in the output, guilder-syncer.yaml. Next kubectl apply that to the guilder cluster. That will look something like the following; adjust as necessary to make kubectl manipulate your guilder cluster.

KUBECONFIG=~/.kube/config kubectl --context kind-guilder apply -f guilder-syncer.yaml
namespace/kubestellar-syncer-guilder-wfeig2lv created
serviceaccount/kubestellar-syncer-guilder-wfeig2lv created
secret/kubestellar-syncer-guilder-wfeig2lv-token created
clusterrole.rbac.authorization.k8s.io/kubestellar-syncer-guilder-wfeig2lv created
clusterrolebinding.rbac.authorization.k8s.io/kubestellar-syncer-guilder-wfeig2lv created
secret/kubestellar-syncer-guilder-wfeig2lv created
deployment.apps/kubestellar-syncer-guilder-wfeig2lv created

You might check that the syncer is running, as follows.

KUBECONFIG=~/.kube/config kubectl --context kind-guilder get deploy -A
NAMESPACE                          NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
kubestellar-syncer-guilder-saaywsu5   kubestellar-syncer-guilder-saaywsu5   1/1     1            1           52s
kube-system                        coredns                            2/2     2            2           35m
local-path-storage                 local-path-provisioner             1/1     1            1           35m

Connect florin edge cluster with its mailbox workspace

Do the analogous stuff for the florin cluster.

kubectl kubestellar prep-for-syncer --imw root:imw1 florin
Current workspace is "root:imw1".
Current workspace is "root:espw".
Current workspace is "root:espw:1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1" (type root:universal).
Creating service account "kubestellar-syncer-florin-32uaph9l"
Creating cluster role "kubestellar-syncer-florin-32uaph9l" to give service account "kubestellar-syncer-florin-32uaph9l"

 1. write and sync access to the synctarget "kubestellar-syncer-florin-32uaph9l"
 2. write access to apiresourceimports.

Creating or updating cluster role binding "kubestellar-syncer-florin-32uaph9l" to bind service account "kubestellar-syncer-florin-32uaph9l" to cluster role "kubestellar-syncer-florin-32uaph9l".

Wrote WEC manifest to florin-syncer.yaml for namespace "kubestellar-syncer-florin-32uaph9l". Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl apply -f "florin-syncer.yaml"

to apply it. Use

  KUBECONFIG=<workload-execution-cluster-config> kubectl get deployment -n "kubestellar-syncer-florin-32uaph9l" kubestellar-syncer-florin-32uaph9l

to verify the syncer pod is running.
Current workspace is "root:espw".

And deploy the syncer in the florin cluster.

KUBECONFIG=~/.kube/config kubectl --context kind-florin apply -f florin-syncer.yaml 
namespace/kubestellar-syncer-florin-32uaph9l created
serviceaccount/kubestellar-syncer-florin-32uaph9l created
secret/kubestellar-syncer-florin-32uaph9l-token created
clusterrole.rbac.authorization.k8s.io/kubestellar-syncer-florin-32uaph9l created
clusterrolebinding.rbac.authorization.k8s.io/kubestellar-syncer-florin-32uaph9l created
secret/kubestellar-syncer-florin-32uaph9l created
deployment.apps/kubestellar-syncer-florin-32uaph9l created

Stage 2

Placement and Where Resolving

Stage 2 creates two workloads, called "common" and "special", and lets the Where Resolver react. It has the following steps.

Create and populate the workload management workspace for the common workload

One of the workloads is called "common", because it will go to both edge clusters. The other one is called "special".

In this example, each workload description goes in its own workload management workspace (WMW). Start by creating a WMW for the common workload, with the following commands.

kubectl ws root
kubectl kubestellar ensure wmw wmw-c

This is equivalent to creating that workspace and then entering it and creating the following two APIBinding objects.

apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: bind-espw
spec:
  reference:
    export:
      path: root:espw
      name: edge.kubestellar.io
---
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: bind-kube
spec:
  reference:
    export:
      path: "root:compute"
      name: kubernetes

sleep 15

Next, use kubectl to create the following workload objects in that workspace. The workload in this example is an Apache httpd server that serves up a very simple web page, conveyed via a Kubernetes ConfigMap that is mounted as a volume for the httpd pod.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: commonstuff
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: commonstuff
  name: httpd-htdocs
  annotations:
    edge.kubestellar.io/expand-parameters: "true"
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <body>
        This is a common web site.
        Running in %(loc-name).
      </body>
    </html>
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Customizer
metadata:
  namespace: commonstuff
  name: example-customizer
  annotations:
    edge.kubestellar.io/expand-parameters: "true"
replacements:
- path: "$.spec.template.spec.containers.0.env.0.value"
  value: '"env is %(env)"'
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  namespace: commonstuff
  name: commond
  annotations:
    edge.kubestellar.io/customizer: example-customizer
spec:
  selector: {matchLabels: {app: common} }
  template:
    metadata:
      labels: {app: common}
    spec:
      containers:
      - name: httpd
        env:
        - name: EXAMPLE_VAR
          value: example value
        image: library/httpd:2.4
        ports:
        - name: http
          containerPort: 80
          hostPort: 8081
          protocol: TCP
        volumeMounts:
        - name: htdocs
          readOnly: true
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: htdocs
        configMap:
          name: httpd-htdocs
          optional: false
EOF
sleep 10

Finally, use kubectl to create the following EdgePlacement object. Its "where predicate" (the locationSelectors array) has one label selector that matches both Location objects created earlier, thus directing the common workload to both edge clusters.

kubectl apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: edge-placement-c
spec:
  locationSelectors:
  - matchLabels: {"env":"prod"}
  downsync:
  - apiGroup: ""
    resources: [ configmaps ]
    namespaces: [ commonstuff ]
    objectNames: [ httpd-htdocs ]
  - apiGroup: apps
    resources: [ replicasets ]
    namespaces: [ commonstuff ]
  wantSingletonReportedState: true
  upsync:
  - apiGroup: "group1.test"
    resources: ["sprockets", "flanges"]
    namespaces: ["orbital"]
    names: ["george", "cosmo"]
  - apiGroup: "group2.test"
    resources: ["cogs"]
    names: ["william"]
EOF
sleep 10

Create and populate the workload management workspace for the special workload

Use the following kubectl commands to create the WMW for the special workload.

kubectl ws root
kubectl kubestellar ensure wmw wmw-s

In this workload we will also demonstrate how to downsync objects whose kind is defined by a CustomResourceDefinition object. We will use the one from the Kubernetes documentation for CRDs, modified so that the resource it defines is in the category all. First, create the definition object with the following command.

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1
      # Each version can be enabled/disabled by Served flag.
      served: true
      # One and only one version must be marked as the storage version.
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ct
    categories:
    - all
EOF

Next, use the following command to wait for the apiserver to process that definition.

kubectl wait --for condition=Established crd crontabs.stable.example.com

Next, use kubectl to create the following workload objects in that workspace. The APIService object included here does not contribute to the httpd workload but is here to demonstrate that APIService objects can be downsynced.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: specialstuff
  labels: {special: "yes"}
  annotations: {just-for: fun}
---
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: my-new-cron-object
  namespace: specialstuff
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: specialstuff
  name: httpd-htdocs
  annotations:
    edge.kubestellar.io/expand-parameters: "true"
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <body>
        This is a special web site.
        Running in %(loc-name).
      </body>
    </html>
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Customizer
metadata:
  namespace: specialstuff
  name: example-customizer
  annotations:
    edge.kubestellar.io/expand-parameters: "true"
replacements:
- path: "$.spec.template.spec.containers.0.env.0.value"
  value: '"in %(env) env"'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: specialstuff
  name: speciald
  annotations:
    edge.kubestellar.io/customizer: example-customizer
spec:
  selector: {matchLabels: {app: special} }
  template:
    metadata:
      labels: {app: special}
    spec:
      containers:
      - name: httpd
        env:
        - name: EXAMPLE_VAR
          value: example value
        image: library/httpd:2.4
        ports:
        - name: http
          containerPort: 80
          hostPort: 8082
          protocol: TCP
        volumeMounts:
        - name: htdocs
          readOnly: true
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: htdocs
        configMap:
          name: httpd-htdocs
          optional: false
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1090.example.my
spec:
  group: example.my
  groupPriorityMinimum: 360
  service:
    name: my-service
    namespace: my-example
  version: v1090
  versionPriority: 42
EOF
sleep 10

Finally, use kubectl to create the following EdgePlacement object. Its "where predicate" (the locationSelectors array) has one label selector that matches only one of the Location objects created earlier, thus directing the special workload to just one edge cluster.

The "what predicate" explicitly includes the Namespace object named "specialstuff", which causes all of its desired state (including labels and annotations) to be downsynced. This contrasts with the common EdgePlacement, which does not explicitly mention the commonstuff namespace, relying on the implicit creation of namespaces as needed in the WECs.

kubectl apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: edge-placement-s
spec:
  locationSelectors:
  - matchLabels: {"env":"prod","extended":"yes"}
  downsync:
  - apiGroup: ""
    resources: [ configmaps ]
    namespaceSelectors:
    - matchLabels: {"special":"yes"}
  - apiGroup: apps
    resources: [ deployments ]
    namespaceSelectors:
    - matchLabels: {"special":"yes"}
    objectNames: [ speciald ]
  - apiGroup: apiregistration.k8s.io
    resources: [ apiservices ]
    objectNames: [ v1090.example.my ]
  - apiGroup: stable.example.com
    resources: [ crontabs ]
    namespaces: [ specialstuff ]
    objectNames: [ my-new-cron-object ]
  - apiGroup: ""
    resources: [ namespaces ]
    objectNames: [ specialstuff ]
  wantSingletonReportedState: true
  upsync:
  - apiGroup: "group1.test"
    resources: ["sprockets", "flanges"]
    namespaces: ["orbital"]
    names: ["george", "cosmo"]
  - apiGroup: "group3.test"
    resources: ["widgets"]
    names: ["*"]
EOF
sleep 10

Where Resolver

In response to each EdgePlacement, the Where Resolver will create a corresponding SinglePlacementSlice object. These will indicate the following resolutions of the "where" predicates.

EdgePlacement       Resolved Where
edge-placement-c    florin, guilder
edge-placement-s    guilder

If you have deployed the KubeStellar core in a Kubernetes cluster then the where resolver is running in a pod there. If instead you are running the core controllers as bare processes then you can use the following commands to launch the where-resolver; it requires the ESPW to be the current kcp workspace at start time.

kubectl ws root:espw
kubestellar-where-resolver &
sleep 10

The following commands wait until the where-resolver has done its job for the common and special EdgePlacement objects.

kubectl ws root:wmw-c
while ! kubectl get SinglePlacementSlice &> /dev/null; do
  sleep 10
done
kubectl ws root:wmw-s
while ! kubectl get SinglePlacementSlice &> /dev/null; do
  sleep 10
done

If things are working properly then you will see log lines like the following (among many others) in the where-resolver's log.

I0423 01:33:37.036752   11305 main.go:212] "Found APIExport view" exportName="edge.kubestellar.io" serverURL="https://192.168.58.123:6443/services/apiexport/7qkse309upzrv0fy/edge.kubestellar.io"
...
I0423 01:33:37.320859   11305 reconcile_on_location.go:192] "updated SinglePlacementSlice" controller="kubestellar-where-resolver" triggeringKind=Location key="apmziqj9p9fqlflm|florin" locationWorkspace="apmziqj9p9fqlflm" location="florin" workloadWorkspace="10l175x6ejfjag3e" singlePlacementSlice="edge-placement-c"
...
I0423 01:33:37.391772   11305 reconcile_on_location.go:192] "updated SinglePlacementSlice" controller="kubestellar-where-resolver" triggeringKind=Location key="apmziqj9p9fqlflm|guilder" locationWorkspace="apmziqj9p9fqlflm" location="guilder" workloadWorkspace="10l175x6ejfjag3e" singlePlacementSlice="edge-placement-c"

Check out a SinglePlacementSlice object as follows.

kubectl ws root:wmw-c
Current workspace is "root:wmw-c".

kubectl get SinglePlacementSlice -o yaml
apiVersion: v1
items:
- apiVersion: edge.kubestellar.io/v2alpha1
  destinations:
  - cluster: apmziqj9p9fqlflm
    locationName: florin
    syncTargetName: florin
    syncTargetUID: b8c64c64-070c-435b-b3bd-9c0f0c040a54
  - cluster: apmziqj9p9fqlflm
    locationName: guilder
    syncTargetName: guilder
    syncTargetUID: bf452e1f-45a0-4d5d-b35c-ef1ece2879ba
  kind: SinglePlacementSlice
  metadata:
    annotations:
      kcp.io/cluster: 10l175x6ejfjag3e
    creationTimestamp: "2023-04-23T05:33:37Z"
    generation: 4
    name: edge-placement-c
    ownerReferences:
    - apiVersion: edge.kubestellar.io/v2alpha1
      kind: EdgePlacement
      name: edge-placement-c
      uid: 199cfe1e-48d9-4351-af5c-e66c83bf50dd
    resourceVersion: "1316"
    uid: b5db1f9d-1aed-4a25-91da-26dfbb5d8879
kind: List
metadata:
  resourceVersion: ""

Also check out the SinglePlacementSlice objects in root:wmw-s. It should go similarly, but the destinations should include only the entry for guilder.
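
For example, something like the following (names and UIDs will differ in your environment).

kubectl ws root:wmw-s
kubectl get SinglePlacementSlice -o yaml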

Finally run the placement translator from the command line. That should look like the following (possibly including some complaints, which do not necessarily indicate real problems).

kubectl ws root:espw
placement-translator &
sleep 120
I0412 15:15:57.867837   94634 shared_informer.go:282] Waiting for caches to sync for placement-translator
I0412 15:15:57.969533   94634 shared_informer.go:289] Caches are synced for placement-translator
I0412 15:15:57.970003   94634 shared_informer.go:282] Waiting for caches to sync for what-resolver
I0412 15:15:57.970014   94634 shared_informer.go:289] Caches are synced for what-resolver
I0412 15:15:57.970178   94634 shared_informer.go:282] Waiting for caches to sync for where-resolver
I0412 15:15:57.970192   94634 shared_informer.go:289] Caches are synced for where-resolver
...
I0412 15:15:57.972185   94634 map-types.go:338] "Put" map="where" key="r0bdh9oumjkoag3s:edge-placement-s" val="[&{SinglePlacementSlice edge.kubestellar.io/v2alpha1} {edge-placement-s    e1b1033d-49f2-45e8-8a90-6d0295b644b6 1184 1 2023-04-12 14:39:21 -0400 EDT <nil> <nil> map[] map[kcp.io/cluster:r0bdh9oumjkoag3s] [{edge.kubestellar.io/v2alpha1 EdgePlacement edge-placement-s 0e718a31-db21-47f1-b789-cd55835b1418 <nil> <nil>}] []  [{where-resolver Update edge.kubestellar.io/v2alpha1 2023-04-12 14:39:21 -0400 EDT FieldsV1 {\"f:destinations\":{},\"f:metadata\":{\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"0e718a31-db21-47f1-b789-cd55835b1418\\\"}\":{}}}} }]} [{1xpg93182scl85te location-g sync-target-g 5ee1c42e-a7d5-4363-ba10-2f13fe578e19}]}]"
I0412 15:15:57.973740   94634 map-types.go:338] "Put" map="where" key="1i1weo8uoea04wxr:edge-placement-c" val="[&{SinglePlacementSlice edge.kubestellar.io/v2alpha1} {edge-placement-c    c446ca9b-8937-4751-89ab-058bcfb079c1 1183 3 2023-04-12 14:39:21 -0400 EDT <nil> <nil> map[] map[kcp.io/cluster:1i1weo8uoea04wxr] [{edge.kubestellar.io/v2alpha1 EdgePlacement edge-placement-c c1e038b9-8bd8-4d22-8ab8-916e40c794d1 <nil> <nil>}] []  [{where-resolver Update edge.kubestellar.io/v2alpha1 2023-04-12 14:39:21 -0400 EDT FieldsV1 {\"f:destinations\":{},\"f:metadata\":{\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"c1e038b9-8bd8-4d22-8ab8-916e40c794d1\\\"}\":{}}}} }]} [{1xpg93182scl85te location-f sync-target-f e6efb8bd-6755-45ac-b44d-5d38f978f990} {1xpg93182scl85te location-g sync-target-g 5ee1c42e-a7d5-4363-ba10-2f13fe578e19}]}]"
...
I0412 15:15:58.173974   94634 map-types.go:338] "Put" map="what" key="1i1weo8uoea04wxr:edge-placement-c" val={Downsync:map[{APIGroup: Resource:namespaces Name:commonstuff}:{APIVersion:v1 IncludeNamespaceObject:false}] Upsync:[{APIGroup:group1.test Resources:[sprockets flanges] Namespaces:[orbital] Names:[george cosmo]} {APIGroup:group2.test Resources:[cogs] Namespaces:[] Names:[William]}]}
I0412 15:15:58.180380   94634 map-types.go:338] "Put" map="what" key="r0bdh9oumjkoag3s:edge-placement-s" val={Downsync:map[{APIGroup: Resource:namespaces Name:specialstuff}:{APIVersion:v1 IncludeNamespaceObject:false}] Upsync:[{APIGroup:group1.test Resources:[sprockets flanges] Namespaces:[orbital] Names:[george cosmo]} {APIGroup:group3.test Resources:[widgets] Namespaces:[] Names:[*]}]}
...

The "Put" log entries with map="what" show what the "what resolver" is reporting. This reports mappings from ExternalName of an EdgePlacement object to the workload parts that that EdgePlacement says to downsync and upsync.

The "Put" log entries with map="where" show the SinglePlacementSlice objects associated with each EdgePlacement.

Next, using a separate shell, examine the SyncerConfig objects in the mailbox workspaces. Make sure to use the same kubeconfig as you use to run the placement translator, or any other that is pointed at the edge service provider workspace. The following will switch the focus to mailbox workspace(s).

You can get a listing of mailbox workspaces, as follows.

kubectl ws root
kubectl get workspace
NAME                                                       TYPE        REGION   PHASE   URL                                                     AGE
1xpg93182scl85te-mb-5ee1c42e-a7d5-4363-ba10-2f13fe578e19   universal            Ready   https://192.168.58.123:6443/clusters/12zzf3frkqz2yj39   36m
1xpg93182scl85te-mb-e6efb8bd-6755-45ac-b44d-5d38f978f990   universal            Ready   https://192.168.58.123:6443/clusters/2v6wl3x41zxmpmhr   36m
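
If it is not obvious which mailbox workspace goes with which SyncTarget, you can print the pairing with a jq one-liner that reads the edge.kubestellar.io/sync-target-name annotation (the same annotation used in the workspace-selection commands below). This is just a convenience; run it while the current workspace is root.

kubectl get Workspace -o json | jq -r '.items[] | select(.metadata.annotations["edge.kubestellar.io/sync-target-name"] != null) | .metadata.annotations["edge.kubestellar.io/sync-target-name"] + "  " + .metadata.name'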

Next switch to one of the mailbox workspaces (in my case I picked the one for the guilder cluster) and examine the SyncerConfig object. That should look like the following.

kubectl ws $(kubectl get Workspace -o json | jq -r '.items | .[] | .metadata | select(.annotations ["edge.kubestellar.io/sync-target-name"] == "guilder") | .name')
Current workspace is "root:1xpg93182scl85te-mb-5ee1c42e-a7d5-4363-ba10-2f13fe578e19" (type root:universal).

kubectl get SyncerConfig the-one -o yaml                           
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncerConfig
metadata:
  annotations:
    kcp.io/cluster: 12zzf3frkqz2yj39
  creationTimestamp: "2023-04-12T19:15:58Z"
  generation: 2
  name: the-one
  resourceVersion: "1249"
  uid: 00bf8d10-393a-4d94-b032-79fae30646f6
spec:
  namespaceScope:
    namespaces:
    - commonstuff
    - specialstuff
    resources:
    - apiVersion: v1
      group: ""
      resource: limitranges
    - apiVersion: v1
      group: coordination.k8s.io
      resource: leases
    - apiVersion: v1
      group: ""
      resource: resourcequotas
    - apiVersion: v1
      group: ""
      resource: configmaps
    - apiVersion: v1
      group: networking.k8s.io
      resource: ingresses
    - apiVersion: v1
      group: events.k8s.io
      resource: events
    - apiVersion: v1
      group: apps
      resource: deployments
    - apiVersion: v1
      group: ""
      resource: events
    - apiVersion: v1
      group: ""
      resource: secrets
    - apiVersion: v1
      group: ""
      resource: services
    - apiVersion: v1
      group: ""
      resource: pods
    - apiVersion: v1
      group: ""
      resource: serviceaccounts
    - apiVersion: v1
      group: rbac.authorization.k8s.io
      resource: rolebindings
    - apiVersion: v1
      group: rbac.authorization.k8s.io
      resource: roles
  upsync:
  - apiGroup: group2.test
    names:
    - William
    resources:
    - cogs
  - apiGroup: group3.test
    names:
    - '*'
    resources:
    - widgets
  - apiGroup: group1.test
    names:
    - george
    - cosmo
    namespaces:
    - orbital
    resources:
    - sprockets
    - flanges
status: {}
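
If you only care about which namespaces this SyncerConfig tells the syncer to handle, you can filter the output with yq (one of the prerequisites listed above); a minimal sketch follows. It should print a two-item list containing commonstuff and specialstuff.

kubectl get SyncerConfig the-one -o yaml | yq '.spec.namespaceScope.namespaces'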

At this point you might veer off from the example scenario and try tweaking things. For example, try deleting an EdgePlacement as follows.

kubectl ws root:wmw-c
Current workspace is "root:work-c"
kubectl delete EdgePlacement edge-placement-c
edgeplacement.edge.kubestellar.io "edge-placement-c" deleted

That will cause the placement translator to log updates, as follows.

I0412 15:20:43.129842   94634 map-types.go:338] "Put" map="what" key="1i1weo8uoea04wxr:edge-placement-c" val={Downsync:map[] Upsync:[]}
I0412 15:20:43.241674   94634 map-types.go:342] "Delete" map="where" key="1i1weo8uoea04wxr:edge-placement-c"

After that, the SyncerConfig in the florin mailbox workspace should be empty, as in the following (your mailbox workspace names may be different).

kubectl ws root
Current workspace is "root".

kubectl ws $(kubectl get Workspace -o json | jq -r '.items | .[] | .metadata | select(.annotations ["edge.kubestellar.io/sync-target-name"] == "florin") | .name')
Current workspace is "root:2lplrryirmv4xug3-mb-89c08764-01ae-4117-8fb0-6b752e76bc2f" (type root:universal).

kubectl get SyncerConfig the-one -o yaml
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncerConfig
metadata:
  annotations:
    kcp.io/cluster: 2cow9p3xogak4n0u
  creationTimestamp: "2023-04-11T04:34:22Z"
  generation: 4
  name: the-one
  resourceVersion: "2130"
  uid: 2b66b4bc-4130-4bf0-8524-73d6885f2ad8
spec:
  namespaceScope: {}
status: {}

And the SyncerConfig in the guilder mailbox workspace should reflect only the special workload. That would look something like the following.

kubectl ws root
kubectl ws $(kubectl get Workspace -o json | jq -r '.items | .[] | .metadata | select(.annotations ["edge.kubestellar.io/sync-target-name"] == "guilder") | .name')
Current workspace is "root:1xpg93182scl85te-mb-5ee1c42e-a7d5-4363-ba10-2f13fe578e19" (type root:universal).

kubectl get SyncerConfig the-one -o yaml                           
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncerConfig
metadata:
  annotations:
    kcp.io/cluster: 12zzf3frkqz2yj39
  creationTimestamp: "2023-04-12T19:15:58Z"
  generation: 3
  name: the-one
  resourceVersion: "1254"
  uid: 00bf8d10-393a-4d94-b032-79fae30646f6
spec:
  namespaceScope:
    namespaces:
    - specialstuff
    resources:
    - apiVersion: v1
      group: ""
      resource: pods
    - apiVersion: v1
      group: ""
      resource: events
    - apiVersion: v1
      group: ""
      resource: limitranges
    - apiVersion: v1
      group: ""
      resource: services
    - apiVersion: v1
      group: ""
      resource: configmaps
    - apiVersion: v1
      group: apps
      resource: deployments
    - apiVersion: v1
      group: ""
      resource: serviceaccounts
    - apiVersion: v1
      group: ""
      resource: secrets
    - apiVersion: v1
      group: rbac.authorization.k8s.io
      resource: roles
    - apiVersion: v1
      group: ""
      resource: resourcequotas
    - apiVersion: v1
      group: events.k8s.io
      resource: events
    - apiVersion: v1
      group: networking.k8s.io
      resource: ingresses
    - apiVersion: v1
      group: coordination.k8s.io
      resource: leases
    - apiVersion: v1
      group: rbac.authorization.k8s.io
      resource: rolebindings
  upsync:
  - apiGroup: group3.test
    names:
    - '*'
    resources:
    - widgets
  - apiGroup: group1.test
    names:
    - george
    - cosmo
    namespaces:
    - orbital
    resources:
    - sprockets
    - flanges
status: {}
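
As a quick confirmation that the common workload is really gone from this SyncerConfig, a grep like the following should find no mention of commonstuff (the echo is only there to make the negative result visible).

kubectl get SyncerConfig the-one -o yaml | grep commonstuff || echo "no mention of commonstuff"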

Teardown the environment#

To remove the example usage, delete the IMW, the WMW, and the kind clusters by running the following commands:

rm florin-syncer.yaml guilder-syncer.yaml || true
kubectl ws root
kubectl delete workspace example-imw
kubectl kubestellar remove wmw example-wmw
kind delete cluster --name florin
kind delete cluster --name guilder
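
If you want to confirm that the kind clusters are gone, listing them should no longer show florin or guilder.

kind get clusters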

Teardown of KubeStellar depends on which style of deployment was used.

Teardown bare processes#

The following command will stop whatever KubeStellar controllers are running.

kubestellar stop

Stop and uninstall KubeStellar and kcp with the following command:

remove-kubestellar

Teardown Kubernetes workload#

With kubectl configured to manipulate the hosting cluster, the following command will remove the workload that consists of kcp and KubeStellar.

helm delete kubestellar
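
If the chart was installed into a specific namespace rather than your kubeconfig's current default, pass that namespace explicitly; for example (the namespace name here is purely illustrative).

helm delete kubestellar -n kubestellar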