
KubeStellar Mailbox Controller

docs-ecutable - mailbox-controller   

Required Packages for running and using KubeStellar:

You will need the following tools to deploy and use KubeStellar. Suggested installation commands for each supported environment (Mac, Ubuntu, Debian, Fedora/RHEL/CentOS, Windows, WSL) are given below.

  • curl (omitted from most OS-specific instructions)

  • jq

  • yq

  • kubectl (version range expected: 1.23-1.25)

  • helm (required when deploying as workload)

If you intend to build KubeStellar from source you will also need

  • go (version >=1.19 required; 1.19 recommended) - releases are available at https://go.dev/dl

Mac
jq - https://stedolan.github.io/jq/download/
brew install jq
yq - https://github.com/mikefarah/yq#install
brew install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
brew install kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
brew install helm
go (only required if you build kubestellar from source)

  1. Download the package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.

  2. Open the package file you downloaded and follow the prompts to install Go. The package installs the Go distribution to /usr/local/go. The package should put the /usr/local/go/bin directory in your PATH environment variable. You may need to restart any open Terminal sessions for the change to take effect.

  3. Verify that you've installed Go by opening a command prompt and typing the following command: $ go version. Confirm that the command prints the desired installed version of Go.

Ubuntu
jq - https://stedolan.github.io/jq/download/
sudo apt-get install jq
yq - https://github.com/mikefarah/yq#install
sudo snap install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
go (only required if you build kubestellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

Debian
jq - https://stedolan.github.io/jq/download/
sudo apt-get install jq
yq - https://github.com/mikefarah/yq#install
sudo apt-get install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
go (only required if you build kubestellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

Fedora / RHEL / CentOS
jq - https://stedolan.github.io/jq/download/
yum -y install jq
yq - https://github.com/mikefarah/yq#install
# easiest to install with snap
snap install yq
kubectl - https://kubernetes.io/docs/tasks/tools/ (version range expected: 1.23-1.25)
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
# for ARM64 / aarch64
[ $(uname -m) = aarch64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
dnf install helm
go (only required if you build kubestellar from source)

visit https://go.dev/doc/install for latest instructions

  1. Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:

    $ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

    (You may need to run the command as root or through sudo).

    Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.

  2. Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):

    export PATH=$PATH:/usr/local/go/bin

    Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.

  3. Verify that you've installed Go by opening a command prompt and typing the following command:

    $ go version

  4. Confirm that the command prints the installed version of Go.

Windows
Chocolatey - https://chocolatey.org/install#individual
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
curl
choco install curl -y
jq - https://stedolan.github.io/jq/download/
choco install jq -y
yq - https://github.com/mikefarah/yq#install
choco install yq -y
kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/ (version range expected: 1.23-1.25)
curl.exe -LO "https://dl.k8s.io/release/v1.27.2/bin/windows/amd64/kubectl.exe"    
helm (required when deploying as workload) - https://helm.sh/docs/intro/install/
choco install kubernetes-helm
go (only required if you build kubestellar from source)
visit https://go.dev/doc/install for latest instructions

  1. Download the go 1.19 MSI package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.

  2. Open the MSI file you downloaded and follow the prompts to install Go.

    By default, the installer will install Go to Program Files or Program Files (x86). You can change the location as needed. After installing, you will need to close and reopen any open command prompts so that changes to the environment made by the installer are reflected at the command prompt.

  3. Verify that you've installed Go:

    1. In Windows, click the Start menu.

    2. In the menu's search box, type cmd, then press the Enter key.

    3. In the Command Prompt window that appears, type the following command: $ go version

    4. Confirm that the command prints the installed version of Go.

How to install pre-requisites for a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution

(Tested on an Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz 2.59 GHz with 32GB RAM, a 64-bit operating system, x64-based processor, using Windows 11 Enterprise)

1. If you're using a VPN, turn it off

2. Install Ubuntu into WSL

2.0 If WSL is not yet installed, open an administrator PowerShell window and run the following
wsl --install
2.1 Reboot your system

2.2 In a Windows command terminal, run the following to list all the Linux distributions that are available online
wsl -l -o
2.3 Select a Linux distribution and install it into WSL; for Ubuntu 22.04 the distribution name is Ubuntu-22.04
wsl --install -d Ubuntu-22.04
You will see something like:
Installing, this may take a few minutes...
Please create a default UNIX user account. The username does not need to match your Windows username.
For more information visit: https://aka.ms/wslusers
Enter new UNIX username:

2.4 Enter your new username and password at the prompts, and you will eventually see something like:
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.10.102.1-microsoft-standard-WSL2 x86_64)

2.5 Click on the Windows "Start" icon and type in the name of your distribution into the search box. Your new linux distribution should appear as a local "App". You can pin it to the Windows task bar or to Start for your future convenience.
Start a VM using your distribution by clicking on the App.

3. Install pre-requisites into your new VM
3.1 update and apply apt-get packages
sudo apt-get update
sudo apt-get upgrade

3.2 Install golang
wget https://golang.org/dl/go1.19.linux-amd64.tar.gz
sudo tar -zxvf go1.19.linux-amd64.tar.gz -C /usr/local
echo export GOROOT=/usr/local/go | sudo tee -a /etc/profile
echo export PATH="$PATH:/usr/local/go/bin" | sudo tee -a /etc/profile
source /etc/profile
go version

3.3 Install ko (but don't do ko set action step)
go install github.com/google/ko@latest

3.4 Install gcc
Either run this:
sudo apt install build-essential
or this:
sudo apt-get update
sudo apt install gcc
gcc --version

3.5 Install make (if you installed build-essential this may already be installed)
sudo apt install make

3.6 Install jq
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y jq
jq --version

3.7 install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
3.8 install helm (required when deploying as workload)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Required Packages for the example usage:

You will need the following tools for the example usage of KubeStellar in this quickstart. Suggested installation commands for each supported environment are given below.

Mac
docker - https://docs.docker.com/engine/install/
brew install docker
open -a Docker
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
brew install kind

Ubuntu
docker - https://docs.docker.com/engine/install/
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Enable rootless usage of Docker (requires relogin) - https://docs.docker.com/engine/security/rootless/
sudo apt-get install -y dbus-user-session # *** Relogin after this
sudo apt-get install -y uidmap
dockerd-rootless-setuptool.sh install
systemctl --user restart docker.service
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-$(dpkg --print-architecture) && chmod +x ./kind && sudo mv ./kind /usr/local/bin

Debian
docker - https://docs.docker.com/engine/install/
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Enable rootless usage of Docker (requires relogin) - https://docs.docker.com/engine/security/rootless/
sudo apt-get install -y dbus-user-session # *** Relogin after this
sudo apt-get install -y fuse-overlayfs
sudo apt-get install -y slirp4netns
dockerd-rootless-setuptool.sh install
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-$(dpkg --print-architecture) && chmod +x ./kind && sudo mv ./kind /usr/local/bin

Fedora / RHEL / CentOS
docker - https://docs.docker.com/engine/install/
yum -y install epel-release && yum -y install docker && systemctl enable --now docker && systemctl status docker
Enable rootless usage of Docker by following the instructions at https://docs.docker.com/engine/security/rootless/
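If you instead install Docker CE from Docker's own repository (so that docker-ce-rootless-extras and its dockerd-rootless-setuptool.sh script are present), a minimal sketch of that setup might look like the following; the helper package names are assumptions and may differ on your distribution.
sudo dnf install -y fuse-overlayfs slirp4netns   # assumed helper packages for rootless mode
dockerd-rootless-setuptool.sh install            # provided by docker-ce-rootless-extras
systemctl --user restart docker.service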
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-arm64 
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind

Windows
docker - https://docs.docker.com/engine/install/
choco install docker -y
kind - https://kind.sigs.k8s.io/docs/user/quick-start/
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.14.0/kind-windows-amd64

How to install docker and kind into a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution

1.0 Start a VM terminal by clicking on the App you configured using the instructions in the General pre-requisites described above.

2.0 Install docker
The installation instructions from docker are not sufficient to get docker working with WSL

2.1 Follow the instructions at https://docs.docker.com/engine/install/ubuntu/ to install docker

Here are some additional steps you will need to take:

2.2 Ensure that /etc/wsl.conf is configured so that systemd will run on booting.
If /etc/wsl.conf does not contain [boot] systemd=true, then edit /etc/wsl.conf as follows:
sudo vi /etc/wsl.conf
Insert
[boot]
systemd=true

2.3 Update the sudoers configuration: it is strongly recommended not to add directives directly to /etc/sudoers, but instead to put them in files in /etc/sudoers.d, which are auto-included. So make/modify a new file via
sudo vi /etc/sudoers.d/docker
Insert
# Docker daemon specification
<your user account> ALL=(ALL) NOPASSWD: /usr/bin/dockerd

2.4 Add your user to the docker group
sudo usermod -aG docker $USER

2.5 If dockerd is already running, then stop it and restart it as follows (note: the new dockerd instance will be running in the foreground):
sudo systemctl stop docker
sudo dockerd &

2.5.1 If you encounter the iptables issue described at https://github.com/microsoft/WSL/issues/6655, the following commands will fix it:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo dockerd & 

3. You will now need to open new terminals to access the VM since dockerd is running in the foreground of this terminal

3.1 In your new terminal, install kind
wget -nv https://github.com/kubernetes-sigs/kind/releases/download/v0.17.0/kind-linux-$(dpkg --print-architecture) -O kind 
sudo install -m 0755 kind /usr/local/bin/kind 
rm kind 
kind version

This document is 'docs-ecutable' - you can 'run' this document, just like we do in our testing, on your local environment

git clone -b release-0.14 https://github.com/kubestellar/kubestellar
cd kubestellar
make MANIFEST="'docs/content/common-subs/pre-req.md','docs/content/Coding Milestones/PoC2023q1/mailbox-controller.md'" docs-ecutable
# done? remove everything
make MANIFEST="docs/content/common-subs/remove-all.md" docs-ecutable
cd ..
rm -rf kubestellar

Linking SyncTarget with Mailbox Workspace#

For a given SyncTarget T, the mailbox controller currently chooses the name of the corresponding workspace to be the concatenation of the following:

  • the ID of the logical cluster containing T
  • the string "-mb-"
  • T's UID

The mailbox workspace gets an annotation whose key is edge.kubestellar.io/sync-target-name and whose value is the name of the workspace object (as seen in its parent workspace, the edge service provider workspace).
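As an illustration, here is a minimal sketch of computing the expected mailbox workspace name for a SyncTarget named stest1 (the one created later in this document), assuming the kcp-maintained kcp.io/cluster annotation on the object carries the ID of its logical cluster:

# hypothetical check, run while your current workspace is the one holding the SyncTarget
st_json=$(kubectl get synctargets.edge.kubestellar.io stest1 -o json)
cluster_id=$(echo "$st_json" | jq -r '.metadata.annotations["kcp.io/cluster"]')   # assumed source of the logical cluster ID
st_uid=$(echo "$st_json" | jq -r '.metadata.uid')
echo "${cluster_id}-mb-${st_uid}"   # should match the mailbox workspace name chosen by the controller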

Usage#

The mailbox controller needs three Kubernetes client configurations.

  • One, concerned with reading inventory, is for accessing the APIExport view of the edge.kubestellar.io API group in order to read the SyncTarget objects. This must be a client config that is pointed at the workspace that has this APIExport (which is always root:espw, as far as I know) and is authorized to read its view.

  • Another client config is needed to give read/write access to all the mailbox workspaces, so that the controller can create APIBinding objects to the edge APIExport in those workspaces; this should be a client config that is able to read/write in all clusters. For example, that is in the kubeconfig context named base in the kubeconfig created by kcp start.

  • Finally, the controller also needs a kube client config that is pointed at the root workspace and is authorized to consume the Workspace objects from there.

The command line flags, beyond the basics, are as follows.

      --concurrency int                  number of syncs to run in parallel (default 4)
      --espw-path string                 the pathname of the edge service provider workspace (default "root:espw")

      --mbws-cluster string              The name of the kubeconfig cluster to use for access to mailbox workspaces (really all clusters)
      --mbws-context string              The name of the kubeconfig context to use for access to mailbox workspaces (really all clusters) (default "base")
      --mbws-kubeconfig string           Path to the kubeconfig file to use for access to mailbox workspaces (really all clusters)
      --mbws-user string                 The name of the kubeconfig user to use for access to mailbox workspaces (really all clusters)

      --server-bind-address ipport       The IP address with port at which to serve /metrics and /debug/pprof/ (default :10203)

      --root-cluster string              The name of the kubeconfig cluster to use for access to the root workspace
      --root-context string              The name of the kubeconfig context to use for access to the root workspace (default "root")
      --root-kubeconfig string           Path to the kubeconfig file to use for access to the root workspace
      --root-user string                 The name of the kubeconfig user to use for access to the root workspace
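For illustration, a hypothetical invocation that combines these flags (the values here are assumptions; adjust them to match your kubeconfig contexts):

mailbox-controller -v=2 \
  --espw-path root:espw \
  --mbws-context base \
  --root-context root \
  --server-bind-address :10203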

Try out the mailbox controller#

Pull the kcp and KubeStellar source code, build the kubectl-ws binary, and start kcp#

Open a terminal window(1) and clone the latest KubeStellar source; for example, the release-0.14 branch used elsewhere in this document:

git clone -b release-0.14 https://github.com/kubestellar/kubestellar
cd kubestellar

Clone the v0.11.0 branch kcp source:

git clone -b v0.11.0 https://github.com/kcp-dev/kcp kcp
Build the kubectl-ws binary and include it in $PATH
pushd kcp
make build
export PATH=$(pwd)/bin:$PATH

Run kcp; it will stay running in the background (in this docs-ecutable form its copious output is discarded). Set your KUBECONFIG environment variable to name the kubernetes client config file that kcp generates.

kcp start &> /dev/null &
export KUBECONFIG=$(pwd)/.kcp/admin.kubeconfig
popd
sleep 30

Create the Edge Service Provider Workspace (ESPW)#

Open another terminal window(2) and point $KUBECONFIG to the admin kubeconfig for the kcp server and include the location of kubectl-ws in $PATH.

make build
export PATH=$(pwd)/bin:$PATH

Next, use the command that makes sure the Edge Service Provider Workspace (ESPW), which is root:espw, and the TMC provider workspace (root:compute) are properly set up.

kubestellar init

After that, a run of the controller should look like the following.

kubectl ws root:espw
mailbox-controller -v=2 &
sleep 45
I0305 18:06:20.046741   85556 main.go:110] "Command line flag" add_dir_header="false"
I0305 18:06:20.046954   85556 main.go:110] "Command line flag" alsologtostderr="false"
I0305 18:06:20.046960   85556 main.go:110] "Command line flag" concurrency="4"
I0305 18:06:20.046965   85556 main.go:110] "Command line flag" inventory-context="root"
I0305 18:06:20.046971   85556 main.go:110] "Command line flag" inventory-kubeconfig=""
I0305 18:06:20.046976   85556 main.go:110] "Command line flag" log_backtrace_at=":0"
I0305 18:06:20.046980   85556 main.go:110] "Command line flag" log_dir=""
I0305 18:06:20.046985   85556 main.go:110] "Command line flag" log_file=""
I0305 18:06:20.046989   85556 main.go:110] "Command line flag" log_file_max_size="1800"
I0305 18:06:20.046993   85556 main.go:110] "Command line flag" logtostderr="true"
I0305 18:06:20.046997   85556 main.go:110] "Command line flag" one_output="false"
I0305 18:06:20.047002   85556 main.go:110] "Command line flag" server-bind-address=":10203"
I0305 18:06:20.047006   85556 main.go:110] "Command line flag" skip_headers="false"
I0305 18:06:20.047011   85556 main.go:110] "Command line flag" skip_log_headers="false"
I0305 18:06:20.047015   85556 main.go:110] "Command line flag" stderrthreshold="2"
I0305 18:06:20.047019   85556 main.go:110] "Command line flag" v="2"
I0305 18:06:20.047023   85556 main.go:110] "Command line flag" vmodule=""
I0305 18:06:20.047027   85556 main.go:110] "Command line flag" workload-context=""
I0305 18:06:20.047031   85556 main.go:110] "Command line flag" workload-kubeconfig=""
I0305 18:06:20.070071   85556 main.go:247] "Found APIExport view" exportName="workload.kcp.io" serverURL="https://192.168.58.123:6443/services/apiexport/root/workload.kcp.io"
I0305 18:06:20.072088   85556 shared_informer.go:282] Waiting for caches to sync for mailbox-controller
I0305 18:06:20.172169   85556 shared_informer.go:289] Caches are synced for mailbox-controller
I0305 18:06:20.172196   85556 main.go:210] "Informers synced"

In a separate terminal window(3), create an inventory management workspace as follows.

kubectl ws ~
kubectl ws create imw --enter
kubectl kcp bind apiexport root:espw:edge.kubestellar.io

Then in that workspace, run the following command to create a SyncTarget object.

cat <<EOF | kubectl apply -f -
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncTarget
metadata:
  name: stest1
spec:
  cells:
    foo: bar
EOF

That should provoke logging like the following from the mailbox controller.

I0305 18:07:20.490417   85556 main.go:369] "Created missing workspace" worker=0 mbwsName="niqdko2g2pwoadfb-mb-f99e773f-3db2-439e-8054-827c4ac55368"

And you can verify that as follows:

kubectl ws .
kubectl get synctargets.edge.kubestellar.io

kubectl ws root
Current workspace is "root".

kubectl ws tree 
kubectl get workspaces
NAME                                                       TYPE        REGION   PHASE   URL                                                     AGE
niqdko2g2pwoadfb-mb-f99e773f-3db2-439e-8054-827c4ac55368   universal            Ready   https://192.168.58.123:6443/clusters/0ay27fcwuo2sv6ht   22s

FYI, if you look inside that workspace you will see an APIBinding named bind-edge that binds to the APIExport named edge.kubestellar.io from the edge service provider workspace (and this is why the controller needs to know the pathname of that workspace), so that the edge API is available in the mailbox workspace.
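To see that for yourself, a minimal sketch (the mailbox workspace name below is the one from this example; substitute your own):

kubectl ws root
kubectl ws niqdko2g2pwoadfb-mb-f99e773f-3db2-439e-8054-827c4ac55368
kubectl get apibindings.apis.kcp.io bind-edge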

Next, kubectl delete that workspace, and watch the mailbox controller wait for it to be gone and then re-create it.
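For example (again using this example's mailbox workspace name; substitute your own):

kubectl ws root
kubectl delete workspace niqdko2g2pwoadfb-mb-f99e773f-3db2-439e-8054-827c4ac55368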

I0305 18:08:15.428884   85556 main.go:369] "Created missing workspace" worker=2 mbwsName="niqdko2g2pwoadfb-mb-f99e773f-3db2-439e-8054-827c4ac55368"

Finally, go back to your inventory workspace to delete the SyncTarget:

kubectl ws ~
kubectl ws .
kubectl ws imw
kubectl ws .
kubectl get synctargets.edge.kubestellar.io
kubectl delete synctargets.edge.kubestellar.io stest1
and watch the mailbox controller react as follows.

I0305 18:08:44.380421   85556 main.go:352] "Deleted unwanted workspace" worker=0 mbwsName="niqdko2g2pwoadfb-mb-f99e773f-3db2-439e-8054-827c4ac55368"

Teardown the environment#

To remove the example usage, delete the IMW and WMW and the kind clusters by running the following commands:

rm florin-syncer.yaml guilder-syncer.yaml || true
kubectl ws root
kubectl delete workspace example-imw
kubectl kubestellar remove wmw example-wmw
kind delete cluster --name florin
kind delete cluster --name guilder

Teardown of KubeStellar depends on which style of deployment was used.

Teardown bare processes#

The following command will stop whatever KubeStellar controllers are running.

kubestellar stop

Stop and uninstall KubeStellar and kcp with the following command:

remove-kubestellar

Teardown Kubernetes workload#

With kubectl configured to manipulate the hosting cluster, the following command will remove the workload that is kcp and KubeStellar.

helm delete kubestellar