Extended Example
Required Packages for running and using KubeStellar:
You will need the following tools to deploy and use KubeStellar. Select the tab for your environment for suggested commands to install them.
- curl (omitted from most OS-specific instructions)
- kubectl (version range expected: 1.23-1.25)
- helm (required when deploying as a workload)
If you intend to build KubeStellar from source you will also need:
- go (version >=1.19 required; 1.19 recommended). For convenience, here is a direct link to the [go releases](https://go.dev/dl) page; remember that you need go v1.19 or greater, with 1.19 recommended.
brew install kubectl
- Download the package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.
- Open the package file you downloaded and follow the prompts to install Go. The package installs the Go distribution to /usr/local/go. The package should put the /usr/local/go/bin directory in your PATH environment variable. You may need to restart any open Terminal sessions for the change to take effect.
- Verify that you've installed Go by opening a command prompt and typing the following command:
$ go version
- Confirm that the command prints the desired installed version of Go.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
Visit https://go.dev/doc/install for the latest instructions.
- Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:
$ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz
(You may need to run the command as root or through sudo.)
Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.
- Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):
export PATH=$PATH:/usr/local/go/bin
Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.
- Verify that you've installed Go by opening a command prompt and typing the following command:
$ go version
- Confirm that the command prints the installed version of Go.
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
# for ARM64 / aarch64
[ $(uname -m) = aarch64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
Visit https://go.dev/doc/install for the latest instructions.
- Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:
$ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz
(You may need to run the command as root or through sudo.)
Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.
- Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):
export PATH=$PATH:/usr/local/go/bin
Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, just run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.
- Verify that you've installed Go by opening a command prompt and typing the following command:
$ go version
- Confirm that the command prints the installed version of Go.
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
curl.exe -LO "https://dl.k8s.io/release/v1.27.2/bin/windows/amd64/kubectl.exe"
choco install kubernetes-helm
Visit https://go.dev/doc/install for the latest instructions.
- Download the go 1.19 MSI package from https://go.dev/dl#go1.19. Be sure to get the correct one for your architecture.
- Open the MSI file you downloaded and follow the prompts to install Go. By default, the installer will install Go to Program Files or Program Files (x86). You can change the location as needed. After installing, you will need to close and reopen any open command prompts so that changes to the environment made by the installer are reflected at the command prompt.
- Verify that you've installed Go:
  1. In Windows, click the Start menu.
  2. In the menu's search box, type cmd, then press the Enter key.
  3. In the Command Prompt window that appears, type the following command:
$ go version
- Confirm that the command prints the installed version of Go.
How to install pre-requisites for a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution
(Tested on an Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz with 32GB RAM, a 64-bit operating system, x64-based processor, running Windows 11 Enterprise)
1. If you're using a VPN, turn it off
2. Install Ubuntu into WSL
2.0 If WSL is not yet installed, open a PowerShell administrator window and run the following
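On recent Windows 10/11 builds a single command installs WSL with a default distribution; this is an assumption about the command the original instructions intended:
wsl --install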
2.1 reboot your system
2.2 In a Windows command terminal, run the following to list all the Linux distributions that are available online
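For example:
wsl --list --online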
2.3 Select a Linux distribution and install it into WSL
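For example, to install the distribution used in this walkthrough (the name must match one shown by the previous command):
wsl --install -d Ubuntu-22.04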
You will see something like:
Installing, this may take a few minutes...
Please create a default UNIX user account. The username does not need to match your Windows username.
For more information visit: https://aka.ms/wslusers
Enter new UNIX username:
2.4 Enter your new username and password at the prompts, and you will eventually see something like:
2.5 Click on the Windows "Start" icon and type in the name of your distribution into the search box. Your new linux distribution should appear as a local "App". You can pin it to the Windows task bar or to Start for your future convenience.
Start a VM using your distribution by clicking on the App.
3. Install pre-requisites into your new VM
3.1 Update and apply apt-get packages
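For example:
sudo apt-get update && sudo apt-get -y upgrade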
3.2 Install golang
wget https://golang.org/dl/go1.19.linux-amd64.tar.gz
sudo tar -zxvf go1.19.linux-amd64.tar.gz -C /usr/local
echo export GOROOT=/usr/local/go | sudo tee -a /etc/profile
echo export PATH="$PATH:/usr/local/go/bin" | sudo tee -a /etc/profile
source /etc/profile
go version
3.3 Install ko (but don't do ko set action step)
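One way to install ko, assuming Go is already installed and $(go env GOPATH)/bin is on your PATH:
go install github.com/google/ko@latest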
3.4 Install gcc (either gcc directly or via build-essential; suggested commands below)
3.5 Install make (if you installed build-essential this may already be installed)
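Suggested commands for gcc and make (either gcc alone or build-essential, which also provides make):
sudo apt-get install -y gcc
# or
sudo apt-get install -y build-essential
# make, if not already present
sudo apt-get install -y make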
3.6 Install jq
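For example:
sudo apt-get install -y jq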
3.7 Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
3.8 Install helm (required when deploying KubeStellar as a workload)
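One suggested way is the official Helm install script (the apt-based commands shown earlier in this section also work):
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash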
Required Packages for the example usage:
You will need the following tools for the example usage of KubeStellar in this quickstart. Select the tab for your environment for suggested commands to install them.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# Install packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
yum -y install epel-release && yum -y install docker && systemctl enable --now docker && systemctl status docker
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-arm64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
How to install docker and kind into a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution
1.0 Start a VM terminal by clicking on the App you configured using the instructions in the General pre-requisites described above.
2.0 Install docker
The installation instructions from docker are not sufficient to get docker working with WSL.
2.1 Follow the instructions here to install docker: https://docs.docker.com/engine/install/ubuntu/
Here are some additional steps you will need to take:
2.2 Ensure that /etc/wsl.conf is configured so that systemd will run on booting.
If /etc/wsl.conf does not contain "[boot] systemd=true", then edit /etc/wsl.conf to insert those lines (see the snippet below).
2.3 Edit /etc/sudoers: it is strongly recommended not to add directives directly to /etc/sudoers, but instead to put them in files in /etc/sudoers.d, which are auto-included. So make/modify a new file there.
2.4 Add your user to the docker group (a suggested command is shown below).
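A hedged sketch of steps 2.2 and 2.4. The [boot] systemd=true stanza comes from the text above; the usermod command is the standard way to join the docker group and is an assumption about what the original instructions contained.
# 2.2: ensure systemd runs on boot (append only if the stanza is not already present)
printf '[boot]\nsystemd=true\n' | sudo tee -a /etc/wsl.conf
# 2.4: add your user to the docker group; log out and back in (or run `newgrp docker`) for it to take effect
sudo usermod -aG docker $USER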
2.5 If dockerd is already running, then stop it and restart it as follows (note: the new dockerd instance will be running in the foreground):
2.5.1 If you encounter an iptables issue, which is described here: https://github.com/microsoft/WSL/issues/6655, the following commands will fix the issue:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo dockerd &
3. You will now need to open new terminals to access the VM since dockerd is running in the foreground of this terminal
3.1 In your new terminal, install kind
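The Linux kind installation shown earlier in this section works here too; for an x86_64 WSL VM, a sketch:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind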
This document is 'docs-ecutable' - you can 'run' this document, just like we do in our testing, on your local environment
This doc shows a detailed example usage of the KubeStellar components.
This example involves two edge clusters and two workloads. One workload goes on both edge clusters and one workload goes on only one edge cluster. Nothing changes after the initial activity.
This example is presented in stages. The controllers involved are always maintaining relationships. This document focuses on changes as they appear in this example.
Stage 1#
Stage 1 creates the infrastructure and the edge service provider workspace (ESPW) and lets that react to the inventory. Then the KubeStellar syncers are deployed in the edge clusters and configured to work with the corresponding mailbox workspaces. This stage has the following steps.
Create two kind clusters.#
This example uses two kind clusters as edge clusters. We will call them "florin" and "guilder".
This example uses extremely simple workloads, which use hostPort networking in Kubernetes. To make those ports easily reachable from your host, this example uses an explicit kind configuration for each edge cluster.
For the florin cluster, which will get only one workload, create a file named florin-config.yaml with the following contents. In a kind config file, containerPort is about the container that is also a host (a Kubernetes node), while the hostPort is about the host that hosts that container.
cat > florin-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 8081
hostPort: 8094
EOF
For the guilder cluster, which will get two workloads, create a file named guilder-config.yaml with the following contents. The workload that uses hostPort 8081 goes in both clusters, while the workload that uses hostPort 8082 goes only in the guilder cluster.
cat > guilder-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 8081
hostPort: 8096
- containerPort: 8082
hostPort: 8097
EOF
Finally, create the two clusters with the following two commands, paying attention to $KUBECONFIG and, if that's empty, ~/.kube/config: kind create will inject/replace the relevant "context" in your active kubeconfig.
kind create cluster --name florin --config florin-config.yaml
kind create cluster --name guilder --config guilder-config.yaml
Create Kind cluster for space management#
kind create cluster --name sm-mgt
KUBECONFIG=~/.kube/config kubectl config rename-context kind-sm-mgt sm-mgt
export SM_CONFIG=~/.kube/config
export SM_CONTEXT=sm-mgt
The space-manager controller#
You can get the latest version from GitHub with the following command, which will get you the default branch (which is named "main"); add -b $branch to the git command in order to get a different branch.
Use the following commands to build and add the executables to your $PATH.
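A hedged sketch of those commands, assuming the space-manager is built from the main kubestellar/kubestellar repository and that the default make target places executables under bin/ (adjust the repository and build target if the controller lives elsewhere):
git clone https://github.com/kubestellar/kubestellar
cd kubestellar
make build   # assumption: builds the space-manager along with the other executables into ./bin
export PATH=$PWD/bin:$PATH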
Deploy kcp and KubeStellar#
You need kcp and KubeStellar and can deploy them in either of two ways: as bare processes on whatever host you are using to run this example, or as workload in a Kubernetes cluster (an OpenShift cluster qualifies). Do one or the other, not both.
KubeStellar only works with release v0.11.0 of kcp.
Deploy kcp and KubeStellar as bare processes#
Start kcp#
Download and build or install kcp, according to your preference. See the start of the kcp quickstart for instructions on that, but get release v0.11.0 rather than the latest (the downloadable assets appear after the long list of changes/features/etc).
Clone the v0.11.0 branch of the kcp source, then build the kubectl-ws binary and include it in $PATH.
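A minimal sketch of those two steps follows; the repository URL is the upstream kcp repository, while the make target and bin/ layout are assumptions (check the kcp v0.11.0 README if they differ). Note that the shell is left inside the kcp directory so that the popd in the next command block returns you to where you started.
git clone -b v0.11.0 https://github.com/kcp-dev/kcp.git kcp
pushd kcp
make build   # assumption: builds the kcp server and the kubectl-ws plugin into ./bin
export PATH=$(pwd)/bin:$PATH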
Running the kcp server creates a hidden subdirectory named .kcp to hold all sorts of state related to the server. If you have run it before and want to start over from scratch then you should rm -rf .kcp first.
Use the following commands to: (a) run the kcp server in a forked command, (b) update your KUBECONFIG environment variable to configure kubectl to use the kubeconfig produced by the kcp server, and (c) wait for the kcp server to get through some initialization. The choice of -v=3 for the kcp server makes it log a line for every HTTP request (among other things).
kcp start -v=3 &> /tmp/kcp.log &
export KUBECONFIG=$(pwd)/.kcp/admin.kubeconfig
popd
# wait until KCP is ready checking availability of ws resource
while ! kubectl ws tree &> /dev/null; do
sleep 10
done
Note that you now care about two different kubeconfig files: the one that you were using earlier, which holds the contexts for your kind clusters, and the one that the kcp server creates. The remainder of this document assumes that your kind cluster contexts are in ~/.kube/config.
Create a space provider description for KCP#
Space provider for KCP will allow you to use KCP as backend provider for spaces. Use the following commands to create a provider secret for KCP access and a space provider definition.
KUBECONFIG=$SM_CONFIG kubectl --context sm-mgt create secret generic kcpsec --from-file=kubeconfig=$KUBECONFIG --from-file=incluster=$KUBECONFIG
KUBECONFIG=$SM_CONFIG kubectl --context sm-mgt apply -f - <<EOF
apiVersion: space.kubestellar.io/v1alpha1
kind: SpaceProviderDesc
metadata:
name: default
spec:
ProviderType: "kcp"
SpacePrefixForDiscovery: "ks-"
secretRef:
namespace: default
name: kcpsec
EOF
Next, use the following command to wait for the space-manager to process the provider.
KUBECONFIG=$SM_CONFIG kubectl --context sm-mgt wait --for=jsonpath='{.status.Phase}'=Ready spaceproviderdesc/default --timeout=90s
Get KubeStellar#
You will need a local copy of KubeStellar. You can either use the pre-built archive (containing executables and config files) from a release or get any desired version from GitHub and build.
Use pre-built archive#
Fetch the archive for your operating system and instruction set architecture as follows, in which $kubestellar_version is your chosen release of KubeStellar (see the releases on GitHub) and $os_type and $arch_type are chosen according to the list of "assets" for your chosen release.
curl -SL -o kubestellar.tar.gz "https://github.com/kubestellar/kubestellar/releases/download/${kubestellar_version}/kubestellar_${kubestellar_version}_${os_type}_${arch_type}.tar.gz"
tar xzf kubestellar.tar.gz
export PATH=$PWD/bin:$PATH
Get from GitHub#
You can get the latest version from GitHub with the following command, which will get you the default branch (which is named "main"); add -b $branch to the git command in order to get a different branch.
Use the following commands to build and add the executables to your $PATH.
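A hedged sketch of those commands, assuming that the default make target places the executables under bin/:
git clone https://github.com/kubestellar/kubestellar
cd kubestellar
make build   # assumption: the default build target populates ./bin
export PATH=$PWD/bin:$PATH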
In the following exhibited command lines, the commands described as "KubeStellar commands" and the commands that start with kubectl kubestellar rely on the KubeStellar bin directory being on the $PATH. Alternatively you could invoke them with explicit pathnames. The kubectl plugin lines use fully specific executables (e.g., kubectl kubestellar prep-for-syncer corresponds to bin/kubectl-kubestellar-prep_for_syncer).
Get binaries of kube-bind and dex#
The command below makes the kube-bind binaries and the dex binary available in $PATH.
rm -rf kube-bind
git clone https://github.com/waltforme/kube-bind.git && \
pushd kube-bind && \
mkdir bin && \
IGNORE_GO_VERSION=1 go build -o ./bin/example-backend ./cmd/example-backend/main.go && \
git checkout origin/syncmore && \
IGNORE_GO_VERSION=1 go build -o ./bin/konnector ./cmd/konnector/main.go && \
git checkout origin/autobind && \
IGNORE_GO_VERSION=1 go build -o ./bin/kubectl-bind ./cmd/kubectl-bind/main.go && \
export PATH=$(pwd)/bin:$PATH && \
popd && \
git clone https://github.com/dexidp/dex.git && \
pushd dex && \
IGNORE_GO_VERSION=1 make build && \
export PATH=$(pwd)/bin:$PATH && \
popd
Initialize the KubeStellar platform as bare processes#
In this step KubeStellar creates and populates the Edge Service Provider Workspace (ESPW), which exports the KubeStellar API, and also augments the root:compute workspace from kcp TMC as needed here. That augmentation consists of adding authorization to update the relevant /status and /scale subresources (missing in kcp TMC) and extending the supported subset of the Kubernetes API for managing containerized workloads from the four resources built into kcp TMC (Deployment, Pod, Service, and Ingress) to the other ones that are meaningful in KubeStellar.
Deploy kcp and KubeStellar as a workload in a Kubernetes cluster#
(This style of deployment requires release v0.6 or later of KubeStellar.)
You need a Kubernetes cluster; see the documentation for kubectl kubestellar deploy
for more information.
You will need a domain name that, on each of your clients, resolves to an IP address that the client can use to open a TCP connection to the Ingress controller's listening socket.
You will need the kcp kubectl plugins. See the "Start kcp" section above for instructions on how to get all of the kcp executables.
You will need to get a build of KubeStellar. See above.
To do the deployment and prepare to use it you will be using the commands defined for that. These require your shell to be in a state where kubectl manipulates the hosting cluster (the Kubernetes cluster into which you want to deploy kcp and KubeStellar), either by virtue of having set your KUBECONFIG envar appropriately, or by putting the relevant contents in ~/.kube/config, or by passing --kubeconfig explicitly on the following command lines.
Use the kubectl kubestellar deploy command to do the deployment.
Then use the kubectl kubestellar get-external-kubeconfig command to put into a file the kubeconfig that you will use as a user of kcp and KubeStellar. Do not overwrite the kubeconfig file for your hosting cluster. But do update your KUBECONFIG envar setting or remember to pass the new file with --kubeconfig on the command lines when using kcp or KubeStellar. For example, you might use the following commands to fetch and start using that kubeconfig file; the first assumes that you deployed the core into a Kubernetes namespace named "kubestellar".
kubectl kubestellar get-external-kubeconfig -n kubestellar -o kcs.kubeconfig
export KUBECONFIG=$(pwd)/kcs.kubeconfig
Note that you now care about two different kubeconfig files: the one that you were using earlier, which holds the contexts for your kind clusters, and the one that you just fetched and started using for working with the KubeStellar interface. The remainder of this document assumes that your kind cluster contexts are in ~/.kube/config.
Create SyncTarget and Location objects to represent the florin and guilder clusters#
Use the following two commands to put inventory objects in the IMW at root:imw1 that was automatically created during deployment of KubeStellar. They label both florin and guilder with env=prod, and also label guilder with extended=yes.
imw1_space_config="${PWD}/temp-space-config/spaceprovider-default-imw1"
kubectl-kubestellar-get-config-for-space --space-name imw1 --provider-name default --sm-core-config $SM_CONFIG --space-config-file $imw1_space_config
KUBECONFIG=$imw1_space_config kubectl kubestellar ensure location florin loc-name=florin env=prod --imw imw1
KUBECONFIG=$imw1_space_config kubectl kubestellar ensure location guilder loc-name=guilder env=prod extended=yes --imw imw1
echo "describe the florin location object"
KUBECONFIG=$imw1_space_config kubectl describe location.edge.kubestellar.io florin
Those two script invocations are equivalent to creating the following four objects plus the kcp APIBinding objects that import the definition of the KubeStellar API.
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncTarget
metadata:
name: florin
labels:
id: florin
loc-name: florin
env: prod
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Location
metadata:
name: florin
labels:
loc-name: florin
env: prod
spec:
resource: {group: edge.kubestellar.io, version: v2alpha1, resource: synctargets}
instanceSelector:
matchLabels: {id: florin}
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncTarget
metadata:
name: guilder
labels:
id: guilder
loc-name: guilder
env: prod
extended: yes
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Location
metadata:
name: guilder
labels:
loc-name: guilder
env: prod
extended: yes
spec:
resource: {group: edge.kubestellar.io, version: v2alpha1, resource: synctargets}
instanceSelector:
matchLabels: {id: guilder}
That script also deletes the Location named default, which is not used in this PoC, if it shows up.
The mailbox controller#
The mailbox controller is one of the central controllers of KubeStellar. If you have deployed the KubeStellar core as Kubernetes workload then this controller is already running in a pod in your hosting cluster. If instead you are running these controllers as bare processes then launch this controller as follows.
# TODO: mailbox controller has kcp dependencies. Will remove when controllers support spaces.
kubectl ws root:espw
mailbox-controller -v=2 &
sleep 20
This controller is in charge of maintaining the collection of mailbox workspaces, which are an implementation detail not intended for user consumption. You can use the following command to wait for the appearance of the mailbox workspaces implied by the florin and guilder SyncTarget objects that you made earlier.
# TODO: leaving this here so we get a list of all the workspaces. Once all the workspaces, including the ones created by the mailbox are converted to spaces, we'll remove this ws."
kubectl ws root
while [ $(kubectl ws tree | grep "\-mb\-" | wc -l) -ne 2 ]; do
sleep 10
done
If it is working correctly, lines like the following will appear in the controller's log (which is being written into your shell if you ran the controller as a bare process above, otherwise you can fetch as directed).
...
I0721 17:37:10.186848 189094 main.go:206] "Found APIExport view" exportName="e
dge.kubestellar.io" serverURL="https://10.0.2.15:6443/services/apiexport/cseslli1ddit3s
a5/edge.kubestellar.io"
...
I0721 19:17:21.906984 189094 controller.go:300] "Created APIBinding" worker=1
mbwsName="1d55jhazpo3d3va6-mb-551bebfd-b75e-47b1-b2e0-ff0a4cb7e006" mbwsCluster
="32x6b03ixc49cj48" bindingName="bind-edge" resourceVersion="1247"
...
I0721 19:18:56.203057 189094 controller.go:300] "Created APIBinding" worker=0
mbwsName="1d55jhazpo3d3va6-mb-732cf72a-1ca9-4def-a5e7-78fd0e36e61c" mbwsCluster
="q31lsrpgur3eg9qk" bindingName="bind-edge" resourceVersion="1329"
^C
You need a -v setting of 2 or numerically higher to get log messages about individual mailbox workspaces.
A mailbox workspace name is distinguished by the -mb- separator.
You can get a listing of those mailbox workspaces as follows.
# TODO: currently some workspaces are not created as spaces, specifically the mailbox workspaces, so leaving this code.
kubectl ws root
kubectl get Workspaces
NAME TYPE REGION PHASE URL AGE
1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1 universal Ready https://192.168.58.123:6443/clusters/1najcltzt2nqax47 50s
1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c universal Ready https://192.168.58.123:6443/clusters/1y7wll1dz806h3sb 50s
compute universal Ready https://172.20.144.39:6443/clusters/root:compute 6m8s
espw organization Ready https://172.20.144.39:6443/clusters/root:espw 2m4s
imw1 organization Ready https://172.20.144.39:6443/clusters/root:imw1 1m9s
More usefully, using custom columns you can get a listing that shows the name of the associated SyncTarget.
# TODO: currently some workspaces are not created as spaces, specifically the mailbox workspaces, so leaving this code
kubectl get Workspace -o "custom-columns=NAME:.metadata.name,SYNCTARGET:.metadata.annotations['edge\.kubestellar\.io/sync-target-name'],CLUSTER:.spec.cluster"
NAME SYNCTARGET CLUSTER
1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1 florin 1najcltzt2nqax47
1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c guilder 1y7wll1dz806h3sb
compute <none> mqnl7r5f56hswewy
espw <none> 2n88ugkhysjbxqp5
imw1 <none> 4d2r9stcyy2qq5c1
Also: if you ever need to look up just one mailbox workspace by SyncTarget name, you could do it as follows.
# TODO: currently some workspaces are not created as spaces, specifically the mailbox workspaces, so leaving this code
GUILDER_WS=$(kubectl get Workspace -o json | jq -r '.items | .[] | .metadata | select(.annotations ["edge.kubestellar.io/sync-target-name"] == "guilder") | .name')
echo The guilder mailbox workspace name is $GUILDER_WS
# TODO: currently some workspaces are not created as spaces, specifically the mailbox workspaces, so leaving this code
FLORIN_WS=$(kubectl get Workspace -o json | jq -r '.items | .[] | .metadata | select(.annotations ["edge.kubestellar.io/sync-target-name"] == "florin") | .name')
echo The florin mailbox workspace name is $FLORIN_WS
Connect guilder edge cluster with its mailbox workspace#
The following command will (a) create, in the mailbox workspace for guilder, an identity and authorizations for the edge syncer and (b) write a file containing YAML for deploying the syncer in the guilder cluster.
Current workspace is "root:imw1".
Current workspace is "root:espw".
Current workspace is "root:espw:1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c" (type root:universal).
Creating service account "kubestellar-syncer-guilder-wfeig2lv"
Creating cluster role "kubestellar-syncer-guilder-wfeig2lv" to give service account "kubestellar-syncer-guilder-wfeig2lv"
1. write and sync access to the synctarget "kubestellar-syncer-guilder-wfeig2lv"
2. write access to apiresourceimports.
Creating or updating cluster role binding "kubestellar-syncer-guilder-wfeig2lv" to bind service account "kubestellar-syncer-guilder-wfeig2lv" to cluster role "kubestellar-syncer-guilder-wfeig2lv".
Wrote WEC manifest to guilder-syncer.yaml for namespace "kubestellar-syncer-guilder-wfeig2lv". Use
KUBECONFIG=<workload-execution-cluster-config> kubectl apply -f "guilder-syncer.yaml"
to apply it. Use
KUBECONFIG=<workload-execution-cluster-config> kubectl get deployment -n "kubestellar-syncer-guilder-wfeig2lv" kubestellar-syncer-guilder-wfeig2lv
to verify the syncer pod is running.
Current workspace is "root:espw".
The file written was, as mentioned in the output, guilder-syncer.yaml. Next kubectl apply that to the guilder cluster. That will look something like the following; adjust as necessary to make kubectl manipulate your guilder cluster.
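For example, using the kubeconfig that kind maintains and its default context name for the guilder cluster (adjust if your context differs), the apply would look like this and should report creations like those below.
KUBECONFIG=~/.kube/config kubectl --context kind-guilder apply -f guilder-syncer.yaml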
namespace/kubestellar-syncer-guilder-wfeig2lv created
serviceaccount/kubestellar-syncer-guilder-wfeig2lv created
secret/kubestellar-syncer-guilder-wfeig2lv-token created
clusterrole.rbac.authorization.k8s.io/kubestellar-syncer-guilder-wfeig2lv created
clusterrolebinding.rbac.authorization.k8s.io/kubestellar-syncer-guilder-wfeig2lv created
secret/kubestellar-syncer-guilder-wfeig2lv created
deployment.apps/kubestellar-syncer-guilder-wfeig2lv created
You might check that the syncer is running, as follows.
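For example (again assuming the kind-guilder context):
KUBECONFIG=~/.kube/config kubectl --context kind-guilder get deployments -A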
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kubestellar-syncer-guilder-saaywsu5 kubestellar-syncer-guilder-saaywsu5 1/1 1 1 52s
kube-system coredns 2/2 2 2 35m
local-path-storage local-path-provisioner 1/1 1 1 35m
Connect florin edge cluster with its mailbox workspace#
Do the analogous stuff for the florin cluster.
Current workspace is "root:imw1".
Current workspace is "root:espw".
Current workspace is "root:espw:1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1" (type root:universal).
Creating service account "kubestellar-syncer-florin-32uaph9l"
Creating cluster role "kubestellar-syncer-florin-32uaph9l" to give service account "kubestellar-syncer-florin-32uaph9l"
1. write and sync access to the synctarget "kubestellar-syncer-florin-32uaph9l"
2. write access to apiresourceimports.
Creating or updating cluster role binding "kubestellar-syncer-florin-32uaph9l" to bind service account "kubestellar-syncer-florin-32uaph9l" to cluster role "kubestellar-syncer-florin-32uaph9l".
Wrote workload execution cluster (WEC) manifest to florin-syncer.yaml for namespace "kubestellar-syncer-florin-32uaph9l". Use
KUBECONFIG=<workload-execution-cluster-config> kubectl apply -f "florin-syncer.yaml"
to apply it. Use
KUBECONFIG=<workload-execution-cluster-config> kubectl get deployment -n "kubestellar-syncer-florin-32uaph9l" kubestellar-syncer-florin-32uaph9l
to verify the syncer pod is running.
Current workspace is "root:espw".
And deploy the syncer in the florin cluster.
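For example (assuming the kind default context name for florin):
KUBECONFIG=~/.kube/config kubectl --context kind-florin apply -f florin-syncer.yaml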
namespace/kubestellar-syncer-florin-32uaph9l created
serviceaccount/kubestellar-syncer-florin-32uaph9l created
secret/kubestellar-syncer-florin-32uaph9l-token created
clusterrole.rbac.authorization.k8s.io/kubestellar-syncer-florin-32uaph9l created
clusterrolebinding.rbac.authorization.k8s.io/kubestellar-syncer-florin-32uaph9l created
secret/kubestellar-syncer-florin-32uaph9l created
deployment.apps/kubestellar-syncer-florin-32uaph9l created
Stage 2#
Stage 2 creates two workloads, called "common" and "special", and lets the Where Resolver react. It has the following steps.
Create and populate the workload management workspace for the common workload#
One of the workloads is called "common", because it will go to both edge clusters. The other one is called "special".
In this example, each workload description goes in its own workload management workspace (WMW). Start by creating a WMW for the common workload, with the following commands.
IN_CLUSTER=false SPACE_MANAGER_KUBECONFIG=$SM_CONFIG kubectl kubestellar ensure wmw wmw-c
wmw_c_space_config=$PWD/temp-space-config/spaceprovider-default-wmw-c
This is equivalent to creating that workspace and then entering it and creating the following two APIBinding objects.
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
name: bind-espw
spec:
reference:
export:
path: root:espw
name: edge.kubestellar.io
---
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
name: bind-kube
spec:
reference:
export:
path: "root:compute"
name: kubernetes
Next, use kubectl to create the following workload objects in that workspace. The workload in this example is an Apache httpd server that serves up a very simple web page, conveyed via a Kubernetes ConfigMap that is mounted as a volume for the httpd pod.
kubectl --kubeconfig $wmw_c_space_config apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
name: commonstuff
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: commonstuff
name: httpd-htdocs
annotations:
edge.kubestellar.io/expand-parameters: "true"
data:
index.html: |
<!DOCTYPE html>
<html>
<body>
This is a common web site.
Running in %(loc-name).
</body>
</html>
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Customizer
metadata:
namespace: commonstuff
name: example-customizer
annotations:
edge.kubestellar.io/expand-parameters: "true"
replacements:
- path: "$.spec.template.spec.containers.0.env.0.value"
value: '"env is %(env)"'
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
namespace: commonstuff
name: commond
annotations:
edge.kubestellar.io/customizer: example-customizer
spec:
selector: {matchLabels: {app: common} }
template:
metadata:
labels: {app: common}
spec:
containers:
- name: httpd
env:
- name: EXAMPLE_VAR
value: example value
image: library/httpd:2.4
ports:
- name: http
containerPort: 80
hostPort: 8081
protocol: TCP
volumeMounts:
- name: htdocs
readOnly: true
mountPath: /usr/local/apache2/htdocs
volumes:
- name: htdocs
configMap:
name: httpd-htdocs
optional: false
EOF
Finally, use kubectl to create the following EdgePlacement object. Its "where predicate" (the locationSelectors array) has one label selector that matches both Location objects created earlier, thus directing the common workload to both edge clusters.
kubectl --kubeconfig $wmw_c_space_config apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
name: edge-placement-c
spec:
locationSelectors:
- matchLabels: {"env":"prod"}
downsync:
- apiGroup: ""
resources: [ configmaps ]
namespaces: [ commonstuff ]
objectNames: [ httpd-htdocs ]
- apiGroup: apps
resources: [ replicasets ]
namespaces: [ commonstuff ]
wantSingletonReportedState: true
upsync:
- apiGroup: "group1.test"
resources: ["sprockets", "flanges"]
namespaces: ["orbital"]
names: ["george", "cosmo"]
- apiGroup: "group2.test"
resources: ["cogs"]
names: ["william"]
EOF
Create and populate the workload management workspace for the special workload#
Use the following kubectl commands to create the WMW for the special workload.
IN_CLUSTER=false SPACE_MANAGER_KUBECONFIG=$SM_CONFIG kubectl kubestellar ensure wmw wmw-s
wmw_s_space_config=$PWD/temp-space-config/spaceprovider-default-wmw-s
In this workload we will also demonstrate how to downsync objects whose kind is defined by a CustomResourceDefinition object. We will use the one from the Kubernetes documentation for CRDs, modified so that the resource it defines is in the category all. First, create the definition object with the following command.
kubectl --kubeconfig $wmw_s_space_config apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
# name must match the spec fields below, and be in the form: <plural>.<group>
name: crontabs.stable.example.com
spec:
# group name to use for REST API: /apis/<group>/<version>
group: stable.example.com
# list of versions supported by this CustomResourceDefinition
versions:
- name: v1
# Each version can be enabled/disabled by Served flag.
served: true
# One and only one version must be marked as the storage version.
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
cronSpec:
type: string
image:
type: string
replicas:
type: integer
# either Namespaced or Cluster
scope: Namespaced
names:
# plural name to be used in the URL: /apis/<group>/<version>/<plural>
plural: crontabs
# singular name to be used as an alias on the CLI and for display
singular: crontab
# kind is normally the CamelCased singular type. Your resource manifests use this.
kind: CronTab
# shortNames allow shorter string to match your resource on the CLI
shortNames:
- ct
categories:
- all
EOF
Next, use the following command to wait for the apiserver to process that definition.
kubectl --kubeconfig $wmw_s_space_config wait --for condition=Established crd crontabs.stable.example.com
Next, use kubectl to create the following workload objects in that workspace. The APIService object included here does not contribute to the httpd workload but is here to demonstrate that APIService objects can be downsynced.
kubectl --kubeconfig $wmw_s_space_config apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
name: specialstuff
labels: {special: "yes"}
annotations: {just-for: fun}
---
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
name: my-new-cron-object
namespace: specialstuff
spec:
cronSpec: "* * * * */5"
image: my-awesome-cron-image
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: specialstuff
name: httpd-htdocs
annotations:
edge.kubestellar.io/expand-parameters: "true"
data:
index.html: |
<!DOCTYPE html>
<html>
<body>
This is a special web site.
Running in %(loc-name).
</body>
</html>
---
apiVersion: edge.kubestellar.io/v2alpha1
kind: Customizer
metadata:
namespace: specialstuff
name: example-customizer
annotations:
edge.kubestellar.io/expand-parameters: "true"
replacements:
- path: "$.spec.template.spec.containers.0.env.0.value"
value: '"in %(env) env"'
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: specialstuff
name: speciald
annotations:
edge.kubestellar.io/customizer: example-customizer
spec:
selector: {matchLabels: {app: special} }
template:
metadata:
labels: {app: special}
spec:
containers:
- name: httpd
env:
- name: EXAMPLE_VAR
value: example value
image: library/httpd:2.4
ports:
- name: http
containerPort: 80
hostPort: 8082
protocol: TCP
volumeMounts:
- name: htdocs
readOnly: true
mountPath: /usr/local/apache2/htdocs
volumes:
- name: htdocs
configMap:
name: httpd-htdocs
optional: false
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1090.example.my
spec:
group: example.my
groupPriorityMinimum: 360
service:
name: my-service
namespace: my-example
version: v1090
versionPriority: 42
EOF
Finally, use kubectl to create the following EdgePlacement object. Its "where predicate" (the locationSelectors array) has one label selector that matches only one of the Location objects created earlier, thus directing the special workload to just one edge cluster.
The "what predicate" explicitly includes the Namespace object named "specialstuff", which causes all of its desired state (including labels and annotations) to be downsynced. This contrasts with the common EdgePlacement, which does not explicitly mention the commonstuff namespace, relying on the implicit creation of namespaces as needed in the WECs.
kubectl --kubeconfig $wmw_s_space_config apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
name: edge-placement-s
spec:
locationSelectors:
- matchLabels: {"env":"prod","extended":"yes"}
downsync:
- apiGroup: ""
resources: [ configmaps ]
namespaceSelectors:
- matchLabels: {"special":"yes"}
- apiGroup: apps
resources: [ deployments ]
namespaceSelectors:
- matchLabels: {"special":"yes"}
objectNames: [ speciald ]
- apiGroup: apiregistration.k8s.io
resources: [ apiservices ]
objectNames: [ v1090.example.my ]
- apiGroup: stable.example.com
resources: [ crontabs ]
namespaces: [ specialstuff ]
objectNames: [ my-new-cron-object ]
- apiGroup: ""
resources: [ namespaces ]
objectNames: [ specialstuff ]
wantSingletonReportedState: true
upsync:
- apiGroup: "group1.test"
resources: ["sprockets", "flanges"]
namespaces: ["orbital"]
names: ["george", "cosmo"]
- apiGroup: "group3.test"
resources: ["widgets"]
names: ["*"]
EOF
Where Resolver#
In response to each EdgePlacement, the Where Resolver will create a corresponding SinglePlacementSlice object. These will indicate the following resolutions of the "where" predicates.
| EdgePlacement | Resolved Where |
|---|---|
| edge-placement-c | florin, guilder |
| edge-placement-s | guilder |
If you have deployed the KubeStellar core in a Kubernetes cluster then the where-resolver is running in a pod there. If instead you are running the core controllers as bare processes then you can use the following commands to launch the where-resolver; it requires the ESPW to be the current kcp workspace at start time.
espw_space_config="${PWD}/temp-space-config/spaceprovider-default-espw"
kubectl-kubestellar-get-config-for-space --space-name espw --provider-name default --sm-core-config $SM_CONFIG --space-config-file $espw_space_config
# TODO: where-resolver needs access to multiple configs. Will remove when controllers support spaces.
kubectl ws root:espw
kubestellar-where-resolver &
sleep 10
The following commands wait until the where-resolver has done its job for the common and special EdgePlacement objects.
while ! kubectl --kubeconfig $wmw_c_space_config get SinglePlacementSlice &> /dev/null; do
sleep 10
done
while ! kubectl --kubeconfig $wmw_s_space_config get SinglePlacementSlice &> /dev/null; do
sleep 10
done
If things are working properly then you will see log lines like the following (among many others) in the where-resolver's log.
I0423 01:33:37.036752 11305 main.go:212] "Found APIExport view" exportName="edge.kubestellar.io" serverURL="https://192.168.58.123:6443/services/apiexport/7qkse309upzrv0fy/edge.kubestellar.io"
...
I0423 01:33:37.320859 11305 reconcile_on_location.go:192] "updated SinglePlacementSlice" controller="kubestellar-where-resolver" triggeringKind=Location key="apmziqj9p9fqlflm|florin" locationWorkspace="apmziqj9p9fqlflm" location="florin" workloadWorkspace="10l175x6ejfjag3e" singlePlacementSlice="edge-placement-c"
...
I0423 01:33:37.391772 11305 reconcile_on_location.go:192] "updated SinglePlacementSlice" controller="kubestellar-where-resolver" triggeringKind=Location key="apmziqj9p9fqlflm|guilder" locationWorkspace="apmziqj9p9fqlflm" location="guilder" workloadWorkspace="10l175x6ejfjag3e" singlePlacementSlice="edge-placement-c"
Check out a SinglePlacementSlice object as follows.
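For example, the following command (using the wmw-c space config fetched earlier) lists the resolution for the common workload; its output should look like the YAML below.
kubectl --kubeconfig $wmw_c_space_config get SinglePlacementSlice -o yaml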
apiVersion: v1
items:
- apiVersion: edge.kubestellar.io/v2alpha1
destinations:
- cluster: apmziqj9p9fqlflm
locationName: florin
syncTargetName: florin
syncTargetUID: b8c64c64-070c-435b-b3bd-9c0f0c040a54
- cluster: apmziqj9p9fqlflm
locationName: guilder
syncTargetName: guilder
syncTargetUID: bf452e1f-45a0-4d5d-b35c-ef1ece2879ba
kind: SinglePlacementSlice
metadata:
annotations:
kcp.io/cluster: 10l175x6ejfjag3e
creationTimestamp: "2023-04-23T05:33:37Z"
generation: 4
name: edge-placement-c
ownerReferences:
- apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
name: edge-placement-c
uid: 199cfe1e-48d9-4351-af5c-e66c83bf50dd
resourceVersion: "1316"
uid: b5db1f9d-1aed-4a25-91da-26dfbb5d8879
kind: List
metadata:
resourceVersion: ""
Also check out the SinglePlacementSlice objects in root:wmw-s. It should go similarly, but the destinations should include only the entry for guilder.
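A hedged way to check, using the wmw-s space config fetched earlier:
kubectl --kubeconfig $wmw_s_space_config get SinglePlacementSlice -o yaml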
Stage 3#
In Stage 3, in response to the EdgePlacement and SinglePlacementSlice objects, the placement translator will copy the workload prescriptions into the mailbox workspaces and create SyncerConfig objects there.
If you have deployed the KubeStellar core as workload in a Kubernetes cluster then the placement translator is running in a Pod there. If instead you are running the core controllers as bare processes then use the following commands to launch the placement translator; it requires the ESPW to be current at start time.
# TODO: placement-translator needs access to multiple configs. Will remove when controllers support spaces.
kubectl ws root:espw
placement-translator &
sleep 10
The following commands wait for the placement translator to get its job done for this example.
# TODO: unfortunately, the $FLORIN_WS and $GUILDER_WS are mailbox ws names that we are not supporting in the space framework yet
# wait until SyncerConfig, ReplicaSets and Deployments are ready
mbxws=($FLORIN_WS $GUILDER_WS)
for ii in "${mbxws[@]}"; do
kubectl ws root:$ii
# wait for SyncerConfig resource
while ! kubectl get SyncerConfig the-one &> /dev/null; do
sleep 10
done
echo "* SyncerConfig resource exists in mailbox $ii"
# wait for ReplicaSet resource
while ! kubectl get rs &> /dev/null; do
sleep 10
done
echo "* ReplicaSet resource exists in mailbox $ii"
# wait until ReplicaSet in mailbox
while ! kubectl get rs -n commonstuff commond; do
sleep 10
done
echo "* commonstuff ReplicaSet in mailbox $ii"
done
# check for deployment in guilder
while ! kubectl get deploy -A &> /dev/null; do
sleep 10
done
echo "* Deployment resource exists"
while ! kubectl get deploy -n specialstuff speciald; do
sleep 10
done
echo "* specialstuff Deployment in its mailbox"
# wait for crontab CRD to be established
while ! kubectl get crd crontabs.stable.example.com; do sleep 10; done
kubectl wait --for condition=Established crd crontabs.stable.example.com
echo "* CronTab CRD is established in its mailbox"
# wait for my-new-cron-object to be in its mailbox
while ! kubectl get ct -n specialstuff my-new-cron-object; do sleep 10; done
echo "* CronTab my-new-cron-object is in its mailbox"
You can check that the common workload's ReplicaSet objects got to their mailbox workspaces with the following command. It will list the two copies of that object, each with an annotation whose key is kcp.io/cluster and whose value is the kcp logicalcluster.Name of the mailbox workspace; those names appear in the "CLUSTER" column of the custom-columns listing near the end of the section above about the mailbox controller.
# TODO: kubestellar-list-syncing-objects has kcp dependencies. Will remove when controllers support spaces.
kubestellar-list-syncing-objects --api-group apps --api-kind ReplicaSet
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
annotations:
edge.kubestellar.io/customizer: example-customizer
kcp.io/cluster: 1y7wll1dz806h3sb
... (lots of other details) ...
name: commond
namespace: commonstuff
spec:
... (the customized spec) ...
status:
... (may be filled in by the time you look) ...
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
annotations:
edge.kubestellar.io/customizer: example-customizer
kcp.io/cluster: 1najcltzt2nqax47
... (lots of other details) ...
name: commond
namespace: commonstuff
spec:
... (the customized spec) ...
status:
... (may be filled in by the time you look) ...
That display should show objects in two different mailbox workspaces; the following command checks that.
# TODO: kubestellar-list-syncing-objects has kcp dependencies. Will remove when controllers support spaces.
test $(kubestellar-list-syncing-objects --api-group apps --api-kind ReplicaSet | grep "^ *kcp.io/cluster: [0-9a-z]*$" | sort | uniq | wc -l) -ge 2
The various APIBinding and CustomResourceDefinition objects involved should also appear in the mailbox workspaces.
# TODO: kubestellar-list-syncing-objects has kcp dependencies. Will remove when controllers support spaces.
test $(kubestellar-list-syncing-objects --api-group apis.kcp.io --api-version v1alpha1 --api-kind APIBinding | grep -cw "name: bind-apps") -ge 2
kubestellar-list-syncing-objects --api-group apis.kcp.io --api-version v1alpha1 --api-kind APIBinding | grep -w "name: bind-kubernetes"
kubestellar-list-syncing-objects --api-group apiextensions.k8s.io --api-kind CustomResourceDefinition | fgrep -w "name: crontabs.stable.example.com"
The APIService of the special workload should also appear, along with some error messages about APIService not being known in the other mailbox workspaces.
# TODO: kubestellar-list-syncing-objects has kcp dependencies. Will remove when controllers support spaces.
kubestellar-list-syncing-objects --api-group apiregistration.k8s.io --api-kind APIService 2>&1 | grep -v "APIService.*the server could not find the requested resource" | fgrep -w "name: v1090.example.my"
The florin cluster gets only the common workload. Examine florin's SyncerConfig as follows. Utilize the name of the mailbox workspace for florin (which you stored in Stage 1) here.
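A sketch of that examination (assuming $FLORIN_WS still holds the florin mailbox workspace name from Stage 1):
kubectl ws root:$FLORIN_WS
kubectl get SyncerConfig the-one -o yaml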
Current workspace is "root:1t82bk54r6gjnzsp-mb-1a045336-8178-4026-8a56-5cd5609c0ec1" (type root:universal).
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncerConfig
metadata:
annotations:
kcp.io/cluster: 12299slctppnhjnn
creationTimestamp: "2023-04-23T05:39:56Z"
generation: 3
name: the-one
resourceVersion: "1323"
uid: 8840fee6-37dc-407e-ad01-2ad59389d4ff
spec:
namespaceScope: {}
namespacedObjects:
- apiVersion: v1
group: ""
objectsByNamespace:
- names:
- httpd-htdocs
namespace: commonstuff
resource: configmaps
- apiVersion: v1
group: apps
objectsByNamespace:
- names:
- commond
namespace: commonstuff
resource: replicasets
upsync:
- apiGroup: group1.test
names:
- george
- cosmo
namespaces:
- orbital
resources:
- sprockets
- flanges
- apiGroup: group2.test
names:
- william
resources:
- cogs
status: {}
The guilder cluster gets both the common and special workloads. Examine guilder's SyncerConfig object and workloads as follows, using the mailbox workspace name that you stored in Stage 1.
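Similarly for guilder (assuming $GUILDER_WS still holds its mailbox workspace name):
kubectl ws root:$GUILDER_WS
kubectl get SyncerConfig the-one -o yaml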
Current workspace is "root:1t82bk54r6gjnzsp-mb-f0a82ab1-63f4-49ea-954d-3a41a35a9f1c" (type root:universal).
apiVersion: edge.kubestellar.io/v2alpha1
kind: SyncerConfig
metadata:
annotations:
kcp.io/cluster: yk9a66vjms1pi8hu
creationTimestamp: "2023-04-23T05:39:56Z"
generation: 4
name: the-one
resourceVersion: "1325"
uid: 3da056c7-0d5c-45a3-9d91-d04f04415f30
spec:
clusterScope:
- apiVersion: v1
group: ""
objects:
- specialstuff
resource: namespaces
- apiVersion: v1
group: apiextensions.k8s.io
objects:
- crontabs.stable.example.com
resource: customresourcedefinitions
- apiVersion: v1
group: apiregistration.k8s.io
objects:
- v1090.example.my
resource: apiservices
namespaceScope: {}
namespacedObjects:
- apiVersion: v1
group: apps
objectsByNamespace:
- names:
- commond
namespace: commonstuff
resource: replicasets
- apiVersion: v1
group: stable.example.com
objectsByNamespace:
- names:
- my-new-cron-object
namespace: specialstuff
resource: crontabs
- apiVersion: v1
group: apps
objectsByNamespace:
- names:
- speciald
namespace: specialstuff
resource: deployments
- apiVersion: v1
group: ""
objectsByNamespace:
- names:
- httpd-htdocs
namespace: commonstuff
- names:
- httpd-htdocs
namespace: specialstuff
resource: configmaps
upsync:
- apiGroup: group3.test
names:
- '*'
resources:
- widgets
- apiGroup: group1.test
names:
- george
- cosmo
namespaces:
- orbital
resources:
- sprockets
- flanges
- apiGroup: group2.test
names:
- william
resources:
- cogs
status: {}
You can check for specific workload objects here with the following command.
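For example, while the guilder mailbox workspace is current (as after the previous commands), something like the following should produce the listing below.
kubectl get deployments,replicasets --all-namespaces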
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
specialstuff deployment.apps/speciald 0/0 1 0 12m
NAMESPACE NAME DESIRED CURRENT READY AGE
commonstuff replicaset.apps/commond 0 1 1 7m4s
Stage 4#
In Stage 4, the edge syncer does its thing. Actually, it should have done it as soon as the relevant inputs became available in stage 3. Now we examine what happened.
You can check that the workloads are running in the edge clusters as they should be.
The syncer does its thing between the florin cluster and its mailbox workspace. This is driven by the SyncerConfig object named the-one in that mailbox workspace.
The syncer does its thing between the guilder cluster and its mailbox workspace. This is driven by the SyncerConfig object named the-one in that mailbox workspace.
Using the kubeconfig that kind modified, examine the florin cluster. Find just the commonstuff namespace and the commond ReplicaSet.
( KUBECONFIG=~/.kube/config
let tries=1
while ! kubectl --context kind-florin get ns commonstuff &> /dev/null; do
if (( tries >= 30)); then
echo 'The commonstuff namespace failed to appear in florin!' >&2
exit 10
fi
let tries=tries+1
sleep 10
done
kubectl --context kind-florin get ns
)
NAME STATUS AGE
commonstuff Active 6m51s
default Active 57m
kubestellar-syncer-florin-1t9zgidy Active 17m
kube-node-lease Active 57m
kube-public Active 57m
kube-system Active 57m
local-path-storage Active 57m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
NAMESPACE NAME DESIRED CURRENT READY AGE
commonstuff replicaset.apps/commond 1 1 1 13m
Examine the guilder cluster. Find both workload namespaces, the Deployment, and both ReplicaSets.
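For example (assuming the kind-guilder context):
KUBECONFIG=~/.kube/config kubectl --context kind-guilder get deployments,replicasets --all-namespaces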
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
specialstuff deployment.apps/speciald 1/1 1 1 23m
NAMESPACE NAME DESIRED CURRENT READY AGE
commonstuff replicaset.apps/commond 1 1 1 23m
specialstuff replicaset.apps/speciald-76cdbb69b5 1 1 1 14s
Examine the APIService objects in the guilder cluster and find the one named v1090.example.my. It is broken because it refers to a Service object that we have not bothered to create.
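A hedged way to look at it (the Available condition should be False because the backing Service is missing):
KUBECONFIG=~/.kube/config kubectl --context kind-guilder get apiservice v1090.example.my
KUBECONFIG=~/.kube/config kubectl --context kind-guilder describe apiservice v1090.example.my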
See the crontab in the guilder cluster.
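For example, using the plural defined by the CRD above:
KUBECONFIG=~/.kube/config kubectl --context kind-guilder get crontabs -n specialstuff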
Examining the common workload in the guilder cluster, for example, will show that the replacement-style customization happened.
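One way to see it (assuming the kind-guilder context) is to dump the ReplicaSet and look at the container env, as in the excerpt below.
KUBECONFIG=~/.kube/config kubectl --context kind-guilder get rs -n commonstuff commond -o yaml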
...
containers:
- env:
- name: EXAMPLE_VAR
value: env is prod
image: library/httpd:2.4
imagePullPolicy: IfNotPresent
name: httpd
...
Check that the common workload on the florin cluster is working.
let tries=1
while ! curl http://localhost:8094 &> /dev/null; do
if (( tries >= 30 )); then
echo 'The common workload failed to come up on florin!' >&2
exit 10
fi
let tries=tries+1
sleep 10
done
curl http://localhost:8094
Check that the special workload on the guilder cluster is working.
let tries=1
while ! curl http://localhost:8097 &> /dev/null; do
if (( tries >= 30 )); then
echo 'The special workload failed to come up on guilder!' >&2
exit 10
fi
let tries=tries+1
sleep 10
done
curl http://localhost:8097
Check that the common workload on the guilder cluster is working.
let tries=1
while ! curl http://localhost:8096 &> /dev/null; do
if (( tries >= 30 )); then
echo 'The common workload failed to come up on guilder!' >&2
exit 10
fi
let tries=tries+1
sleep 10
done
curl http://localhost:8096
Stage 5#
Singleton reported state return#
The two EdgePlacement objects above assert that the expected number of executing copies of their matching workload objects is 1 and request return of reported state to the WDS when the number of executing copies is exactly 1.
For the common workload, that assertion is not correct: the number of executing copies should be 2. The assertion causes the actual number of executing copies to be reported. Check that the reported number is 2.
kubectl --kubeconfig $wmw_c_space_config get rs -n commonstuff commond -o yaml | grep 'kubestellar.io/executing-count: "2"' || { kubectl --kubeconfig $wmw_c_space_config get rs -n commonstuff commond -o yaml; false; }
For the special workload, the number of executing copies should be 1. Check that the reported number agrees.
kubectl --kubeconfig $wmw_s_space_config get deploy -n specialstuff speciald -o yaml | grep 'kubestellar.io/executing-count: "1"' || { kubectl --kubeconfig $wmw_s_space_config get deploy -n specialstuff speciald -o yaml; false; }
Look at the status section of the "speciald" Deployment and see that it has been filled in with the information from the guilder cluster. Current status might not be there yet. The following command waits for status that reports that there is a special workload pod "ready".
let count=1
while true; do
rsyaml=$(kubectl --kubeconfig $wmw_s_space_config get deploy -n specialstuff speciald -o yaml)
if grep 'readyReplicas: 1' <<<"$rsyaml"
then break
fi
echo ""
echo "Got:"
cat <<<"$rsyaml"
if (( count > 5 )); then
echo 'Giving up!' >&2
false
fi
sleep 15
let count=count+1
done
Status Summarization (aspirational)#
The status summarizer, driven by the EdgePlacement and SinglePlacementSlice for the special workload, creates a status summary object in the specialstuff namespace in the special workload workspace holding a summary of the corresponding Deployment objects. In this case there is just one such object, in the mailbox workspace for the guilder cluster.
The status summarizer, driven by the EdgePlacement and SinglePlacementSlice for the common workload, creates a status summary object in the commonstuff namespace in the common workload workspace holding a summary of the corresponding Deployment objects. Those are the commond Deployment objects in the two mailbox workspaces.
Teardown the environment#
To remove the example usage, delete the IMW, the WMWs, and the kind clusters by running the following commands:
rm florin-syncer.yaml guilder-syncer.yaml || true
kubectl ws root
kubectl delete workspace imw1
kubectl delete workspace $FLORIN_WS
kubectl delete workspace $GUILDER_WS
kubectl kubestellar remove wmw wmw-c
kubectl kubestellar remove wmw wmw-s
kind delete cluster --name florin
kind delete cluster --name guilder
Teardown of KubeStellar depends on which style of deployment was used.
Teardown bare processes#
The following command will stop whatever KubeStellar controllers are running.
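A sketch of that, assuming the controllers were launched as bare processes under the names used above:
pkill -f mailbox-controller || true
pkill -f kubestellar-where-resolver || true
pkill -f placement-translator || true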
Stop and uninstall KubeStellar and kcp with the following command:
Teardown Kubernetes workload#
With kubectl configured to manipulate the hosting cluster, the following command will remove the workload that is kcp and KubeStellar.
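A hedged sketch, assuming the core was deployed into a Kubernetes namespace named "kubestellar" as in the example above (the exact removal procedure may differ for your deployment):
kubectl delete namespace kubestellar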