KubeStellar Where Resolver
Required Packages for running and using KubeStellar:
You will need the following tools to deploy and use KubeStellar. Select the tab for your environment for suggested commands to install them.
- curl (omitted from most OS-specific instructions)
- kubectl (version range expected: 1.23-1.25)
- helm (required when deploying as workload)
If you intend to build KubeStellar from source you will also need:
- go (version 1.19 or later required; 1.19 recommended): https://go.dev/dl
On macOS, install kubectl with Homebrew:
brew install kubectl
To install Go on macOS:
- Download the go 1.19 package from https://go.dev/dl#go1.19, being sure to get the correct one for your architecture.
- Open the package file you downloaded and follow the prompts to install Go. The package installs the Go distribution to /usr/local/go and should put the /usr/local/go/bin directory in your PATH environment variable. You may need to restart any open Terminal sessions for the change to take effect.
- Verify that you've installed Go by opening a command prompt and typing:
$ go version
Confirm that the command prints the desired installed version of Go.
On Debian/Ubuntu, install kubectl and helm:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && chmod +x kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
To install Go on Linux, visit https://go.dev/doc/install for the latest instructions, or:
- Remove any previous Go installation by deleting the /usr/local/go folder (if it exists), then extract the archive you just downloaded into /usr/local, creating a fresh Go tree in /usr/local/go:
$ rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz
(You may need to run the command as root or through sudo.)
Do not untar the archive into an existing /usr/local/go tree. This is known to produce broken Go installations.
- Add /usr/local/go/bin to the PATH environment variable. You can do this by adding the following line to your $HOME/.profile or /etc/profile (for a system-wide installation):
export PATH=$PATH:/usr/local/go/bin
Note: Changes made to a profile file may not apply until the next time you log into your computer. To apply the changes immediately, run the shell commands directly or execute them from the profile using a command such as source $HOME/.profile.
- Verify that you've installed Go by opening a command prompt and typing:
$ go version
Confirm that the command prints the installed version of Go.
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
# for ARM64 / aarch64
[ $(uname -m) = aarch64 ] && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl" && chmod +x kubectl && mv ./kubectl /usr/local/bin/kubectl
On Windows, first install Chocolatey from an administrator PowerShell, then install kubectl and helm:
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
curl.exe -LO "https://dl.k8s.io/release/v1.27.2/bin/windows/amd64/kubectl.exe"
choco install kubernetes-helm
To install Go on Windows, visit https://go.dev/doc/install for the latest instructions, or:
- Download the go 1.19 MSI package from https://go.dev/dl#go1.19, being sure to get the correct one for your architecture.
- Open the MSI file you downloaded and follow the prompts to install Go. By default, the installer will install Go to Program Files or Program Files (x86). You can change the location as needed. After installing, you will need to close and reopen any open command prompts so that changes to the environment made by the installer are reflected at the command prompt.
- Verify that you've installed Go:
- In Windows, click the Start menu.
- In the menu's search box, type cmd, then press the Enter key.
- In the Command Prompt window that appears, type the following command:
$ go version
- Confirm that the command prints the installed version of Go.
How to install pre-requisites for a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution
(Tested on an Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz with 32GB RAM, a 64-bit operating system, x64-based processor, using Windows 11 Enterprise)
1. If you're using a VPN, turn it off
2. Install Ubuntu into WSL
2.0 If WSL is not yet installed, open a PowerShell administrator window and run the following:
2.1 Reboot your system.
2.2 In a Windows command terminal, run the following to list all the Linux distributions that are available online:
2.3 Select a Linux distribution and install it into WSL.
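The commands for steps 2.0 through 2.3 were omitted above; assuming a standard setup, they are the stock WSL CLI invocations:

```shell
# Step 2.0 (PowerShell, as Administrator): install WSL itself
wsl --install
# Step 2.2: list the Linux distributions available online
wsl --list --online
# Step 2.3: install the chosen distribution, e.g. Ubuntu 22.04
wsl --install -d Ubuntu-22.04
```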
You will see something like:
Installing, this may take a few minutes...
Please create a default UNIX user account. The username does not need to match your Windows username.
For more information visit: https://aka.ms/wslusers
Enter new UNIX username:
2.4 Enter your new username and password at the prompts, and you will eventually see something like:
2.5 Click on the Windows "Start" icon and type in the name of your distribution into the search box. Your new linux distribution should appear as a local "App". You can pin it to the Windows task bar or to Start for your future convenience.
Start a VM using your distribution by clicking on the App.
3. Install pre-requisites into your new VM
3.1 Update and apply apt-get packages
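The command for this step was omitted; presumably it is the usual pair:

```shell
sudo apt-get update && sudo apt-get -y upgrade
```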
3.2 Install golang
wget https://golang.org/dl/go1.19.linux-amd64.tar.gz
sudo tar -zxvf go1.19.linux-amd64.tar.gz -C /usr/local
echo export GOROOT=/usr/local/go | sudo tee -a /etc/profile
echo export PATH="$PATH:/usr/local/go/bin" | sudo tee -a /etc/profile
source /etc/profile
go version
3.3 Install ko (but don't do the ko set action step)
3.4 Install gcc
Either sudo apt-get install gcc or sudo apt-get install build-essential will work.
3.5 Install make (if you installed build-essential this may already be installed)
3.6 Install jq
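The commands for steps 3.3 through 3.6 were omitted above. A sketch, assuming ko is installed via go install and the rest via apt:

```shell
# 3.3 Install ko (skip any "ko set action" step)
go install github.com/google/ko@latest
# 3.4 / 3.5 Install gcc and make (build-essential provides both)
sudo apt-get install -y build-essential
# 3.6 Install jq
sudo apt-get install -y jq
```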
3.7 Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
3.8 install helm (required when deploying as workload)
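The helm command was omitted here; one common approach is Helm's official installer script:

```shell
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```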
Required Packages for the example usage:
You will need the following tools for the example usage of KubeStellar in this quickstart. Select the tab for your environment for suggested commands to install them.
On Ubuntu, install docker:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
On Debian, install docker:
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# Install packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
On RHEL-family distributions, install docker:
yum -y install epel-release && yum -y install docker && systemctl enable --now docker && systemctl status docker
Install kind:
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.14.0/kind-linux-arm64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
How to install docker and kind into a Windows Subsystem for Linux (WSL) environment using an Ubuntu 22.04.01 distribution
1.0 Start a VM terminal by clicking on the App you configured using the instructions in the General pre-requisites described above.
2.0 Install docker
The installation instructions from docker are not sufficient to get docker working with WSL.
2.1 Follow the instructions at https://docs.docker.com/engine/install/ubuntu/ to install docker.
Here are some additional steps you will need to take:
2.2 Ensure that /etc/wsl.conf is configured so that systemd will run on booting.
If /etc/wsl.conf does not contain a [boot] section with systemd=true, edit /etc/wsl.conf to add it.
2.3 Edit the sudoers configuration: it is strongly recommended not to add directives directly to /etc/sudoers, but instead to put them in files in /etc/sudoers.d, which are auto-included; so create or modify a file there.
2.4 Add your user to the docker group
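A sketch of steps 2.2 through 2.4; the wsl.conf contents are from the text above, while the specific sudoers rule is an illustrative assumption (adjust it to your own policy):

```shell
# 2.2 Make sure /etc/wsl.conf enables systemd on boot:
#     [boot]
#     systemd=true
# 2.3 Add a sudoers drop-in rather than editing /etc/sudoers directly
#     (example rule allowing passwordless dockerd):
echo "$USER ALL=(ALL) NOPASSWD: /usr/bin/dockerd" | sudo tee /etc/sudoers.d/docker
sudo chmod 0440 /etc/sudoers.d/docker
# 2.4 Add your user to the docker group (takes effect on next login):
sudo usermod -aG docker "$USER"
```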
2.5 If dockerd is already running, then stop it and restart it as follows (note: the new dockerd instance will be running in the foreground):
2.5.1 If you encounter the iptables issue described at https://github.com/microsoft/WSL/issues/6655, the following commands will fix it:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo dockerd &
3. You will now need to open new terminals to access the VM since dockerd is running in the foreground of this terminal
3.1 In your new terminal, install kind
This document is 'docs-ecutable': you can 'run' this document, just like we do in our testing, on your local environment.
Usage of the Where Resolver
The Where Resolver needs two Kubernetes client configurations. The first is needed to access the APIExport view of the edge.kubestellar.io API group; it must point to the edge service provider workspace that has this APIExport and be authorized to read its view for edge APIs. The second is needed to maintain SinglePlacementSlice objects in all workload management workspaces; this should be a client config that is able to read/write in all clusters. For example, the kubeconfig context named base in the kubeconfig created by kcp start satisfies these requirements.
The command line flags, beyond the basics, are as follows.
--espw-cluster string The name of the kubeconfig cluster to use for access to the edge service provider workspace
--espw-context string The name of the kubeconfig context to use for access to the edge service provider workspace
--espw-kubeconfig string Path to the kubeconfig file to use for access to the edge service provider workspace
--espw-user string The name of the kubeconfig user to use for access to the edge service provider workspace
--base-cluster string The name of the kubeconfig cluster to use for access to all logical clusters as kcp-admin (default "base")
--base-context string The name of the kubeconfig context to use for access to all logical clusters as kcp-admin
--base-kubeconfig string Path to the kubeconfig file to use for access to all logical clusters as kcp-admin
--base-user string The name of the kubeconfig user to use for access to all logical clusters as kcp-admin (default "kcp-admin")
Steps to try the Where Resolver
Pull the kcp source code, build kcp, and start kcp
At this point you should have cloned the KubeStellar repo and cd'ed into it as directed above.
Clone the v0.11.0 branch of the kcp source.
Build the kubectl-ws binary and include it in $PATH.
Run kcp (kcp will spit out tons of information and stay running in this terminal window).
Set your KUBECONFIG environment variable to name the kubernetes client config file that kcp generates.
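Put together, those steps look roughly like the following; the repository URL and the .kcp/admin.kubeconfig path are the usual kcp conventions, but verify them against the kcp v0.11 documentation:

```shell
git clone -b v0.11.0 https://github.com/kcp-dev/kcp.git
cd kcp
make build                       # builds kcp and the kubectl-ws plugin into ./bin
export PATH="$(pwd)/bin:$PATH"
kcp start                        # keeps running here; continue in a second terminal
# in the second terminal, from the kcp directory:
export KUBECONFIG="$(pwd)/.kcp/admin.kubeconfig"
```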
Build and initialize KubeStellar
First build KubeStellar and add the result to your $PATH. Next, use the command that makes sure the Edge Service Provider Workspace (ESPW), which is root:espw, and the TMC provider workspace (root:compute) are properly set up.
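A sketch of those two steps, assuming the build is make-based and the init subcommand is named as in contemporary KubeStellar releases (verify both against your checkout):

```shell
make build                       # from the root of the KubeStellar repo
export PATH="$(pwd)/bin:$PATH"
kubestellar init                 # assumed name; ensures root:espw and root:compute are set up
```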
Create the Workload Management Workspace (WMW) and bind it to the ESPW APIs
Use the user home workspace (~) as the workload management workspace (WMW). Bind the APIs.
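Done by hand, this amounts to entering the home workspace and binding the edge API group; the APIBinding below is illustrative (a helper command in your release may do the same thing):

```shell
kubectl ws \~
kubectl apply -f - <<EOF
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: bind-edge
spec:
  reference:
    export:
      path: root:espw
      name: edge.kubestellar.io
EOF
```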
Run the KubeStellar Where Resolver against the ESPW
Go to the root:espw workspace and run the Where Resolver.
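Assuming the binary built above is named kubestellar-where-resolver (check your bin directory for the actual name), that is:

```shell
kubectl ws root:espw
kubestellar-where-resolver &
```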
The outputs from the Where Resolver should be similar to:
I0605 10:53:00.156100 29786 main.go:212] "Found APIExport view" exportName="edge.kubestellar.io" serverURL="https://192.168.1.13:6443/services/apiexport/jxch2kyb3c1h6bac/edge.kubestellar.io"
I0605 10:53:00.157874 29786 main.go:212] "Found APIExport view" exportName="scheduling.kcp.io" serverURL="https://192.168.1.13:6443/services/apiexport/root/scheduling.kcp.io"
I0605 10:53:00.159242 29786 main.go:212] "Found APIExport view" exportName="workload.kcp.io" serverURL="https://192.168.1.13:6443/services/apiexport/root/workload.kcp.io"
I0605 10:53:00.261128 29786 controller.go:201] "starting controller" controller="where-resolver"
Create the Inventory Management Workspace (IMW) and populate it with locations and synctargets
Use workspace root:compute as the Inventory Management Workspace (IMW).
Create two Locations and two SyncTargets.
kubectl create -f config/samples/location_prod.yaml
kubectl create -f config/samples/location_dev.yaml
kubectl create -f config/samples/synctarget_prod.yaml
kubectl create -f config/samples/synctarget_dev.yaml
sleep 5
Note that kcp automatically creates a Location named default, so there are 3 Locations and 2 SyncTargets in root:compute.
NAME RESOURCE AVAILABLE INSTANCES LABELS AGE
location.edge.kubestellar.io/default synctargets 0 2 2m12s
location.edge.kubestellar.io/dev synctargets 0 1 2m39s
location.edge.kubestellar.io/prod synctargets 0 1 3m13s
NAME AGE
synctarget.edge.kubestellar.io/dev 110s
synctarget.edge.kubestellar.io/prod 2m12s
Create some EdgePlacements in the WMW
Go to the Workload Management Workspace (WMW) and create an EdgePlacement named all2all.
The Where Resolver maintains a SinglePlacementSlice for an EdgePlacement in the same workspace.
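An all2all EdgePlacement might look like the following; the spec fields here are an assumption based on the v2alpha1 API, with an empty label selector intended to match every Location:

```yaml
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: all2all
spec:
  locationSelectors:
  - matchLabels: {}
```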
apiVersion: edge.kubestellar.io/v2alpha1
destinations:
- cluster: 1yotsgod0d2p3xa5
  locationName: prod
  syncTargetName: prod
  syncTargetUID: 13841ffd-33f2-4cf4-9114-6156f73aa5c8
- cluster: 1yotsgod0d2p3xa5
  locationName: dev
  syncTargetName: dev
  syncTargetUID: ea5492ec-44af-4173-a4ca-9c5cd59afcb1
- cluster: 1yotsgod0d2p3xa5
  locationName: default
  syncTargetName: dev
  syncTargetUID: ea5492ec-44af-4173-a4ca-9c5cd59afcb1
- cluster: 1yotsgod0d2p3xa5
  locationName: default
  syncTargetName: prod
  syncTargetUID: 13841ffd-33f2-4cf4-9114-6156f73aa5c8
kind: SinglePlacementSlice
metadata:
  annotations:
    kcp.io/cluster: kvdk2spgmbix
  creationTimestamp: "2023-06-05T14:55:20Z"
  generation: 1
  name: all2all
  ownerReferences:
  - apiVersion: edge.kubestellar.io/v2alpha1
    kind: EdgePlacement
    name: all2all
    uid: 31915018-6a25-4f01-943e-b8a0a0ed35ba
  resourceVersion: "875"
  uid: a2b8224d-5feb-40a1-adb2-67c07965f13b
all2all selects all 3 of the Locations in root:compute.
Create a more specific EdgePlacement which selects Locations labeled by env: dev.
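For example (again, the spec fields are an assumption based on the v2alpha1 API):

```yaml
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: dev
spec:
  locationSelectors:
  - matchLabels:
      env: dev
```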
The corresponding SinglePlacementSlice has a shorter list of destinations:
apiVersion: edge.kubestellar.io/v2alpha1
destinations:
- cluster: 1yotsgod0d2p3xa5
  locationName: dev
  syncTargetName: dev
  syncTargetUID: ea5492ec-44af-4173-a4ca-9c5cd59afcb1
kind: SinglePlacementSlice
metadata:
  annotations:
    kcp.io/cluster: kvdk2spgmbix
  creationTimestamp: "2023-06-05T14:57:00Z"
  generation: 1
  name: dev
  ownerReferences:
  - apiVersion: edge.kubestellar.io/v2alpha1
    kind: EdgePlacement
    name: dev
    uid: 1ac4b7f5-5521-4b5a-a0fa-cc2ec87b458b
  resourceVersion: "877"
  uid: c9c0c2fc-d721-4c73-9788-e10711bad23a
Feel free to change the Locations, SyncTargets, and EdgePlacements and see how the Where Resolver reacts.
Your next step is to deliver a workload to a mailbox (that represents an edge location). Go here to take the next step... (TBD)
Teardown the environment
To remove the example usage, delete the IMW and WMW and the kind clusters by running the following commands:
rm florin-syncer.yaml guilder-syncer.yaml || true
kubectl ws root
kubectl delete workspace imw1
kubectl kubestellar remove wmw wmw1
kind delete cluster --name florin
kind delete cluster --name guilder
Teardown of KubeStellar depends on which style of deployment was used.
Teardown bare processes
The following command will stop whatever KubeStellar controllers are running. Stop and uninstall KubeStellar and the space provider with the following command:
Teardown Kubernetes workload
With kubectl configured to manipulate the hosting cluster, the following command will remove the workload that is the space provider and KubeStellar.