Quickstart using OpenShift

How to deploy and use KUBESTELLAR on Red Hat OpenShift Kubernetes Clusters#

This guide will show how to:

  1. quickly deploy the KubeStellar Core component on an OpenShift cluster (ks-core) using helm,
  2. install the KubeStellar user commands and kubectl plugins on your computer with brew,
  3. retrieve the KubeStellar Core component kubeconfig,
  4. install the KubeStellar Syncer component on two edge OpenShift clusters (ks-edge-cluster1 and ks-edge-cluster2),
  5. deploy an example kubernetes workload to both edge OpenShift clusters from KubeStellar Core (ks-core),
  6. view the example kubernetes workload running on both edge OpenShift clusters (ks-edge-cluster1 and ks-edge-cluster2), and
  7. view the status of your deployment across both edge OpenShift clusters from KubeStellar Core (ks-core).

important: For this quickstart you will need to know how to use Kubernetes kubeconfig contexts to access multiple clusters. You can learn more about kubeconfig contexts in the Kubernetes documentation.
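
For example, assuming your kubeconfig lives at the default ~/.kube/config, you can list the contexts it defines and switch between them like this (the context name ks-core is the alias this guide sets up below):

# list all contexts defined in your kubeconfig; the current one is marked with '*'
KUBECONFIG=~/.kube/config kubectl config get-contexts

# switch the current context to another cluster
KUBECONFIG=~/.kube/config kubectl config use-context ks-core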

  • kubectl (version range expected: 1.24-1.26)

  • helm - to deploy the KubeStellar-core helm chart

  • brew - to install the KubeStellar user commands and kubectl plugins

  • oc - the OpenShift CLI, used below to log in to each of the OpenShift clusters

  • 3 Red Hat OpenShift clusters - we will refer to them as ks-core, ks-edge-cluster1, and ks-edge-cluster2 in this document
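
You can verify the tools are in place with a few quick checks; the version flags below are the standard ones for each CLI:

kubectl version --client   # expect a client version in the 1.24-1.26 range
helm version               # used to deploy the KubeStellar Core helm chart
brew --version             # used to install the user commands and kubectl plugins
oc version --client        # used for the oc login commands below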

important: delete any existing kubernetes contexts for these clusters that you may have created previously

KUBECONFIG=~/.kube/config kubectl config delete-context ks-core || true
KUBECONFIG=~/.kube/config kubectl config delete-context ks-edge-cluster1 || true
KUBECONFIG=~/.kube/config kubectl config delete-context ks-edge-cluster2 || true

important: alias the kubernetes contexts of the OpenShift clusters you provided so that they match the names used in this guide

oc login <ks-core OpenShift cluster>
CURRENT_CONTEXT=$(KUBECONFIG=~/.kube/config kubectl config current-context) \
    && KUBECONFIG=~/.kube/config kubectl config set-context ks-core \
    --namespace=$(echo "$CURRENT_CONTEXT" | awk -F '/' '{print $1}') \
    --cluster=$(echo "$CURRENT_CONTEXT" | awk -F '/' '{print $2}') \
    --user=$(echo "$CURRENT_CONTEXT" | awk -F '/' '{print $3"/"$2}')

oc login <ks-edge-cluster1 OpenShift cluster>
CURRENT_CONTEXT=$(KUBECONFIG=~/.kube/config kubectl config current-context) \
    && KUBECONFIG=~/.kube/config kubectl config set-context ks-edge-cluster1 \
    --namespace=$(echo "$CURRENT_CONTEXT" | awk -F '/' '{print $1}') \
    --cluster=$(echo "$CURRENT_CONTEXT" | awk -F '/' '{print $2}') \
    --user=$(echo "$CURRENT_CONTEXT" | awk -F '/' '{print $3"/"$2}')

oc login <ks-edge-cluster2 OpenShift cluster>
CURRENT_CONTEXT=$(KUBECONFIG=~/.kube/config kubectl config current-context) \
    && KUBECONFIG=~/.kube/config kubectl config set-context ks-edge-cluster2 \
    --namespace=$(echo "$CURRENT_CONTEXT" | awk -F '/' '{print $1}') \
    --cluster=$(echo "$CURRENT_CONTEXT" | awk -F '/' '{print $2}') \
    --user=$(echo "$CURRENT_CONTEXT" | awk -F '/' '{print $3"/"$2}')
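
After the three oc login / set-context pairs above, confirm that all three context aliases exist before continuing:

# you should see ks-core, ks-edge-cluster1, and ks-edge-cluster2 in the output
KUBECONFIG=~/.kube/config kubectl config get-contexts | grep -E 'ks-core|ks-edge-cluster1|ks-edge-cluster2'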

1. Deploy the KUBESTELLAR Core component#

deploy the KubeStellar Core components on the ks-core OpenShift cluster

KUBECONFIG=~/.kube/config kubectl config use-context ks-core  
KUBECONFIG=~/.kube/config kubectl create namespace kubestellar  

helm repo add kubestellar https://helm.kubestellar.io
helm repo update
KUBECONFIG=~/.kube/config helm install kubestellar/kubestellar-core \
  --set clusterType=OpenShift \
  --namespace kubestellar \
  --generate-name
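
If you want to confirm the chart was installed before waiting for the pods, you can list the release (its name was generated by --generate-name):

KUBECONFIG=~/.kube/config helm list --namespace kubestellar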

run the following to wait for KubeStellar to be ready to take requests:

echo -n 'Waiting for KubeStellar to be ready'
while ! KUBECONFIG=~/.kube/config kubectl exec $(KUBECONFIG=~/.kube/config kubectl get pod \
   --selector=app=kubestellar -o jsonpath='{.items[0].metadata.name}' -n kubestellar) \
   -n kubestellar -c init -- ls /home/kubestellar/ready &> /dev/null; do
   sleep 10
   echo -n "."
done

echo; echo; echo "KubeStellar is now ready to take requests"
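
At this point you can also look at the KubeStellar pods directly; the exact pod names depend on the chart version, but they should all reach the Running state:

KUBECONFIG=~/.kube/config kubectl get pods -n kubestellar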

Check the initialization log to see if there are any obvious errors:

KUBECONFIG=~/.kube/config kubectl config use-context ks-core  
kubectl logs \
  $(kubectl get pod --selector=app=kubestellar \
  -o jsonpath='{.items[0].metadata.name}' -n kubestellar) \
  -n kubestellar -c init

If there is nothing obvious in the log, please open a bug report and we can help you out.

2. Install KubeStellar's user commands and kubectl plugins#

if ! command -v brew; then
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    (echo; echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"') >> ~/.bashrc
    eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
fi
brew tap kubestellar/kubestellar
brew update
brew install kcp-cli
brew install kubestellar-cli


To remove the user commands and plugins later, reverse the steps above:

brew remove kubestellar-cli
brew remove kcp-cli
brew untap kubestellar/kubestellar

Alternatively, if you prefer not to use brew, the following commands will (a) download the kcp and KubeStellar executables into subdirectories of your current working directory and (b) add those directories to your PATH:

bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubestellar/main/bootstrap/bootstrap-kubestellar.sh) \
    --kubestellar-version v0.14.0 --deploy false

export PATH="$PATH:$(pwd)/kcp/bin:$(pwd)/kubestellar/bin"
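
Whichever install method you used, you can confirm the kubectl plugins are reachable by asking kubectl to list the plugins on your PATH (the binary names kubectl-ws and kubectl-kubestellar are an assumption based on the packages installed above):

# the list should include kubectl-ws and kubectl-kubestellar
kubectl plugin list | grep -E 'kubectl-(ws|kubestellar)'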

3. View your KUBESTELLAR Core Space environment#

Let's store the KubeStellar kubeconfig to a file we can reference later and then check out the Spaces KubeStellar created during installation:

KUBECONFIG=~/.kube/config kubectl --context ks-core get secrets kubestellar \
  -o jsonpath='{.data.external\.kubeconfig}' \
  -n kubestellar | base64 -d > ks-core.kubeconfig

KUBECONFIG=ks-core.kubeconfig kubectl ws --context root tree
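
The spaces root:imw1 and root:wmw1 are the defaults the rest of this guide relies on; as a quick sanity check (assuming the helm chart created them, as the later steps expect), you can filter the tree output for them:

# both imw1 (inventory) and wmw1 (workload management) should appear in the tree
KUBECONFIG=ks-core.kubeconfig kubectl ws --context root tree | grep -E 'imw1|wmw1'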

4. Install KUBESTELLAR Syncers on your Edge Clusters#

prepare KubeStellar Syncers for ks-edge-cluster1 and ks-edge-cluster2 with kubestellar prep-for-cluster, and then apply the files that kubestellar prep-for-cluster prepared for you

KUBECONFIG=ks-core.kubeconfig kubectl kubestellar prep-for-cluster --imw root:imw1 ks-edge-cluster1 \
  env=ks-edge-cluster1 \
  location-group=edge     #add ks-edge-cluster1 and ks-edge-cluster2 to the same group

KUBECONFIG=ks-core.kubeconfig kubectl kubestellar prep-for-cluster --imw root:imw1 ks-edge-cluster2 \
  env=ks-edge-cluster2 \
  location-group=edge     #add ks-edge-cluster1 and ks-edge-cluster2 to the same group
#apply ks-edge-cluster1 syncer
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 apply -f ks-edge-cluster1-syncer.yaml
sleep 3
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get pods -A | grep kubestellar  #check if syncer deployed to ks-edge-cluster1 correctly

#apply ks-edge-cluster2 syncer
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 apply -f ks-edge-cluster2-syncer.yaml
sleep 3
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get pods -A | grep kubestellar  #check if syncer deployed to ks-edge-cluster2 correctly
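
Optionally, instead of eyeballing the grep output, you can wait for the syncer pods to become Ready on both edge clusters. This sketch assumes the namespace created by kubestellar prep-for-cluster is the only namespace whose name starts with kubestellar- (the same assumption the troubleshooting commands later in this guide make):

for ctx in ks-edge-cluster1 ks-edge-cluster2; do
  # find the generated kubestellar-syncer namespace on this edge cluster
  ns=$(KUBECONFIG=~/.kube/config kubectl --context $ctx get namespaces \
      -o custom-columns=:metadata.name | grep 'kubestellar-')
  # block until every pod in that namespace reports Ready (or time out after 2 minutes)
  KUBECONFIG=~/.kube/config kubectl --context $ctx wait pod --all -n $ns \
      --for=condition=Ready --timeout=120s
done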

5. Deploy an Apache Web Server to ks-edge-cluster1 and ks-edge-cluster2#

KubeStellar's helm chart automatically creates a Workload Management Workspace (WMW) for you to store kubernetes workload descriptions and KubeStellar control objects in. The automatically created WMW is at root:wmw1.

Create an EdgePlacement control object to direct where your workload runs using the 'location-group=edge' label selector. This label selector's value ensures your workload is directed to both clusters, as they were labeled with 'location-group=edge' when you issued the 'kubestellar prep-for-cluster' command above.

This EdgePlacement includes downsync of a RoleBinding that grants privileges that let the httpd pod run in an OpenShift cluster.

In the root:wmw1 workspace create the following EdgePlacement object:

KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1

KUBECONFIG=ks-core.kubeconfig kubectl apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: my-first-edge-placement
spec:
  locationSelectors:
  - matchLabels: {"location-group":"edge"}
  downsync:
  - apiGroup: ""
    resources: [ configmaps ]
    namespaces: [ my-namespace ]
    objectNames: [ "*" ]
  - apiGroup: apps
    resources: [ deployments ]
    namespaces: [ my-namespace ]
    objectNames: [ my-first-kubestellar-deployment ]
  - apiGroup: apis.kcp.io
    resources: [ apibindings ]
    namespaceSelectors: []
    objectNames: [ "bind-kubernetes", "bind-apps" ]
  - apiGroup: rbac.authorization.k8s.io
    resources: [ rolebindings ]
    namespaces: [ my-namespace ]
    objectNames: [ let-it-be ]
EOF

check that your EdgePlacement was applied in the root:wmw1 space on ks-core correctly

KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl get edgeplacements -n kubestellar -o yaml

Now, apply the HTTP server workload definition into the WMW on ks-core. Note that the namespace and object names below match those listed in the downsync rules of the EdgePlacement (my-first-edge-placement) object created above.

KUBECONFIG=ks-core.kubeconfig kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: my-namespace
  name: httpd-htdocs
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <body>
        This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
      </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: my-namespace
  name: my-first-kubestellar-deployment
spec:
  selector: {matchLabels: {app: common} }
  template:
    metadata:
      labels: {app: common}
    spec:
      containers:
      - name: httpd
        image: library/httpd:2.4
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        volumeMounts:
        - name: htdocs
          readOnly: true
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: htdocs
        configMap:
          name: httpd-htdocs
          optional: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: let-it-be
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
- kind: ServiceAccount
  name: default
  namespace: my-namespace
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: common
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route
  namespace: my-namespace
spec:
  port:
    targetPort: 80
  to:
    kind: Service
    name: my-service
EOF

check that your configmap, deployment, service, and route were applied to the my-namespace namespace on ks-core correctly

KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl get deployments/my-first-kubestellar-deployment -n my-namespace -o yaml
KUBECONFIG=ks-core.kubeconfig kubectl get deployments,cm,service,route -n my-namespace

6. View the Apache Web Server running on ks-edge-cluster1 and ks-edge-cluster2#

Now, let's check that the deployment was created on the ks-edge-cluster1 OpenShift cluster (it may take up to 30 seconds to appear):

KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get deployments -A

you should see output including:

NAMESPACE      NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
my-namespace   my-first-kubestellar-deployment   1/1     1            1           6m48s

And, check the ks-edge-cluster2 OpenShift cluster for the same:

KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get deployments -A

you should see output including:

NAMESPACE      NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
my-namespace   my-first-kubestellar-deployment   1/1     1            1           7m54s

Finally, let's check that the workload is working in both clusters.

For ks-edge-cluster1:

while [[ $(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get pod \
  -l "app=common" -n my-namespace -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do 
    sleep 5; 
  done;
curl http://$(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 \
  get route/my-route -n my-namespace -o jsonpath='{.spec.host}')

you should see the output:

<!DOCTYPE html>
<html>
  <body>
    This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
  </body>
</html>

For ks-edge-cluster2:

while [[ $(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get pod \
  -l "app=common" -n my-namespace -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do 
    sleep 5; 
  done;
curl http://$(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 \
  get route/my-route -n my-namespace -o jsonpath='{.spec.host}')

you should see the output:

<!DOCTYPE html>
<html>
  <body>
    This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
  </body>
</html>


If you are unable to see the namespace 'my-namespace' or the deployment 'my-first-kubestellar-deployment', you can view the logs for the KubeStellar Syncer on the ks-edge-cluster1 OpenShift cluster:

KUBECONFIG=~/.kube/config kubectl config use-context ks-edge-cluster1
ks_ns_edge_cluster1=$(KUBECONFIG=~/.kube/config kubectl get namespaces \
    -o custom-columns=:metadata.name | grep 'kubestellar-')
KUBECONFIG=~/.kube/config kubectl logs pod/$(kubectl get pods -n $ks_ns_edge_cluster1 \
    -o custom-columns=:metadata.name | grep 'kubestellar-') -n $ks_ns_edge_cluster1

and on the ks-edge-cluster2 OpenShift cluster:

KUBECONFIG=~/.kube/config kubectl config use-context ks-edge-cluster2
ks_ns_edge_cluster2=$(KUBECONFIG=~/.kube/config kubectl get namespaces \
    -o custom-columns=:metadata.name | grep 'kubestellar-')
KUBECONFIG=~/.kube/config kubectl logs pod/$(kubectl get pods -n $ks_ns_edge_cluster2 \
    -o custom-columns=:metadata.name | grep 'kubestellar-') -n $ks_ns_edge_cluster2

7. Check the status of your Apache Server on ks-edge-cluster1 and ks-edge-cluster2#

TODO

What's next...

  • how to upsync a resource
  • how to create, but not overwrite/update, a synchronized resource


How to use an existing KUBESTELLAR environment#

1. Install KubeStellar's user commands and kubectl plugins#

if ! command -v brew; then
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    (echo; echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"') >> ~/.bashrc
    eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
fi
brew tap kubestellar/kubestellar
brew update
brew install kcp-cli
brew install kubestellar-cli


To remove the user commands and plugins later, reverse the steps above:

brew remove kubestellar-cli
brew remove kcp-cli
brew untap kubestellar/kubestellar

Alternatively, if you prefer not to use brew, the following commands will (a) download the kcp and KubeStellar executables into subdirectories of your current working directory and (b) add those directories to your PATH:

bash <(curl -s https://raw.githubusercontent.com/kubestellar/kubestellar/main/bootstrap/bootstrap-kubestellar.sh) \
    --kubestellar-version v0.14.0 --deploy false

export PATH="$PATH:$(pwd)/kcp/bin:$(pwd)/kubestellar/bin"

2. View your KUBESTELLAR Core Space environment#

Let's store the KubeStellar kubeconfig to a file we can reference later and then check out the Spaces KubeStellar created during installation:

KUBECONFIG=~/.kube/config kubectl --context ks-core get secrets kubestellar \
  -o jsonpath='{.data.external\.kubeconfig}' \
  -n kubestellar | base64 -d > ks-core.kubeconfig

KUBECONFIG=ks-core.kubeconfig kubectl ws tree

3. Deploy an Apache Web Server to ks-edge-cluster1 and ks-edge-cluster2#

KubeStellar's helm chart automatically creates a Workload Management Workspace (WMW) for you to store kubernetes workload descriptions and KubeStellar control objects in. The automatically created WMW is at root:wmw1.

Create an EdgePlacement control object to direct where your workload runs using the 'location-group=edge' label selector. This label selector's value ensures your workload is directed to both clusters, as they were labeled with 'location-group=edge' when you issued the 'kubestellar prep-for-cluster' command above.

This EdgePlacement includes downsync of a RoleBinding that grants privileges that let the httpd pod run in an OpenShift cluster.

In the root:wmw1 workspace create the following EdgePlacement object:

KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1

KUBECONFIG=ks-core.kubeconfig kubectl apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: my-first-edge-placement
spec:
  locationSelectors:
  - matchLabels: {"location-group":"edge"}
  downsync:
  - apiGroup: ""
    resources: [ configmaps ]
    namespaces: [ my-namespace ]
    objectNames: [ "*" ]
  - apiGroup: apps
    resources: [ deployments ]
    namespaces: [ my-namespace ]
    objectNames: [ my-first-kubestellar-deployment ]
  - apiGroup: apis.kcp.io
    resources: [ apibindings ]
    namespaceSelectors: []
    objectNames: [ "bind-kubernetes", "bind-apps" ]
  - apiGroup: rbac.authorization.k8s.io
    resources: [ rolebindings ]
    namespaces: [ my-namespace ]
    objectNames: [ let-it-be ]
EOF

check that your EdgePlacement was applied in the root:wmw1 space on ks-core correctly

KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl get edgeplacements -n kubestellar -o yaml

Now, apply the HTTP server workload definition into the WMW on ks-core. Note that the namespace and object names below match those listed in the downsync rules of the EdgePlacement (my-first-edge-placement) object created above.

KUBECONFIG=ks-core.kubeconfig kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: my-namespace
  name: httpd-htdocs
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <body>
        This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
      </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: my-namespace
  name: my-first-kubestellar-deployment
spec:
  selector: {matchLabels: {app: common} }
  template:
    metadata:
      labels: {app: common}
    spec:
      containers:
      - name: httpd
        image: library/httpd:2.4
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        volumeMounts:
        - name: htdocs
          readOnly: true
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: htdocs
        configMap:
          name: httpd-htdocs
          optional: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: let-it-be
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
- kind: ServiceAccount
  name: default
  namespace: my-namespace
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: common
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route
  namespace: my-namespace
spec:
  port:
    targetPort: 80
  to:
    kind: Service
    name: my-service
EOF

check that your configmap, deployment, service, and route were applied to the my-namespace namespace on ks-core correctly

KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl get deployments/my-first-kubestellar-deployment -n my-namespace -o yaml
KUBECONFIG=ks-core.kubeconfig kubectl get deployments,cm,service,route -n my-namespace

4. View the Apache Web Server running on ks-edge-cluster1 and ks-edge-cluster2#

Now, let's check that the deployment was created on the ks-edge-cluster1 OpenShift cluster (it may take up to 30 seconds to appear):

KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get deployments -A

you should see output including:

NAMESPACE      NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
my-namespace   my-first-kubestellar-deployment   1/1     1            1           6m48s

And, check the ks-edge-cluster2 OpenShift cluster for the same:

KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get deployments -A

you should see output including:

NAMESPACE      NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
my-namespace   my-first-kubestellar-deployment   1/1     1            1           7m54s

Finally, let's check that the workload is working in both clusters.

For ks-edge-cluster1:

while [[ $(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get pod \
  -l "app=common" -n my-namespace -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do 
    sleep 5; 
  done;
curl http://$(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 \
  get route/my-route -n my-namespace -o jsonpath='{.spec.host}')

you should see the output:

<!DOCTYPE html>
<html>
  <body>
    This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
  </body>
</html>

For ks-edge-cluster2:

while [[ $(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get pod \
  -l "app=common" -n my-namespace -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do 
    sleep 5; 
  done;
curl http://$(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 \
  get route/my-route -n my-namespace -o jsonpath='{.spec.host}')

you should see the output:

<!DOCTYPE html>
<html>
  <body>
    This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
  </body>
</html>


If you are unable to see the namespace 'my-namespace' or the deployment 'my-first-kubestellar-deployment', you can view the logs for the KubeStellar Syncer on the ks-edge-cluster1 OpenShift cluster:

KUBECONFIG=~/.kube/config kubectl config use-context ks-edge-cluster1
ks_ns_edge_cluster1=$(KUBECONFIG=~/.kube/config kubectl get namespaces \
    -o custom-columns=:metadata.name | grep 'kubestellar-')
KUBECONFIG=~/.kube/config kubectl logs pod/$(kubectl get pods -n $ks_ns_edge_cluster1 \
    -o custom-columns=:metadata.name | grep 'kubestellar-') -n $ks_ns_edge_cluster1

and on the ks-edge-cluster2 OpenShift cluster:

KUBECONFIG=~/.kube/config kubectl config use-context ks-edge-cluster2
ks_ns_edge_cluster2=$(KUBECONFIG=~/.kube/config kubectl get namespaces \
    -o custom-columns=:metadata.name | grep 'kubestellar-')
KUBECONFIG=~/.kube/config kubectl logs pod/$(kubectl get pods -n $ks_ns_edge_cluster2 \
    -o custom-columns=:metadata.name | grep 'kubestellar-') -n $ks_ns_edge_cluster2

5. Check the status of your Apache Server on ks-edge-cluster1 and ks-edge-cluster2#

Every object subject to downsync or upsync has a full per-WEC (workload execution cluster) copy in the core. These copies include reported state from the WECs. If you are using release 0.10 or later of KubeStellar then you can list these copies of your httpd Deployment objects with the following command.

kubestellar-list-syncing-objects --api-group apps --api-kind Deployment

you should see output like the following, with one copy per edge cluster:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ... (lots of other details) ...
  name: my-first-kubestellar-deployment
  namespace: my-namespace
spec:
  ... (the spec) ...
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-10-27T07:00:19Z"
    lastUpdateTime: "2023-10-27T07:00:19Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2023-10-27T07:00:19Z"
    lastUpdateTime: "2023-10-27T07:00:19Z"
    message: ReplicaSet "my-first-kubestellar-deployment-76f6fc4cfc" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 618
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ... (lots of other details) ...
  name: my-first-kubestellar-deployment
  namespace: my-namespace
spec:
  ... (the spec) ...
status:
  ... (another happy status) ...