Quickstart using Kind
How to deploy and use KUBESTELLAR on Kind Kubernetes Clusters#
This guide will show how to:
- quickly deploy the KubeStellar Core component on a Kind cluster using helm (ks-core),
- install the KubeStellar user commands and kubectl plugins on your computer with brew,
- retrieve the KubeStellar Core component kubeconfig,
- install the KubeStellar Syncer component on two edge Kind clusters (ks-edge-cluster1 and ks-edge-cluster2),
- deploy an example kubernetes workload to both edge Kind clusters from KubeStellar Core (ks-core),
- view the example kubernetes workload running on two edge Kind clusters (ks-edge-cluster1 and ks-edge-cluster2), and
- view the status of your deployment across both edge Kind clusters from KubeStellar Core (ks-core).
important: For this quickstart you will need to know how to use kubernetes' kubeconfig contexts to access multiple clusters. You can learn more about kubeconfig contexts in the Kubernetes documentation.
- kubectl (version range expected: 1.24-1.26)
- helm - to deploy the KubeStellar-core helm chart
- brew - to install the KubeStellar user commands and kubectl plugins
- Kind - to create a few small kubernetes clusters
- 3 Kind clusters configured as follows
create the ks-core kind cluster
KUBECONFIG=~/.kube/config kind create cluster --name ks-core --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 443
    hostPort: 1119
    protocol: TCP
EOF
Be sure to create an ingress controller with SSL passthrough to ks-core. This is a special requirement for Kind that allows access to the KubeStellar core running on ks-core.
KUBECONFIG=~/.kube/config kubectl \
create -f https://raw.githubusercontent.com/kubestellar/kubestellar/main/example/kind-nginx-ingress-with-SSL-passthrough.yaml
sleep 20
KUBECONFIG=~/.kube/config kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s
create the ks-edge-cluster1 kind cluster
KUBECONFIG=~/.kube/config kind create cluster --name ks-edge-cluster1 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8081
    hostPort: 8094
EOF
create the ks-edge-cluster2 kind cluster
KUBECONFIG=~/.kube/config kind create cluster --name ks-edge-cluster2 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 8081
    hostPort: 8096
  - containerPort: 8082
    hostPort: 8097
EOF
important: delete any existing kubernetes contexts for these clusters that you may have created previously
KUBECONFIG=~/.kube/config kubectl config delete-context ks-core || true
KUBECONFIG=~/.kube/config kubectl config delete-context ks-edge-cluster1 || true
KUBECONFIG=~/.kube/config kubectl config delete-context ks-edge-cluster2 || true
important: rename the kubernetes contexts of the Kind clusters to match their use in this guide
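Kind names each context kind-<cluster name>, so the renames look like this:
KUBECONFIG=~/.kube/config kubectl config rename-context kind-ks-core ks-core
KUBECONFIG=~/.kube/config kubectl config rename-context kind-ks-edge-cluster1 ks-edge-cluster1
KUBECONFIG=~/.kube/config kubectl config rename-context kind-ks-edge-cluster2 ks-edge-cluster2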
if you apply the ingress and then receive an error while waiting:
error: no matching resources found
this might mean that you did not wait long enough before issuing the check command. Simply try the check command again.
on Debian, the syncers on ks-edge-cluster1 and ks-edge-cluster2 will not resolve the kubestellar.core hostname
You have 2 choices:
1. Use the value of hostname -f instead of kubestellar.core as your "EXTERNAL_HOSTNAME" in "Step 1: Deploy the KubeStellar Core Component", or
2. Just before step 6 in the KubeStellar User Quickstart for Kind, do the following:
Add the IP/domain to /etc/hosts of the cluster1/cluster2 containers (replace with the appropriate IP address):
docker exec -it $(docker ps | grep ks-edge-cluster1 | cut -d " " -f 1) \
sh -c "echo '192.168.122.144 kubestellar.core' >> /etc/hosts"
docker exec -it $(docker ps | grep ks-edge-cluster2 | cut -d " " -f 1) \
sh -c "echo '192.168.122.144 kubestellar.core' >> /etc/hosts"
Edit the coredns ConfigMap for cluster1 and cluster2 (see the added lines in the example below):
KUBECONFIG=~/.kube/config kubectl edit cm coredns -n kube-system --context=ks-edge-cluster1
KUBECONFIG=~/.kube/config kubectl edit cm coredns -n kube-system --context=ks-edge-cluster2
add the hosts plugin block and the customdomains.db data entry shown in the following example:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        hosts /etc/coredns/customdomains.db core {
            fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
  customdomains.db: |
    192.168.122.144 kubestellar.core
kind: ConfigMap
metadata:
  creationTimestamp: "2023-10-24T19:18:05Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "10602"
  uid: 3930c18f-23e8-4d0b-9ddf-658fdf3cb20f
Edit the coredns Deployment on cluster1 and cluster2, adding the customdomains.db key/path to the coredns ConfigMap volume (see the sketch after these commands):
KUBECONFIG=~/.kube/config kubectl edit -n kube-system \
deployment coredns --context=ks-edge-cluster1
KUBECONFIG=~/.kube/config kubectl edit -n kube-system \
deployment coredns --context=ks-edge-cluster2
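As a sketch of where the key/path goes (the surrounding fields mirror the default kubeadm coredns Deployment and may differ slightly in your clusters), the coredns ConfigMap volume should end up looking like this:
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          - key: customdomains.db   # added
            path: customdomains.db  # added
          name: coredns
        name: config-volume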
Restart coredns pods:
KUBECONFIG=~/.kube/config kubectl rollout restart \
-n kube-system deployment/coredns --context=ks-edge-cluster1
KUBECONFIG=~/.kube/config kubectl rollout restart \
-n kube-system deployment/coredns --context=ks-edge-cluster2
(adapted from "The Cluster-wise solution" at https://stackoverflow.com/questions/37166822/is-there-a-way-to-add-arbitrary-records-to-kube-dns)
1. Deploy the KUBESTELLAR Core component#
deploy the KubeStellar Core components on the ks-core Kind cluster you created in the pre-req section above
KUBECONFIG=~/.kube/config kubectl config use-context ks-core
KUBECONFIG=~/.kube/config kubectl create namespace kubestellar
helm repo add kubestellar https://helm.kubestellar.io
helm repo update
KUBECONFIG=~/.kube/config helm install kubestellar/kubestellar-core \
--set EXTERNAL_HOSTNAME="kubestellar.core" \
--set EXTERNAL_PORT=1119 \
--namespace kubestellar \
--generate-name
important: You must add 'kubestellar.core' to your /etc/hosts file with the local network IP address (e.g., 192.168.x.y) where your ks-core Kind cluster is running. DO NOT use 127.0.0.1, because the ks-edge-cluster1 and ks-edge-cluster2 Kind clusters map 127.0.0.1 to their own local kubernetes cluster, not the ks-core Kind cluster.
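For example, if the machine running the ks-core Kind cluster has the LAN address 192.168.122.144 (the address used in the examples above), the /etc/hosts entry would be:
192.168.122.144 kubestellar.core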
run the following to wait for KubeStellar to be ready to take requests:
echo -n 'Waiting for KubeStellar to be ready'
while ! KUBECONFIG=~/.kube/config kubectl exec $(KUBECONFIG=~/.kube/config kubectl get pod \
--selector=app=kubestellar -o jsonpath='{.items[0].metadata.name}' -n kubestellar) \
-n kubestellar -c init -- ls /home/kubestellar/ready &> /dev/null; do
sleep 10
echo -n "."
done
echo; echo; echo "KubeStellar is now ready to take requests"
Check the initialization log to see if there are any obvious errors:
KUBECONFIG=~/.kube/config kubectl config use-context ks-core
kubectl logs \
$(kubectl get pod --selector=app=kubestellar \
-o jsonpath='{.items[0].metadata.name}' -n kubestellar) \
-n kubestellar -c init
2. Install KUBESTELLAR's user commands and kubectl plugins#
The following commands download the kcp and KubeStellar executables into subdirectories of your current working directory.
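Once the user commands and plugins are installed, you can confirm they are visible on your PATH with kubectl's built-in plugin listing; you should see entries for the KubeStellar and ws (kcp workspace) plugins, assuming both were installed as kubectl plugins:
kubectl plugin list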
3. View your KUBESTELLAR Core Space environment#
Let's store the KubeStellar kubeconfig to a file we can reference later and then check out the Spaces KubeStellar created during installation.
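A sketch of one way to do this, assuming the helm chart publishes the externally reachable kubeconfig in a secret named kubestellar (key external.kubeconfig) in the kubestellar namespace; check the chart for your release if the names differ:
# extract the externally reachable kubeconfig from ks-core (secret and key names are assumptions)
KUBECONFIG=~/.kube/config kubectl --context ks-core get secret kubestellar -n kubestellar \
  -o jsonpath='{.data.external\.kubeconfig}' | base64 --decode > ks-core.kubeconfig
# list the Spaces (kcp workspaces) KubeStellar created during installation
KUBECONFIG=ks-core.kubeconfig kubectl ws root
KUBECONFIG=ks-core.kubeconfig kubectl ws tree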
Did you receive the following error?
Error: Get "https://some_hostname.some_domain_name:1119/clusters/root/apis/tenancy.kcp.io/v1alpha1/workspaces": dial tcp: lookup some_hostname.some_domain_name on x.x.x.x: no such host
This commonly happens when the port you chose is already occupied and/or you set EXTERNAL_HOSTNAME to something other than "localhost" so that you can reach your KubeStellar Core from another host. Check the following:
Check whether the port specified in the ks-core Kind cluster configuration and the EXTERNAL_PORT helm value are occupied by another application:
1. is the 'hostPort' specified in the ks-core Kind cluster configuration occupied by another process? If so, delete the ks-core Kind cluster and create it again using an available port for your 'hostPort' value
2. if you change the port for your ks-core 'hostPort', remember to also use that port as the helm 'EXTERNAL_PORT' value
Check that your EXTERNAL_HOSTNAME helm value is reachable via DNS:
1. use nslookup to confirm that your EXTERNAL_HOSTNAME resolves (see the example after this list)
2. make sure your EXTERNAL_HOSTNAME and associated IP address are listed in your /etc/hosts file.
3. make sure the IP address is associated with the system where you have deployed the ks-core Kind cluster
if there is nothing obvious, open a bug report and we can help you out
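For example (kubestellar.core stands in for whatever EXTERNAL_HOSTNAME you chose):
nslookup kubestellar.core      # checks DNS resolution
getent hosts kubestellar.core  # on Linux, also resolves entries from /etc/hosts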
4. Install KUBESTELLAR Syncers on your Edge Clusters#
Prepare the KubeStellar Syncers with kubestellar prep-for-cluster for ks-edge-cluster1 and ks-edge-cluster2, and then apply the files that kubestellar prep-for-cluster prepared for you.
important: make sure you created the Kind clusters for ks-edge-cluster1 and ks-edge-cluster2 in the pre-req section above before proceeding
KUBECONFIG=ks-core.kubeconfig kubectl kubestellar prep-for-cluster --imw root:imw1 ks-edge-cluster1 \
env=ks-edge-cluster1 \
location-group=edge #add ks-edge-cluster1 and ks-edge-cluster2 to the same group
KUBECONFIG=ks-core.kubeconfig kubectl kubestellar prep-for-cluster --imw root:imw1 ks-edge-cluster2 \
env=ks-edge-cluster2 \
location-group=edge #add ks-edge-cluster1 and ks-edge-cluster2 to the same group
#apply ks-edge-cluster1 syncer
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 apply -f ks-edge-cluster1-syncer.yaml
sleep 3
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get pods -A | grep kubestellar #check if syncer deployed to ks-edge-cluster1 correctly
#apply ks-edge-cluster2 syncer
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 apply -f ks-edge-cluster2-syncer.yaml
sleep 3
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get pods -A | grep kubestellar #check if syncer deployed to ks-edge-cluster2 correctly
5. Deploy an Apache Web Server to ks-edge-cluster1 and ks-edge-cluster2#
KubeStellar's helm chart automatically creates a Workload Management Workspace (WMW) for you to store kubernetes workload descriptions and KubeStellar control objects in. The automatically created WMW is at root:wmw1.
Create an EdgePlacement control object to direct where your workload runs using the 'location-group=edge' label selector. This label selector's value ensures your workload is directed to both clusters, as they were labeled with 'location-group=edge' when you issued the 'kubestellar prep-for-cluster' command above.
In the root:wmw1 workspace, create the following EdgePlacement object:
KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: my-first-edge-placement
spec:
  locationSelectors:
  - matchLabels: {"location-group":"edge"}
  downsync:
  - apiGroup: ""
    resources: [ configmaps ]
    namespaces: [ my-namespace ]
    objectNames: [ "*" ]
  - apiGroup: apps
    resources: [ deployments ]
    namespaces: [ my-namespace ]
    objectNames: [ my-first-kubestellar-deployment ]
  - apiGroup: apis.kcp.io
    resources: [ apibindings ]
    namespaceSelectors: []
    objectNames: [ "bind-kubernetes", "bind-apps" ]
EOF
check if your EdgePlacement was applied to the kubestellar namespace on ks-core correctly:
KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl get edgeplacements -n kubestellar -o yaml
Now, apply the HTTP server workload definition into the WMW on ks-core. Note that the namespace matches the namespaces listed in the downsync rules of the EdgePlacement (my-first-edge-placement) object created above.
KUBECONFIG=ks-core.kubeconfig kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: my-namespace
  name: httpd-htdocs
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <body>
        This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
      </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: my-namespace
  name: my-first-kubestellar-deployment
spec:
  selector: {matchLabels: {app: common}}
  template:
    metadata:
      labels: {app: common}
    spec:
      containers:
      - name: httpd
        image: library/httpd:2.4
        ports:
        - name: http
          containerPort: 80
          hostPort: 8081
          protocol: TCP
        volumeMounts:
        - name: htdocs
          readOnly: true
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: htdocs
        configMap:
          name: httpd-htdocs
          optional: false
EOF
check if your configmap and deployment were applied to the my-namespace namespace on ks-core correctly:
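For example, while still targeting the root:wmw1 workspace:
KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl get deployments,configmaps -n my-namespace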
6. View the Apache Web Server running on ks-edge-cluster1 and ks-edge-cluster2#
Now, let's check that the deployment was created in the kind ks-edge-cluster1 cluster (it may take up to 30 seconds to appear):
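For example:
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get deployments -A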
you should see output including:
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
my-namespace my-first-kubestellar-deployment 1/1 1 1 6m48s
And, check the ks-edge-cluster2 kind cluster for the same:
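For example:
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get deployments -A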
you should see output including:
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
my-namespace my-first-kubestellar-deployment 1/1 1 1 7m54s
Finally, let's check that the workload is working in both clusters: For ks-edge-cluster1:
while [[ $(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get pod \
-l "app=common" -n my-namespace -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do
sleep 5;
done;
curl http://localhost:8094
you should see the output:
<!DOCTYPE html>
<html>
<body>
This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
</body>
</html>
For ks-edge-cluster2:
while [[ $(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get pod \
-l "app=common" -n my-namespace -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do
sleep 5;
done;
curl http://localhost:8096
you should see the output:
<!DOCTYPE html>
<html>
<body>
This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
</body>
</html>
If you do not see the expected output, check the KubeStellar Syncer logs. On the ks-edge-cluster1 Kind cluster:
KUBECONFIG=~/.kube/config kubectl config use-context ks-edge-cluster1
ks_ns_edge_cluster1=$(KUBECONFIG=~/.kube/config kubectl get namespaces \
-o custom-columns=:metadata.name | grep 'kubestellar-')
KUBECONFIG=~/.kube/config kubectl logs pod/$(kubectl get pods -n $ks_ns_edge_cluster1 \
-o custom-columns=:metadata.name | grep 'kubestellar-') -n $ks_ns_edge_cluster1
and on the ks-edge-cluster2 Kind cluster:
KUBECONFIG=~/.kube/config kubectl config use-context ks-edge-cluster2
ks_ns_edge_cluster2=$(KUBECONFIG=~/.kube/config kubectl get namespaces \
-o custom-columns=:metadata.name | grep 'kubestellar-')
KUBECONFIG=~/.kube/config kubectl logs pod/$(kubectl get pods -n $ks_ns_edge_cluster2 \
-o custom-columns=:metadata.name | grep 'kubestellar-') -n $ks_ns_edge_cluster2
If you see a connection refused error in either KubeStellar Syncer log:
E1021 21:22:58.000110 1 reflector.go:138] k8s.io/client-go@v0.0.0-20230210192259-aaa28aa88b2d/tools/cache/reflector.go:215: Failed to watch *v2alpha1.EdgeSyncConfig: failed to list *v2alpha1.EdgeSyncConfig: Get "https://kubestellar.core:1119/apis/edge.kubestellar.io/v2alpha1/edgesyncconfigs?limit=500&resourceVersion=0": dial tcp 127.0.0.1:1119: connect: connection refused
it means that your /etc/hosts does not have a proper IP address (NOT 127.0.0.1) listed for the kubestellar.core hostname. Once there is a valid address in /etc/hosts for kubestellar.core, the syncer will begin to work properly and pull the namespace, deployment, and configmap from this instruction set.
Mac OS users may also experience issues when stealth mode is enabled (System Settings > Firewall). If you decide to disable this mode temporarily, please be sure to re-enable it once you are finished with this guide.
7. Check the status of your Apache Server on ks-edge-cluster1 and ks-edge-cluster2#
what's next...
- how to upsync a resource
- how to create, but not overwrite/update, a synchronized resource
How to use an existing KUBESTELLAR environment#
1. Install KUBESTELLAR's user commands and kubectl plugins#
The following commands download the kcp and KubeStellar executables into subdirectories of your current working directory.
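Once the user commands and plugins are installed, you can confirm they are visible on your PATH with kubectl's built-in plugin listing; you should see entries for the KubeStellar and ws (kcp workspace) plugins, assuming both were installed as kubectl plugins:
kubectl plugin list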
2. View your KUBESTELLAR Core Space environment#
Let's store the KubeStellar kubeconfig to a file we can reference later and then check out the Spaces KubeStellar created during installation.
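A sketch of one way to do this, assuming the helm chart publishes the externally reachable kubeconfig in a secret named kubestellar (key external.kubeconfig) in the kubestellar namespace; check the chart for your release if the names differ:
# extract the externally reachable kubeconfig from ks-core (secret and key names are assumptions)
KUBECONFIG=~/.kube/config kubectl --context ks-core get secret kubestellar -n kubestellar \
  -o jsonpath='{.data.external\.kubeconfig}' | base64 --decode > ks-core.kubeconfig
# list the Spaces (kcp workspaces) KubeStellar created during installation
KUBECONFIG=ks-core.kubeconfig kubectl ws root
KUBECONFIG=ks-core.kubeconfig kubectl ws tree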
Did you receive the following error?
Error: Get "https://some_hostname.some_domain_name:1119/clusters/root/apis/tenancy.kcp.io/v1alpha1/workspaces": dial tcp: lookup some_hostname.some_domain_name on x.x.x.x: no such host
This commonly happens when the port you chose is already occupied and/or you set EXTERNAL_HOSTNAME to something other than "localhost" so that you can reach your KubeStellar Core from another host. Check the following:
Check whether the port specified in the ks-core Kind cluster configuration and the EXTERNAL_PORT helm value are occupied by another application:
1. is the 'hostPort' specified in the ks-core Kind cluster configuration occupied by another process? If so, delete the ks-core Kind cluster and create it again using an available port for your 'hostPort' value
2. if you change the port for your ks-core 'hostPort', remember to also use that port as the helm 'EXTERNAL_PORT' value
Check that your EXTERNAL_HOSTNAME helm value is reachable via DNS:
1. use nslookup to confirm that your EXTERNAL_HOSTNAME resolves (see the example after this list)
2. make sure your EXTERNAL_HOSTNAME and associated IP address are listed in your /etc/hosts file.
3. make sure the IP address is associated with the system where you have deployed the ks-core Kind cluster
if there is nothing obvious, open a bug report and we can help you out
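For example (kubestellar.core stands in for whatever EXTERNAL_HOSTNAME you chose):
nslookup kubestellar.core      # checks DNS resolution
getent hosts kubestellar.core  # on Linux, also resolves entries from /etc/hosts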
3. Deploy an Apache Web Server to ks-edge-cluster1 and ks-edge-cluster2#
KubeStellar's helm chart automatically creates a Workload Management Workspace (WMW) for you to store kubernetes workload descriptions and KubeStellar control objects in. The automatically created WMW is at root:wmw1.
Create an EdgePlacement control object to direct where your workload runs using the 'location-group=edge' label selector. This label selector's value ensures your workload is directed to both clusters, as they were labeled with 'location-group=edge' when you issued the 'kubestellar prep-for-cluster' command above.
In the root:wmw1 workspace, create the following EdgePlacement object:
KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl apply -f - <<EOF
apiVersion: edge.kubestellar.io/v2alpha1
kind: EdgePlacement
metadata:
  name: my-first-edge-placement
spec:
  locationSelectors:
  - matchLabels: {"location-group":"edge"}
  downsync:
  - apiGroup: ""
    resources: [ configmaps ]
    namespaces: [ my-namespace ]
    objectNames: [ "*" ]
  - apiGroup: apps
    resources: [ deployments ]
    namespaces: [ my-namespace ]
    objectNames: [ my-first-kubestellar-deployment ]
  - apiGroup: apis.kcp.io
    resources: [ apibindings ]
    namespaceSelectors: []
    objectNames: [ "bind-kubernetes", "bind-apps" ]
EOF
check if your EdgePlacement was applied to the kubestellar namespace on ks-core correctly:
KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl get edgeplacements -n kubestellar -o yaml
Now, apply the HTTP server workload definition into the WMW on ks-core. Note that the namespace matches the namespaces listed in the downsync rules of the EdgePlacement (my-first-edge-placement) object created above.
KUBECONFIG=ks-core.kubeconfig kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: my-namespace
  name: httpd-htdocs
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <body>
        This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
      </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: my-namespace
  name: my-first-kubestellar-deployment
spec:
  selector: {matchLabels: {app: common}}
  template:
    metadata:
      labels: {app: common}
    spec:
      containers:
      - name: httpd
        image: library/httpd:2.4
        ports:
        - name: http
          containerPort: 80
          hostPort: 8081
          protocol: TCP
        volumeMounts:
        - name: htdocs
          readOnly: true
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: htdocs
        configMap:
          name: httpd-htdocs
          optional: false
EOF
check if your configmap and deployment were applied to the my-namespace namespace on ks-core correctly:
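For example, while still targeting the root:wmw1 workspace:
KUBECONFIG=ks-core.kubeconfig kubectl ws root:wmw1
KUBECONFIG=ks-core.kubeconfig kubectl get deployments,configmaps -n my-namespace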
4. View the Apache Web Server running on ks-edge-cluster1 and ks-edge-cluster2#
Now, let's check that the deployment was created in the kind ks-edge-cluster1 cluster (it may take up to 30 seconds to appear):
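For example:
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get deployments -A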
you should see output including:
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
my-namespace my-first-kubestellar-deployment 1/1 1 1 6m48s
And, check the ks-edge-cluster2 kind cluster for the same:
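For example:
KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get deployments -A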
you should see output including:
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
my-namespace my-first-kubestellar-deployment 1/1 1 1 7m54s
Finally, let's check that the workload is working in both clusters: For ks-edge-cluster1:
while [[ $(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster1 get pod \
-l "app=common" -n my-namespace -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do
sleep 5;
done;
curl http://localhost:8094
you should see the output:
<!DOCTYPE html>
<html>
<body>
This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
</body>
</html>
For ks-edge-cluster2:
while [[ $(KUBECONFIG=~/.kube/config kubectl --context ks-edge-cluster2 get pod \
-l "app=common" -n my-namespace -o jsonpath='{.items[0].status.phase}') != "Running" ]]; do
sleep 5;
done;
curl http://localhost:8096
you should see the output:
<!DOCTYPE html>
<html>
<body>
This web site is hosted on ks-edge-cluster1 and ks-edge-cluster2.
</body>
</html>
If you do not see the expected output, check the KubeStellar Syncer logs. On the ks-edge-cluster1 Kind cluster:
KUBECONFIG=~/.kube/config kubectl config use-context ks-edge-cluster1
ks_ns_edge_cluster1=$(KUBECONFIG=~/.kube/config kubectl get namespaces \
-o custom-columns=:metadata.name | grep 'kubestellar-')
KUBECONFIG=~/.kube/config kubectl logs pod/$(kubectl get pods -n $ks_ns_edge_cluster1 \
-o custom-columns=:metadata.name | grep 'kubestellar-') -n $ks_ns_edge_cluster1
and on the ks-edge-cluster2 Kind cluster:
KUBECONFIG=~/.kube/config kubectl config use-context ks-edge-cluster2
ks_ns_edge_cluster2=$(KUBECONFIG=~/.kube/config kubectl get namespaces \
-o custom-columns=:metadata.name | grep 'kubestellar-')
KUBECONFIG=~/.kube/config kubectl logs pod/$(kubectl get pods -n $ks_ns_edge_cluster2 \
-o custom-columns=:metadata.name | grep 'kubestellar-') -n $ks_ns_edge_cluster2
If you see a connection refused error in either KubeStellar Syncer log:
E1021 21:22:58.000110 1 reflector.go:138] k8s.io/client-go@v0.0.0-20230210192259-aaa28aa88b2d/tools/cache/reflector.go:215: Failed to watch *v2alpha1.EdgeSyncConfig: failed to list *v2alpha1.EdgeSyncConfig: Get "https://kubestellar.core:1119/apis/edge.kubestellar.io/v2alpha1/edgesyncconfigs?limit=500&resourceVersion=0": dial tcp 127.0.0.1:1119: connect: connection refused
it means that your /etc/hosts does not have a proper IP address (NOT 127.0.0.1) listed for the kubestellar.core hostname. Once there is a valid address in /etc/hosts for kubestellar.core, the syncer will begin to work properly and pull the namespace, deployment, and configmap from this instruction set.
Mac OS users may also experience issues when stealth mode is enabled (System Settings > Firewall). If you decide to disable this mode temporarily, please be sure to re-enable it once you are finished with this guide.
5. Check the status of your Apache Server on ks-edge-cluster1 and ks-edge-cluster2#
Every object subject to downsync or upsync has a full per-WEC copy in the core. These include reported state from the WECs. If you are using release 0.10 or later of KubeStellar then you can list these copies of your httpd Deployment objects with the following command.
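The exact listing command depends on your release; here is a hedged sketch, assuming the per-WEC copies live in the mailbox workspaces under root:espw (the workspace path and loop are illustrative, not the official procedure):
# switch to the workspace that holds the per-WEC mailbox workspaces (assumed to be root:espw)
KUBECONFIG=ks-core.kubeconfig kubectl ws root:espw
# print the copy of the httpd Deployment held in each mailbox workspace
for mb in $(KUBECONFIG=ks-core.kubeconfig kubectl get workspaces -o jsonpath='{.items[*].metadata.name}'); do
  KUBECONFIG=ks-core.kubeconfig kubectl ws "root:espw:${mb}"
  KUBECONFIG=ks-core.kubeconfig kubectl get deployments -n my-namespace -o yaml
done
The output for the two copies will look something like this: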
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ... (lots of other details) ...
  name: my-first-kubestellar-deployment
  namespace: my-namespace
spec:
  ... (the spec) ...
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-10-27T07:00:19Z"
    lastUpdateTime: "2023-10-27T07:00:19Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2023-10-27T07:00:19Z"
    lastUpdateTime: "2023-10-27T07:00:19Z"
    message: ReplicaSet "my-first-kubestellar-deployment-76f6fc4cfc" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 618
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    ... (lots of other details) ...
  name: my-first-kubestellar-deployment
  namespace: my-namespace
spec:
  ... (the spec) ...
status:
  ... (another happy status) ...